On March 1, Oskar Mencer (CEO) spoke at the CRIBB (Computational Research in Boston and Beyond) seminar at MIT.
Abstract:
The complexity of a computation is a function of the underlying
representation. We extend this basic concept to consider the
representation of computational problems at the application, model,
architecture, arithmetic and gate levels of computation. In
particular, the first step is to consider and optimize the
discretization of a problem in time, space and value.
Discretization of value is particularly painful, both in physics,
where atomic discretization ruins many nice theories, and in
computation, where most people just blindly use IEEE double-precision
floating point so they do not have to worry about the details, until
they do.
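To make that point concrete, here is a minimal sketch (ours, not from the talk) of how the choice of value discretization alone changes the result of one and the same computation. The quantized_sum helper and the bit widths are illustrative assumptions, not Maxeler tooling:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1_000_000)

# Reference result in IEEE double precision.
ref = float(np.sum(x ** 2))

# The same computation with values held in single precision.
f32 = float(np.sum(x.astype(np.float32) ** 2))

def quantized_sum(values, frac_bits):
    """Sum of squares with each value snapped to a fixed-point grid of
    spacing 2**-frac_bits, the kind of format a dataflow engine can
    implement directly in hardware (hypothetical helper, not an API)."""
    scale = 1 << frac_bits
    q = np.round(values * scale) / scale  # quantize each value
    return float(np.sum(q ** 2))

for bits in (8, 12, 16, 24):
    err = abs(quantized_sum(x, bits) - ref) / ref
    print(f"{bits:2d} fractional bits: relative error {err:.2e}")
print(f"float32: relative error {abs(f32 - ref) / ref:.2e}")
```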
Multiscale Dataflow Computing provides a process by which one can
optimize the discretization of time, space and value for a particular
underlying computer architecture and, in fact, iterate between molding
the computer architecture and discretizing the computational
challenge.
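That iteration can itself be sketched in a few lines: assuming that narrower words free up chip area for more parallel pipelines, one searches for the narrowest value discretization that still meets an accuracy budget. Everything below (the dot-product kernel, the error budget, the bit-width range) is a hypothetical illustration, not Maxeler's actual design flow:

```python
import numpy as np

def kernel(a, b):
    """Stand-in compute kernel: a plain dot product."""
    return float(np.dot(a, b))

def quantize(v, frac_bits):
    """Snap values to a fixed-point grid of spacing 2**-frac_bits."""
    scale = 1 << frac_bits
    return np.round(v * scale) / scale

rng = np.random.default_rng(seed=0)
a = rng.uniform(-1.0, 1.0, 10_000)
b = rng.uniform(-1.0, 1.0, 10_000)

reference = kernel(a, b)   # double-precision baseline
budget = 1e-4              # acceptable relative error (assumed)

# Iterate: shrink the value discretization until the budget is hit;
# the narrowest width that still passes is the one worth building.
chosen = None
for frac_bits in range(24, 3, -1):
    approx = kernel(quantize(a, frac_bits), quantize(b, frac_bits))
    if abs(approx - reference) / abs(reference) <= budget:
        chosen = frac_bits
    else:
        break
print(f"narrowest acceptable discretization: {chosen} fractional bits")
```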
The above methods have achieved 10-50x faster computation per cubic
foot and per watt, resulting in fewer nodes per computation and
therefore exponentially improved reliability and resilience.
Results published by users worldwide include financial modelling
(American Financial Technology Award for the most cutting-edge
technology, 2011), commercial deployment in the Oil & Gas industry
(see Society of Exploration Geophysicists meetings and reports),
weather modelling (reducing the time to compute a Local Area Model
(LAM) from 2 hours to 2 minutes) and even sparse matrix solvers that
cannot be parallelized, which run 20-40x faster.