We build AI chips
with the highest
intelligence density
Run
every network
everywhere
same network
same performance
smaller footprint
(our chip vs. a competitor's chip)
Our mission is to create AI chips with the smallest footprint without compromising on raw compute power. This demands extraordinary compute density.
We will enable new AI-driven devices and remove the roadblocks that today hold back the mass adoption of products such as smart glasses.
To achieve the highest density possible, we need to overcome traditional scaling bottlenecks.
This means we abandon the reliance on conventional semiconductor technology scaling much sooner than everyone else, while continuing to drive Moore's law.
We grow beyond conventional 2D silicon, scaling compute volume efficiently while saving area.
Our favorite analogy: we are the opposite of a penguin. Penguins are compact to keep the heat in; we want chips that are more compact yet still get the heat out.
This means that the more energy-efficient the computation, the higher the achievable compute density.
The first step toward solving the thermal density problem is analog in-memory computing.
This means we perform the required multiplications directly where the data resides, instead of moving it to the compute units for every operation.
This works because most of the data needed to evaluate a neural network stays the same: per operation, there is drastically less changing data (activations) than static data (weights). At its core, running common deep learning models boils down to vector-matrix multiplications (don't worry, our technology covers modern architectures as well).
Better yet, a multi-bit analog memory cell stores several bits of a weight in a single device, which saves a lot of space.
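To see why this pays off, here is a minimal NumPy sketch; the layer size and bit width are illustrative assumptions, not our actual architecture. The point is the asymmetry: the weight matrix is written once and stays put, while only a comparatively tiny activation vector moves per inference.

```python
import numpy as np

# Minimal sketch (illustrative sizes, not our actual architecture).
# One inference step of a layer is a vector-matrix multiplication:
# the weight matrix W is static across all inputs, while the
# activation vector x changes with every inference.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))  # static weights, written once
x = rng.standard_normal(1024)          # changing activations, streamed per step

y = x @ W  # the core operation of deep learning inference

# Data-movement asymmetry: the static data outweighs the changing
# data by the layer width (1024x here). Keeping W in place and moving
# only x is exactly what analog in-memory computing exploits.
print(W.size // x.size)  # -> 1024

# Multi-bit cells: a 4-bit analog cell can hold one of 16 weight
# levels, so one device replaces four binary memory cells.
W_q = np.clip(np.round(W / W.std()) + 8, 0, 15)  # weights mapped to 16 levels
```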
Of course, we are not the only ones to recognize these facts. But the solutions available today, even for analog in-memory computing, are at best sufficient to solve the overheating problem for one or a few compute layers.
At the core of our differentiation from all existing approaches is CapRAM™: an innovative semiconductor device that stores the weights for matrix multiplication as a variable capacitance.
Because CapRAM™ is based on capacitive read-out, its computation noise is fundamentally lower than in any other technology. And since the noise level sets the upper limit on energy efficiency, lower noise translates directly into higher efficiency.
Whether you use SRAM (which does not lend itself to monolithic stacking), emerging memories such as phase-change memory (PCM), or other resistive technologies, none will reach CapRAM™'s energy efficiency, and thus its intelligence density.
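To make the capacitive read-out idea concrete, here is a toy model built only on the textbook relation Q = C·V; the cell values and circuit are assumptions for illustration, not CapRAM™'s actual design. Storing each weight as a capacitance and applying each input as a voltage lets a shared node accumulate the dot product as charge.

```python
import numpy as np

# Toy model of a charge-domain multiply-accumulate (an illustrative
# assumption, not the actual CapRAM circuit). Each cell stores a
# weight as a capacitance C_i; each input arrives as a voltage V_i.
# From Q = C * V, the charge collected on a shared summation node is
#   Q_total = sum_i(C_i * V_i)
# which is precisely the dot product at the heart of inference.
C = np.array([1.0, 2.5, 0.5, 3.0]) * 1e-15  # weights as capacitances (farads)
V = np.array([0.2, 0.1, 0.4, 0.3])          # inputs as voltages (volts)

Q_total = np.sum(C * V)  # charge-domain multiply-accumulate
print(Q_total)           # -> ~1.55e-15 coulombs
```

One design note: an ideal capacitor carries no static current, so a capacitive cell avoids the constant resistive losses of current-based read-out schemes.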
Even though SEMRON was only founded in 2020, Kai and Aron have known each other for over ten years. As students at the Technical University of Dresden, they started discussing what an ideal device for deep learning might look like, setting aside all prior assumptions. In 2015, while still studying, they began developing CapRAM™ in various laboratories.
Karl-Heinz probably knows more about analog semiconductor process development than anyone else. With him on board, you can be sure we will master any semiconductor challenge.
A retired Intel Fellow (one of only a few worldwide), Greg is our analog IC design genius. As a designer at SEMRON, you will get support and guidance from Greg and benefit from his vast experience in solving technical challenges that appear insurmountable.
Search for the world's leading researchers in reinforcement learning and you will immediately come across Pieter. He not only opens doors to the who's who of AI research, but also brings first-hand experience leading start-ups.
Get in touch
with us to join our
alpha customer program.
We'll reach out to you
if you meet our
requirements.