we build ai chips
with the highest
intelligence density

Drawing of SEMRON's chip with four layers.

run
every network
everywhere

same network
same performance
smaller footprint

Simplistic drawing of a multi-layer chip: our chip.

Simplistic drawing of a one-layer chip: competitor's chip.


edge ai beyond energy efficiency

Our mission is to create AI chips with the smallest footprint without compromising on raw compute power. For this, we need extraordinary compute density.

We will enable new AI-driven devices (e.g. smart glasses) and remove the roadblocks that today hold back the mass adoption of these products.

3d enables the future

To achieve the highest density possible, we need to overcome traditional scaling bottlenecks.

This means we ditch the reliance on conventional semiconductor technology scaling a lot sooner than everybody else, and keep driving Moore's law.

Drawing of SEMRON's chip with four layers.

We're growing beyond conventional 2D silicon, efficiently scaling compute volume while saving area.

chip = 1 / penguin

Our favorite analogy: we are the opposite of a penguin. A penguin is compact to keep the heat in; we want our chips to be ever more compact and still get the heat out.

This means that the more efficient the computation, the higher the achievable compute density.

it starts w/ in-memory computing

The first step to solve the thermal density problem is analog in-memory computing.

This means that we do the required multiplications directly where the data is, instead of moving it to the computational units every time.

This works because most of the data needed to compute a neural network's output stays the same: there is drastically less changing data per operation than there is static data. At its core, running common deep learning models is mainly about vector-matrix multiplications (don't worry, our technology covers modern architectures as well).

Drawing of a crossbar in SEMRON's chip.

Better yet, a multi-bit analog memory cell lets us save a lot of space.
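
To make that imbalance concrete, here is a minimal NumPy sketch (the layer size is illustrative and not tied to any SEMRON product): the weight matrix is the static data that can stay inside the memory, while only a small activation vector changes from one operation to the next.

```python
import numpy as np

# Toy fully connected layer: the weight matrix W is static across inferences,
# only the activation vector x changes per operation.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))  # static data: stored once, reused for every input
x = rng.standard_normal(1024)          # changing data: one small vector per operation

y = W @ x  # the vector-matrix multiplication at the core of inference

print("static weight values: ", W.size)  # ~1,000,000 values that never have to move
print("changing input values:", x.size)  # ~1,000 values that change each time
```

In a von Neumann setup, all of W would be shuttled to the compute units over and over; in-memory computing leaves it in place and sends only x.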

what sets us apart

Of course, we are not the only ones to recognize these basic facts. But we noticed that other available solutions, even analog in-memory computing ones, are merely sufficient to solve the overheating problem of one or a few compute layers.

At the core of our differentiation from all existing approaches is our CapRAM™ technology, an innovative semiconductor device that stores the weights for matrix multiplication as a variable capacitance.

Drawing of SEMRON's capRAM device.

Since CapRAM™ is based on capacitive read-out, the noise during calculation is fundamentally lower than in all other technologies. And because the noise level sets the upper limit for energy efficiency, lower noise translates directly into higher efficiency.

Whether SRAM (which does not lend itself to monolithic growth), emerging memories such as phase-change memory (PCM), or other resistive technologies are used, they will never reach CapRAM™'s energy efficiency, and thus its intelligence density.
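
For intuition only, here is a tiny numerical sketch of charge-domain multiply-accumulate in general, not of SEMRON's actual CapRAM™ circuit: each weight is encoded as a capacitance, each input as a voltage, and the charge collected on a shared read-out line reproduces the dot product. The unit values and the signed-capacitance shortcut are illustrative assumptions (real designs typically handle signs with differential cells).

```python
import numpy as np

# Conceptual charge-domain multiply-accumulate, for illustration only:
# a weight is stored as a programmable capacitance C_i, the input arrives as a
# voltage step V_i, and the per-cell charge Q_i = C_i * V_i is summed on a
# shared read-out line, producing the whole dot product in one step.
rng = np.random.default_rng(1)
weights = rng.uniform(-1, 1, 256)  # weights of one output neuron
inputs = rng.uniform(0, 1, 256)    # activations for one inference

C_UNIT = 1e-15  # assumed unit capacitance (1 fF), purely illustrative
V_UNIT = 0.1    # assumed voltage step per unit input, purely illustrative

capacitances = weights * C_UNIT          # weight encoded as (signed) capacitance
voltages = inputs * V_UNIT               # input encoded as voltage
charges = capacitances * voltages        # Q_i = C_i * V_i per cell
q_total = charges.sum()                  # charge summation on the shared line

result = q_total / (C_UNIT * V_UNIT)     # scale the analog quantity back to numbers
print(np.allclose(result, weights @ inputs))  # True: the summed charge equals the MAC
```

As the paragraphs above argue, the noise of this read-out sets the ceiling for energy efficiency, which is why a quiet capacitive read-out matters.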

Founders

Even though SEMRON was only founded in 2020, Kai and Aron have known each other for over ten years. As students at the Technical University of Dresden, they began discussing what an ideal device for deep learning might look like, setting aside all previous assumptions. In 2015, while still studying, they began developing CapRAM™ in various laboratories.

Aron Kirschen
CEO & Co-Founder
Kai-Uwe Demasius
CTO & Co-Founder

Advisors

Karl-Heinz Stegemann

Karl-Heinz probably knows more than anybody else about semiconductor process development in the analog domain. With him on board, you can be sure we will master any semiconductor challenge.

Greg Taylor

As a retired Intel Fellow (one of only a few worldwide), Greg is our analog IC design genius. If you are a designer at SEMRON, you will get support and guidance from Greg and benefit from his vast and deep experience in solving technical challenges that appear insurmountable.

Pieter Abbeel

If you search for the world's leading researcher in reinforcement learning, you will immediately come across Pieter. He not only opens doors to the who's who of AI research, but also has a lot of experience in leading start-ups himself.

Jeronimo Castrillon

Jeronimo leads an internationally renowned compiler team with pioneering research that bridges high-level programming and emerging computing systems. His wide network and unique research experience with innovative hardware architectures allow SEMRON to iterate at maximum speed.

Get in touch
with us to join our
alpha customer program.

We'll reach out to you
if you meet our
requirements.
