Congratulations to both Nervana Systems and Intel on what will be an extraordinary collaboration to power the deep learning revolution.
In 1969, nearly half a century ago, Intel delivered its first product: a revolutionary 64-bit(!) SRAM. The little-known Intel 3101 was one of many firsts for the fledgling Mountain View startup; the first DRAM and the first microprocessor soon followed. Since then, Intel devices have become the silicon substrate on which modern civilization runs.
At the dawn of a new era in computing, it is only fitting that Nervana and Intel should join forces. The Nervana Engine, Nervana's chip designed specifically to accelerate machine learning in the cloud, will have more than a billion times the memory of the Intel 3101 and terabits of I/O bandwidth when it launches in early 2017. It is in many ways as revolutionary as the chips Intel created in the first days of modern computing.
Nervana’s leadership team has a unique blend of hardware, software, and neuroscience expertise. They have a clear perspective on exactly what is required to build a state-of-the-art hardware accelerator, a real appreciation of the full software stack from firmware to cloud, and an unusual concentration of talent in the neuroscience-inspired algorithms at the heart of the deep learning revolution. When we met them a little over a year ago, we knew that the deep learning revolution would have broad implications for artificial intelligence and the future of connected devices. We had no choice but to invest!
The deep learning revolution
Nervana’s vision of hardware-accelerated deep learning as a service requires expertise at every level of the technology stack, from custom silicon to algorithm development, from neuroscience to the cloud.
Neon, Nervana’s open-source deep learning library, optimizes primitives down to the GPU firmware. It is currently the world’s fastest and is already being used by customers to quickly build, train, and deploy AI solutions in medicine, genomics, finance, agriculture, automotive, and other industries.
The Nervana Engine is a hardware accelerator designed specifically for deep learning, and it will be available early next year. It offers an order-of-magnitude increase in compute density over GPUs and will integrate seamlessly with the Neon API running in the Nervana Cloud.
At the boundary of hardware and software
The co-evolution of hardware and software is especially evident in deep learning. GPUs were created to render graphics, not to run deep learning algorithms, and while they are faster than conventional CPUs, they are far from optimal. In fact, one of Nervana’s early technical breakthroughs was to write optimized GPU code better suited to machine learning, yielding a 3x improvement over the GPU vendor’s own library.
But to get another order-of-magnitude improvement, the hardware itself needs to evolve. The Nervana Engine’s more concise datatypes, high-bandwidth memory for storing coefficients (7x more than a GPU), and high-speed interconnects closely match the requirements of current and emerging algorithms.
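To give a rough feel for why more concise datatypes matter, here is a minimal sketch (not Nervana's actual scheme, and the array shape is an arbitrary illustration) of how halving the width of a floating-point datatype halves the memory and bandwidth a layer's coefficients consume:

```python
import numpy as np

# Hypothetical coefficient matrix for one layer: 1024 x 1024 weights.
coeffs_fp32 = np.ones((1024, 1024), dtype=np.float32)

# The same coefficients stored in a more concise 16-bit datatype.
coeffs_fp16 = coeffs_fp32.astype(np.float16)

print(coeffs_fp32.nbytes // 1024)  # 4096 KiB at 4 bytes per weight
print(coeffs_fp16.nbytes // 1024)  # 2048 KiB at 2 bytes per weight
```

The same principle scales up: narrower datatypes let more coefficients fit in fast on-chip memory and move across interconnects per unit of bandwidth, which is one reason a purpose-built accelerator can outpace general-purpose hardware.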
This investment in hardware gives Nervana a unique platform for learning and inference at scale. Algorithms can now discover features in data automatically at unprecedented speed. These techniques are being continuously applied to new domains and will displace most of the explicitly designed software we use today.
On the horizon
Nervana's powerhouse partnership with Intel will give them access to unequalled technology and experience. We wish them every success and can’t wait to see their work used by millions of people. They are revolutionizing classical computation.
We have loved having Nervana as part of the Playground family, co-located with other portfolio companies in our Palo Alto facility. Their good humor, brilliant insights and exemplary foosball skills will be sorely missed. They will always be welcome guests at our Friday afternoon Playdates.