SambaNova New Silicon Targets Foundation Models
SambaNova has announced a new RDU with 640 MB of SRAM and 688 teraflops of BF16 compute. The second generation of SambaNova systems also brings a multi-die package on a 7 nm process and updated chip interfaces. In addition, researchers at Lawrence Livermore National Laboratory are upgrading their systems to the next generation.
SambaNova’s second-generation RDU has 640 MB of SRAM
The company calls the chip the Reconfigurable Dataflow Unit (RDU). Built with a multi-die packaging scheme, the second-generation part delivers twice the memory bandwidth and double the compute capacity of its predecessor, though the company didn’t go into detail about what kind of memory is used.
The second-generation RDU provides up to six times faster performance on certain AI workloads, according to SambaNova, and the company said it began shipping the unit today. SambaNova claims performance ahead of Nvidia GPUs, which have been the standard for training machine-learning models for more than a decade. However, the RDU’s DRAM is slower than HBM3.
688 teraflops at BF16
SambaNova Systems has released a new artificial intelligence chip that it says substantially outpaces its previous version. The SoftBank-backed company aims to make AI chips that let customers stand up applications in a matter of minutes. It is one of several companies competing in this space, which Nvidia Corp. currently dominates. SambaNova’s strategy is to pair a powerful chip with its own software platform, which lets it offer the hardware for a subscription fee.
The new system is expected to be available in rack systems as well as single-chip configurations. It is built on a 7 nanometer process and has a pair of RDU dies. The company hopes to sell a larger multi-rack system in the future. In addition, SambaNova plans to integrate its technology with traditional clusters so that it can have a deeper impact on application performance.
Changes in chip interfaces
The new SambaNova chip architecture features a new RDU die optimized for the 7 nanometer process. The company plans to keep upgrading the architecture as it scales; having raised more than $1 billion in venture capital, it intends to extend its product line to other process nodes over time. Alongside the new die, SambaNova is also making changes to its chip interfaces.
The Cardinal SN30 RDU, for example, pairs double the memory bandwidth with a larger compute die, yielding a 688-TFLOPS processor with more local memory and a wider range of interfaces. These improvements are expected to drive significant gains in both compute and memory performance.
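As a back-of-envelope check on the dual-die design, the chip-level figures quoted in this article can be split per die. A minimal sketch, assuming the 688-TFLOPS and 640 MB totals divide evenly between the two dies (the article implies but does not state this):

```python
# Back-of-envelope split of the SN30's headline numbers across its two dies.
# Assumption (not stated in the article): resources divide evenly per die.
TOTAL_BF16_TFLOPS = 688   # chip-level BF16 throughput quoted above
TOTAL_SRAM_MB = 640       # chip-level on-die SRAM quoted above
NUM_DIES = 2              # the SN30 uses a dual-die package

per_die_tflops = TOTAL_BF16_TFLOPS / NUM_DIES   # 344.0 TFLOPS per die
per_die_sram_mb = TOTAL_SRAM_MB / NUM_DIES      # 320.0 MB per die

print(f"Per die: {per_die_tflops:.0f} TFLOPS BF16, {per_die_sram_mb:.0f} MB SRAM")
# → Per die: 344 TFLOPS BF16, 320 MB SRAM
```

Under that even-split assumption, each die lands in the same ballpark as a first-generation RDU, which is consistent with the "double the compute capacity" claim above.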
Smaller process nodes
SambaNova Systems recently unveiled its latest silicon at the AI Hardware Summit. The new chip targets foundation models: large language models that can handle multiple tasks. SambaNova will not change the process node with every new generation; instead, the company will modify its architecture as it continues to scale.
SambaNova Systems’ DataScale system is built around the Reconfigurable Dataflow Unit (RDU), a multi-die platform accelerated for data-center processing. The new Cardinal SN30 RDU uses a dual-die architecture and offers twice the memory bandwidth of the previous generation, delivering up to six times faster performance on certain workloads.
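To put 688 TFLOPS in context for foundation-model training, a rough sketch using the common ≈6 × parameters × tokens estimate of training FLOPs. The model size, token count, and utilization figure below are illustrative assumptions, not SambaNova numbers:

```python
# Rough training-time estimate at the SN30's headline BF16 rate.
# All workload numbers below are illustrative assumptions, not vendor figures.
params = 13e9          # hypothetical 13B-parameter foundation model
tokens = 300e9         # hypothetical 300B training tokens
utilization = 0.4      # assumed fraction of peak FLOPS actually sustained

peak_flops = 688e12                  # 688 TFLOPS BF16 (from the article)
total_flops = 6 * params * tokens    # common ~6*N*D training-FLOP estimate
seconds = total_flops / (peak_flops * utilization)
days = seconds / 86400

print(f"~{days:.0f} days on a single RDU at {utilization:.0%} utilization")
```

The point of the sketch is scale, not precision: training a model of this size on one chip takes years, which is why the article's multi-rack systems and cluster integration matter.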
SambaNova, a California-based AI startup, has raised more than $1 billion across four rounds of venture funding since its founding in 2017. Its investors include Google Ventures, Intel Capital, SoftBank, BlackRock, and Walden International. While it hasn’t disclosed specific customers, its DataScale platform is already in use at Lawrence Livermore National Laboratory, which is integrating the system’s AI capabilities into its simulation and modeling applications.
The company says it plans to expand its AI market through a subscription model aimed at enterprises without deep GPT expertise. The model is designed to reduce the complexity of ML and AI deployment, helping organizations move their projects forward faster. The subscription offering is expected to be introduced in late 2020.