Kinara Edge AI processor tackles Generative AI and transformer-based models

The new second-generation Ara-2 processor, built around the same flexible and efficient architecture as Ara-1, delivers greatly increased performance/Watt and performance/$.


Kinara has launched the Kinara Ara-2 Edge AI processor, bringing high-performance, cost-effective, and energy-efficient inference to edge servers and laptops for applications such as video analytics, Large Language Models (LLMs), and other Generative AI models. The Ara-2 is equally well suited to edge applications running traditional AI models and state-of-the-art models with transformer-based architectures. With a substantially enhanced feature set and 5-8 times the performance of the first-generation Ara-1 processor, Kinara's Ara-2 combines real-time responsiveness with high throughput, pairing its proven latency-optimized design with well-balanced on-chip memories and high off-chip bandwidth to execute very large models at extremely low latency.

LLMs and Generative AI in general have become incredibly popular, but most of the associated applications run on GPUs in data centers and are burdened with high latency, high cost, and questionable privacy. To overcome these limitations and put the compute literally in the hands of the user, Kinara's Ara-2 simplifies the transition to the edge with support for the tens of billions of parameters used by these Generative AI models. Furthermore, to ease migration from expensive GPUs for a wide variety of AI models, the compute engines in Ara-2 and the associated software development kit (SDK) are specifically designed to support high-accuracy quantization, a dynamically moderated host runtime, and direct FP32 support.
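To illustrate the kind of high-accuracy quantization mentioned above, here is a minimal, self-contained sketch of symmetric per-tensor int8 quantization of FP32 weights. This is a generic textbook scheme for illustration only; the function names are hypothetical and do not represent Kinara's actual SDK API.

```python
def quantize_int8(weights):
    """Map FP32 weights to int8 codes with a per-tensor scale (illustrative)."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values from int8 codes."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.003, 0.91]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Per-tensor symmetric quantization keeps the error within half a step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q, max_err)
```

Production edge compilers typically refine this with per-channel scales and calibration data to preserve accuracy, but the round-trip above captures the core idea.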

“With Ara-2 added to our family of processors, we can better provide customers with performance and cost options to meet their requirements. For example, Ara-1 is the right solution for smart cameras as well as edge AI appliances with 2-8 video streams, whereas Ara-2 is strongly suited for handling 16-32+ video streams fed into edge servers, as well as laptops, and even high-end cameras,” said Ravi Annavajjhala, Kinara’s CEO. “The Ara-2 enables better object detection, recognition, and tracking by using its advanced compute engines to process higher resolution images more quickly and with significantly higher accuracy. And as an example of its capabilities for processing Generative AI models, Ara-2 can hit roughly 0.5 seconds per iteration for Stable Diffusion and tens of tokens/sec for LLaMA-7B.”

In October, Ampere welcomed Kinara into the AI Platform Alliance, whose primary goals are reducing system complexity, promoting better collaboration and openness around AI solutions, and ultimately delivering better total performance with greater power and cost efficiency than GPUs. Ampere’s Chief Evangelist Sean Varley said, “The performance and feature set of Kinara’s Ara-2 is a step in the right direction to help us bring better AI alternatives to the industry than the GPU-based status quo.”

The Ara-2 also offers secure boot, encrypted memory access, and a secure host interface to enable enterprise AI deployments with even greater security. Kinara also supports Ara-2 with a comprehensive SDK that includes a model compiler and compute-unit scheduler, flexible quantization options that include the integrated Kinara quantizer as well as support for pre-quantized PyTorch and TFLite models, a load balancer for multi-chip systems, and a dynamically moderated host runtime.
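The SDK's load balancer for multi-chip systems suggests a familiar scheduling pattern: spreading independent inference streams across several accelerator chips. The sketch below shows the simplest such policy, round-robin assignment. The `Chip` class and function names are illustrative stand-ins, not Kinara's actual SDK interfaces.

```python
from itertools import cycle

class Chip:
    """Stand-in for one accelerator chip in a multi-chip system (hypothetical)."""
    def __init__(self, chip_id):
        self.chip_id = chip_id
        self.assigned = []

def balance_round_robin(streams, chips):
    """Assign each inference stream to the next chip in turn."""
    rr = cycle(chips)
    for stream in streams:
        next(rr).assigned.append(stream)
    return chips

# Spread 8 camera streams across 2 chips: each chip ends up with 4.
chips = balance_round_robin([f"cam{i}" for i in range(8)], [Chip(0), Chip(1)])
print([len(c.assigned) for c in chips])  # [4, 4]
```

A real edge-server balancer would weigh per-chip load and model placement rather than rotating blindly, but round-robin shows the shape of the problem.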
