SolidRun and Gyrfalcon Develop First Edge Optimized AI Inference Server That Bests GPU Performance at a Fraction of the Cost and Power
Featuring Lightspeeur® 2803S Neural Accelerators, SolidRun’s Janux GS31 Inference Server Supports Extremely Low-Latency Decode and Video AI Inference on up to 128 Channels of 1080p Video
Feb. 26, 2020 – SolidRun, a leading developer and manufacturer of high-performance edge computing solutions, and ASIC solutions provider Gyrfalcon Technology Inc. (GTI) today introduced a co-developed Arm®-based AI inference server optimized for the edge. Highly scalable and modular, the Janux GS31 supports today’s leading neural network frameworks and can be configured with up to 128 Gyrfalcon Lightspeeur® 2803S AI acceleration chips for unrivaled inference performance on today’s most complex video AI models.
Tailor-made to meet the coming challenges of mass deployment of artificial intelligence applications at the edge, including energy consumption, cost-effectiveness and server real estate, this powerful server foundation allows for accelerated and cost-effective scaling of AI inference. Supporting ultra-low-latency decoding and video analytics on up to 128 channels of 1080p/60Hz video, the Janux GS31 is well suited for monitoring smart cities and infrastructure, intelligent enterprise and industrial video surveillance, tagging photos and videos for text-based search, and more.
Featuring best-in-class application and energy efficiency, enabled by Gyrfalcon’s Lightspeeur® 2803S Neural Accelerator chips that deliver up to 24 TOPS per watt, SolidRun’s edge AI inference server outperforms SoC- and GPU-based systems by orders of magnitude while using a fraction of the energy required by systems of equivalent computational power. Beyond the long-term cost savings from lower energy consumption, the Janux GS31 also requires a smaller upfront investment than competing inference servers.
“Powerful, new AI models are being brought to market every minute, and demand for AI inference solutions to deploy these AI models is growing massively,” said Dr. Atai Ziv, CEO at SolidRun. “While GPU-based inference servers have seen significant traction for cloud-based applications, there is a growing need for edge-optimized solutions that offer powerful AI inference with less latency than cloud-based solutions. Working with Gyrfalcon and utilizing their industry-proven ASICs has allowed us to create a powerful, cost-effective solution for deploying AI at the Edge that offers seamless scalability.”
“SolidRun’s Janux GS31 inference server is a perfect implementation of GTI’s AI accelerator technology and the Lightspeeur 2803S,” said Bin Lei, senior vice president of sales and marketing at Gyrfalcon. “The design and implementation of this server supports extremely high-performance inference with low energy use for high-capacity live HD streaming video encoding and decoding, addressing demand in surveillance, broadcasting and a wide range of service provider market segments.”
Jim McGregor, Founder and Principal Analyst at Tirias Research, commented, “AI is rapidly moving to the edge of the network to address the performance and security needs of many applications. As a result, new networks will drive increasing demand for processing performance and efficiency. The SolidRun platform, leveraging GTI’s AI acceleration technology, will provide a powerful and efficient way to build a new intelligent network, bridging the gap between devices and the cloud.”