Artificial Intelligence at the Edge – AI where it matters!
June 7, 2019
In recent years, the Internet of Things (IoT), the network of devices and sensors collecting and exchanging data, has proliferated into vast new applications such as autonomous vehicles, video surveillance, logistics, agriculture, consumer electronics, augmented and virtual reality, industrial automation and battlefield technology; the list goes on.
According to the Alliance for Internet of Things Innovation (AIOTI.EU), by 2021, the so-called “age of the IoT”, there will be 48 billion devices connected to the internet. This expansion has caused a shift in the associated processing requirements.
Cloud Computing, with its mega data centers, few and far between, enables businesses to process and store their information and applications remotely, but falls short of providing an adequate solution where the data is mission critical or requires zero latency.
For these applications, “Edge Computing” provides the answer by adding computational capabilities at the periphery (edges) of the network, in closer proximity to the device being served or as part of it. There is no longer any need to wait for the “smarts” to be generated hundreds or thousands of miles away: latency at the edge is eliminated.
As IoT becomes prevalent, Edge Computing is on the rise, replacing the old “cloud” paradigm.
In the first stage, increased computational capabilities are moving outside the data centers and into mid-layer servers or aggregation gateways at the periphery of the network, a process referred to as “Fog Computing”.
In parallel, where the application allows, processing chips are also being integrated into the sensors themselves, which is referred to as full “Edge Computing”. Both phenomena are commonly treated together under the term Edge Computing.
Artificial Intelligence (AI), or Machine Learning, has long since left science fiction films and entered mainstream corporations, with Forrester reporting that in 2018 over 48% of North American companies had already invested in AI solutions. However, the majority of AI computing today is done in the cloud, mostly in the servers of Google Cloud, Amazon Web Services and Microsoft Azure.
The main problem with Edge Computing development is that current CPUs can handle only a certain level of computation. Advanced data processing, and especially machine learning, both expected in today’s complex applications, are unattainable with CPUs alone.
Furthermore, new edge computing products coupled with artificial intelligence algorithms require the integration of high-capacity processing together with AI accelerators, a time-consuming and engineering-intensive feat.
Enablement of AI at the edge offers several advantages: near-zero latency, reduced bandwidth (only relevant data is transmitted upstream), and independence from distant data centers.
The following applications can all be developed and trained using the TensorFlow, Caffe and PyTorch deep learning frameworks. Although a few of them share a similar algorithm (e.g. border surveillance and airport perimeter surveillance), we list them separately, as the customers are different and the AI heuristics would diverge.
1. Analyzing and processing images and video is already one of the largest beneficiaries of AI-accelerated SOM systems. In Homeland Security (HLS), cameras mounted on poles along a border can be paired with gateways programmed to send data back for analysis only when there is a predefined security breach.
2. Airport cameras searching terminals for suspicious human behavior. Whereas the criteria for reaction in the border case above are relatively simple (any breach of the perimeter triggers an event), the analysis required to detect a behavioral anomaly is subtler and therefore more complex. Obviously, this application requires robust machine learning capabilities. Once an anomaly is detected on premises, zero latency is mandated to launch a response.
3. Highways – A system vendor has developed a system for municipalities interested in counting cars on highways in order to reroute traffic and lower congestion. IoT sensors coupled with AI can also assist law enforcement on the lookout for certain license plates in crime investigations.
4. Perimeter security – a telco system vendor not only installs its telecommunication equipment at its customers’ antenna sites, but also mounts cameras on the antennas to scan for equipment theft and break-in attempts at the site.
5. Face recognition – face recognition is another learning-intensive application, in which the machine learns to recognize a unique feature set. Face recognition already exists in household and building security systems, which require high computational capacity. With artificial intelligence now available in IoT settings, face recognition can be adopted for security applications, for instance searching for a fugitive in the streets.
6. Robots recognizing people and objects – robots as consumer electronics are another rapidly developing field, and it is anticipated that robots will enter more and more households in the foreseeable future. Adding AI to a robot prototype is doable today, but once there are hundreds of thousands of them, each robot will need on-location learning capabilities.
7. Cameras in retail stores counting customers, recognizing their walking patterns, where they stop and for how long, etc. This application is already in use in selected retail stores and will soon be widely adopted. The sheer volume of data collection and processing will require local machine learning.
8. Intelligent signage – sensors recognizing who is looking at a sign, at what times of day, and for how long. Adding an AI sensor can enable a sign operator/owner to display different advertisements to different customers. Again, a very localized and quick decision maker is required.
9. Most manufacturing industries are highly automated today; however, adding AI at the machine level will catapult production one level further. Adding sensors to existing machines or integrating AI for product inspection will enhance quality assurance. Industrial automation is especially suitable for on-site AI, ensuring that faulty production is corrected at the moment of discovery, or possibly even anticipated and corrected before the failure occurs.
10. Automobile sensing and immediate reaction is yet another field, probably the largest of all the applications listed above, in which zero latency is a must. Autonomous vehicles, soon to be widespread, already rely heavily on machine learning. But bringing deep learning capabilities to the forefront of action, at the bumper level, will be a first. This is enabled today by coupling AI with the sensors at the edge.
Applications exist both within the vehicle and between the car and its surroundings. Within a vehicle, a computerized SOM system offers full connectivity between smartphone apps, the car multimedia system, climate control and the driver display, among other things. From the vehicle outwards there are advanced driver assistance systems, which collect information from the engine and car systems, the vehicle’s surroundings and the road ahead, offering the driver real-time information including warnings, creating a much safer driving experience.
11. It is common knowledge by now that HLS forces use AI to identify certain speakers and words said over the telephone network (“bomb” or “explode”, for instance). Using AI for voice authentication and recognition on-location, however, again breaks the boundaries of the possible. Alexa and its equivalents still require sending the information long distances, across countries at times, for analysis, even for simple tasks like “Alexa, play Maroon 5”.
A self-learning SOM inside the device would lower response time and increase customer satisfaction. Another example would be adding voice recognition to vending machines and ticket machines at train stations. Here too, we are taking AI from the data center to the edge, applying it to the myriad ticket machines city-wide.
12. Today there are large weather monitoring stations that monitor temperature, carbon monoxide levels, dust, humidity and more. The idea is to spread these monitors out to many locations, each with powerful local AI capabilities that will enhance the quality of monitoring and prediction of these weather factors.
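A pattern common to the surveillance examples above (border, airport and perimeter security) is: run inference locally on the device, and transmit data upstream only when an event of interest is detected. The minimal sketch below illustrates that control flow in Python; the threshold "model" is a stand-in assumption, where a real deployment would run a CNN on the accelerator.

```python
# Sketch of the edge-filtering pattern: infer locally, transmit only events.
# The "model" here is a stand-in threshold classifier, not a real CNN.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    confidence: float


def run_local_model(frame):
    # Stand-in for on-device CNN inference: flag frames whose mean
    # pixel activity exceeds a fixed threshold.
    activity = sum(frame) / len(frame)
    if activity > 0.5:
        return Detection("perimeter_breach", activity)
    return Detection("normal", 1.0 - activity)


def process_frame(frame, uplink):
    # Only events of interest leave the device; routine frames are dropped,
    # saving bandwidth and keeping reaction latency at the edge near zero.
    detection = run_local_model(frame)
    if detection.label != "normal":
        uplink.append(detection)  # stand-in for sending to the gateway


# Two quiet frames and one anomalous frame: only the anomaly is sent.
sent = []
for frame in ([0.1, 0.2, 0.1], [0.0, 0.1, 0.2], [0.9, 0.8, 0.7]):
    process_frame(frame, sent)
```

The same shape carries over to the subtler airport case; only the model behind `run_local_model` changes, not the local-decision control flow.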
Edge capabilities and potential have now advanced a significant step further through a collaboration between SolidRun, a leading developer of embedded systems and network solutions, including low-power, low-cost and small-sized SOMs (System-on-Module) and SBCs (Single Board Computer), and Gyrfalcon Technology Inc. (GTI), the world’s leading developer of high-performance AI accelerators. SolidRun’s new AI-accelerated SOMs facilitate quick integration into a larger system, and its AI-enabled SBCs free software companies from hardware development altogether, letting them focus on their core competencies.
SolidRun’s i.MX8M Mini SOM combines all the essential components necessary to quickly prototype powerful AI solutions into a compact 47mm x 30mm module, including processor and memory options, a GPU, Gyrfalcon’s Lightspeeur® 2803S Neural Accelerator chip, optional flash storage, audio and video input and output and more.
The i.MX8M Mini SOMs are based on NXP’s Arm Cortex-A53 single/dual/quad-core 1.8GHz i.MX8M processors with advanced 14LPC FinFET process technology. They enable full 4K UltraHD video resolution and HDR (Dolby Vision, HDR10 and HLG). The processors offer professional audio fidelity with more than 20 audio channels of 384KHz each and DSD512 audio capability. Most importantly, all these capabilities are optimized for fanless operation, low thermal system cost and long battery life, enabling deployment in “edge environments” – regular server rooms that require only customary air conditioning, not “data center” standard cooling and dust control.
The i.MX8M Mini SOM harnesses the power of Gyrfalcon’s Lightspeeur® 2803S Neural Accelerator to help manufacturers quickly and cost-effectively create powerful Edge AI applications based on the TensorFlow, Caffe and PyTorch deep learning frameworks, which benefit from a dedicated AI acceleration processor. The 9 x 9mm accelerator, based on Gyrfalcon’s Matrix Processing Engine architecture, offers multi-dimensional, high-speed neural network processing at very low power.
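The workloads such an accelerator speeds up are dominated by convolutions, which dedicated hardware executes as massively parallel matrix operations. As a rough illustration of the core operation only (not of how the chip itself implements it), here is a minimal 2D convolution with stride 1 and no padding:

```python
# Minimal 2D convolution (valid padding, stride 1) in pure Python,
# illustrating the operation CNN accelerators execute in hardware.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Each output cell is the dot product of the kernel with
            # the image patch it currently overlaps.
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out


# A 3x3 Laplacian-style kernel on a flat 4x4 image yields a 2x2 output
# of zeros, since a uniform region has no edges to respond to.
image = [[1, 1, 1, 1],
         [1, 1, 1, 1],
         [1, 1, 1, 1],
         [1, 1, 1, 1]]
kernel = [[0, 1, 0],
          [1, -4, 1],
          [0, 1, 0]]
result = conv2d(image, kernel)
```

A deep network stacks thousands of such operations per frame, which is why running them on a dedicated matrix engine, rather than a general-purpose CPU, is what makes on-device inference feasible at low power.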
Additionally, SolidRun offers the HummingBoard Pulse carrier board as an SBC (Single Board Computer). It is perfect for pairing the powerful AI processing capabilities of the i.MX8M Mini SOM with a nearly limitless variety of external connectivity and communications features via its integrated USB-C, Micro USB and USB 3.0 ports, mPCIe and M.2 expansion ports, 10/100/1000 Ethernet jack, microSD slot, SIM card holder, HDMI and DSI 2.0 display output, audio input and output and more.
The i.MX8M Mini SOM and SBC configurations come complete with SolidRun’s comprehensive BSP, which includes GTI’s SDK. The SDK provides a hardware-accelerated, Convolutional Neural Network (CNN) system and a supporting software library implementing state-of-the-art algorithms for the AI accelerator chips. It includes drivers and prebuilt libraries (Linux based), an API for easy software integration, pretrained CNN models and sample use case source code. The BSP further includes documentation about how to train the models in TensorFlow, Caffe and PyTorch deep learning frameworks. Integration could not be more supported, user-friendly and seamless.
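As an illustration only: the class and method names below are hypothetical, not GTI’s actual SDK API, but they sketch the kind of integration loop that a pretrained-model API like the one described above enables, namely load a model once, classify frames on-device, and act locally.

```python
# Hypothetical integration sketch. PretrainedEdgeModel and its methods are
# illustrative stand-ins, NOT the real GTI SDK API; consult the shipped
# SDK documentation for actual drivers, libraries and model formats.
class PretrainedEdgeModel:
    """Stand-in for a pretrained CNN exposed by a vendor SDK."""

    def __init__(self, model_path):
        # In a real SDK this would load weights onto the accelerator.
        self.model_path = model_path

    def classify(self, frame):
        # Real code would hand the frame buffer to the accelerator; here we
        # return a fixed label so the control flow is runnable end to end.
        return "person" if frame else "empty"


# Typical loop: instantiate once, then classify each captured frame locally.
model = PretrainedEdgeModel("/opt/models/person_detect.bin")  # illustrative path
labels = [model.classify(f) for f in (b"\x01\x02", b"")]
```

The point of the sketch is the shape of the work left to the application developer: with drivers, libraries and pretrained models supplied, integration reduces to a short load-and-classify loop around the vendor API.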
In summary, adding machine learning to SOMs enables rapid development of new edge computing products. Companies can focus on their core technology and develop AI inference, while treating the hardware plus AI accelerator as a ready-to-go building block. A small step for technology – a giant step for new, fast and smart applications.