The i.MX 8M Plus has an integrated machine learning accelerator that can process neural networks roughly thirty times faster than its Arm® processor cores. Two major capabilities enable this machine learning / AI performance: the Neural Processing Unit (NPU) and the Image Signal Processor (ISP).
Neural Processing Unit (NPU) – the i.MX 8M Plus is the first i.MX processor with a built-in machine learning accelerator. Using its integrated NPU, the processor can run complex neural networks for tasks such as human pose and emotion detection, multi-object surveillance, and recognition of over 40,000 English words.
NXP’s software development environment for machine learning is eIQ™. It enables the use of ML algorithms on i.MX 8M Plus family SoCs and includes inference engines, neural network compilers, and optimized libraries. The Arm NN SDK is an inference engine for Arm Cortex-A CPU cores that works with Caffe, TensorFlow, TensorFlow Lite, and ONNX models. This inference engine is offered by Arm® free of charge and provides a set of open-source Linux software tools that enable machine learning workloads on power-efficient devices.
TensorFlow Lite, also known as TF Lite, was developed by Google to enable low-latency machine learning on embedded devices at the edge, performing classification, regression, and other tasks without the need for a round trip to a server. TF Lite achieves lower latency and a smaller binary size by using techniques such as pre-fused activations and quantized kernels, which allow smaller and (potentially) faster models. The eIQ™ middleware for TensorFlow Lite also delivers the ability to run inferencing on Arm® Cortex®-M cores.
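The quantized-kernel idea mentioned above can be illustrated with a minimal sketch. This is plain Python, not actual TF Lite or eIQ™ code; the scale and zero-point values are illustrative assumptions. It shows the affine mapping that quantization schemes use to store float32 weights as int8, which is what shrinks the model and speeds up inference:

```python
def quantize(values, scale, zero_point):
    """Map float values to int8 codes via an affine transform, clamping to [-128, 127]."""
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]

def dequantize(qvalues, scale, zero_point):
    """Recover approximate float values from their int8 codes."""
    return [(q - zero_point) * scale for q in qvalues]

# Hypothetical weights in the range [-1, 1], quantized with an assumed
# scale that spreads that range across the 256 available int8 steps.
weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
scale = 2.0 / 255
zero_point = 0

codes = quantize(weights, scale, zero_point)
approx = dequantize(codes, scale, zero_point)
print(codes)   # int8 codes standing in for the original floats
print(approx)  # dequantized values, close to the originals but not exact
```

The small round-trip error visible in `approx` is the accuracy cost that quantization trades for a 4x smaller weight storage (int8 vs. float32) and faster integer arithmetic on the NPU.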