The STMicroelectronics X-LINUX-AI software environment provides tools to perform inference on embedded systems using neural network models. Examples of applications that typically use neural network inference include object/pattern recognition, gesture control, voice processing, and sound monitoring.

X-LINUX-AI includes support for two standard inference engines: TensorFlow Lite and ONNX Runtime.

Include X-LINUX-AI packages in Digi Embedded Yocto

Add the meta-st-stm32mpu-ai layer:

$ bitbake-layers add-layer /usr/local/dey-5.0/sources/meta-st-stm32mpu-ai
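
To confirm the layer was registered, you can list the configured layers with the standard bitbake-layers subcommand (run from the same build environment):

$ bitbake-layers show-layers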

Edit your conf/local.conf file to include one of the X-LINUX-AI package groups in your Digi Embedded Yocto image:

conf/local.conf
IMAGE_INSTALL:append = " packagegroup-x-linux-ai"

Three package groups are available:

  • packagegroup-x-linux-ai-tflite includes TensorFlow Lite packages and examples, for use with the ConnectCore MP13’s CPU

  • packagegroup-x-linux-ai-onnxruntime includes ONNX Runtime packages and examples

  • packagegroup-x-linux-ai includes all of the above

Including AI package groups significantly increases the size of the rootfs image. To minimize this increase, select only the packages you need rather than installing a full group. You might also need to remove additional packages for the rootfs image to fit on your device.
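
For example, to pull in only the TensorFlow Lite support for the ConnectCore MP13's CPU, you could append just that group instead of the full packagegroup-x-linux-ai (a sketch; adjust to your needs):

conf/local.conf
IMAGE_INSTALL:append = " packagegroup-x-linux-ai-tflite"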

All X-LINUX-AI examples are GUI-based, so they cannot be run on the ConnectCore MP13.

More information

See ST’s X-LINUX-AI OpenSTLinux Expansion Package article for more information on X-LINUX-AI.