1. Overview

This article introduces Variscite’s pyvar Python package and demonstrates how to get started with its Machine Learning (ML) API. This API is designed to help beginners explore the ML field and develop their own applications using cameras, displays, and user interfaces, targeting the System on Modules (SoMs) powered by the i.MX8 family.

If you are a complete beginner, it is highly recommended that you first read the following article to get familiar with AI/ML and its examples:

Before jumping into the pyvar package and learning more about its core and examples, we will briefly cover three major topics: the AI hardware accelerator, model training, and model quantization. Understanding these topics should help you see how ML works on embedded systems and get the best possible inference performance from your ML applications.

Among NXP’s i.MX8 SoC family, the i.MX8M Plus has a dedicated AI accelerator to efficiently handle the ML inference process – this compute engine is called a Neural Processing Unit (NPU). The pyvar package is not strictly tied to the NPU – you can use the ML API on any other SoM from the i.MX8 family; the difference is that the inference process runs on either the GPU or the CPU instead of the NPU, which will likely result in longer inference times.

The NPU itself handles 8-bit fixed-point operations, which allows an ML model to use simple and small arithmetic units, avoiding larger floating-point calculations. To utilize the NPU’s computational capabilities and achieve the best possible inference performance, 32-bit floating-point network models should be converted to 8-bit fixed-point networks.
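To illustrate the arithmetic behind this conversion: affine quantization maps each float32 value to an int8 value through a scale and a zero point. The sketch below, in plain Python, is only a minimal illustration of the idea; the function names and the example [-1.0, 1.0] range are ours, not part of pyvar or the NXP tooling:

```python
# Affine (asymmetric) quantization: real_value = scale * (q - zero_point)

def compute_qparams(rmin, rmax, qmin=-128, qmax=127):
    """Derive scale and zero point from the observed float range."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must include 0.0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a float to the nearest representable int8 value."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    """Map an int8 value back to its approximate float value."""
    return scale * (q - zero_point)

scale, zp = compute_qparams(-1.0, 1.0)
q = quantize(0.5, scale, zp)
x = dequantize(q, scale, zp)  # close to 0.5, within one quantization step
```

The round trip loses at most about half a quantization step of precision per value, which is why a well-quantized model stays close to its float32 accuracy.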

This conversion is known as quantization, and there are two ways to quantize a model so that it works properly on the NPU. The first is to train your own model using the quantization-aware training (QAT) method; the second, simpler one is a post-training method that only converts an already trained model to the format the NPU requires. The conversion process will be discussed in future blog posts.
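The core trick behind quantization-aware training can be sketched without any framework: during training, the forward pass runs each weight through a quantize-then-dequantize step ("fake quantization"), so the loss already reflects the int8 rounding error the NPU will introduce. This toy snippet is only an illustration of that idea, not the actual QAT implementation used by any framework:

```python
def fake_quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Quantize-dequantize round trip: the value a real int8 NPU would see."""
    q = max(qmin, min(qmax, round(x / scale) + zero_point))
    return scale * (q - zero_point)

# During QAT, the forward pass uses the fake-quantized weight, so the
# network learns weights that survive the rounding.
weight = 0.4217
scale, zero_point = 1.0 / 127, 0  # symmetric int8 over the range [-1, 1]
w_q = fake_quantize(weight, scale, zero_point)
error = abs(weight - w_q)  # bounded by half a quantization step
```

Post-training quantization skips this step and simply applies the same mapping to an already trained model, which is simpler but can lose slightly more accuracy.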

The pyvar package is licensed under the BSD-3-Clause terms, which means it is free to use and modify as much as you like, as long as you retain the copyright notice. Note that this package is still under development, and we encourage you to contribute to the project.

2. Getting Started with Pyvar

2.1 Setting Up the BSP

1. Set up the latest Yocto release using the fsl-imx-xwayland distro (with Wayland + X11 features) and build the fsl-image-qt5 image, which already includes the ML packages. If you prefer a smaller image, such as fsl-image-gui, follow the instructions below to add the needed ML packages to the build:

a. Add the Machine Learning packages to the conf file:

OPENCV_PKGS_imxgpu = " \
       opencv-apps \
       opencv-samples \
       python3-opencv \
"
IMAGE_INSTALL_append = " \
    packagegroup-imx-ml \
    ${OPENCV_PKGS} \
"

2. Boot the board using the built image (e.g., by writing it to an SD card).

2.2 Variscite Python Package Installation

The pyvar package is hosted on the PyPI (https://pypi.org) repository, so any user can easily install it by following these instructions:

1. On the target board, install the pyvar package using the pip3 tool:

# pip3 install pyvar

2. To make sure the package is installed, run the following command to check the version:

# pip3 list | grep pyvar
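If you prefer to verify the installation from Python itself, the standard library’s importlib.metadata module (available on Python 3.8+) reports the version of any installed package. This is a generic check, not a pyvar-specific API:

```python
from importlib.metadata import version, PackageNotFoundError

try:
    # Prints the same version string that pip3 list reports.
    print("pyvar", version("pyvar"))
except PackageNotFoundError:
    print("pyvar is not installed")
```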


3. PyVar API

The pyvar package implements the most common ML functionalities enabled by the eIQ® ML Software Development Environment from NXP. It also provides multimedia and utility classes to help beginners implement ML applications and keep their code as simple as possible.

Here are a few examples of pyvar modules:

  • pyvar.ml.engines handles inference engines such as TensorFlow Lite and Arm NN:

from pyvar.ml.engines.tflite import TFLiteInterpreter
engine = TFLiteInterpreter(model_file_path="path_to_your_model")

For more information, please visit the pyvar Machine Learning API page.

  • pyvar.multimedia handles multimedia cases such as video files, cameras and devices:

from pyvar.multimedia.helper import Multimedia
multimedia = Multimedia(source="path_to_video_file_or_camera_device")

For more information, please visit the pyvar Multimedia API page.

3.1 Quick Examples Written with PyVar

The examples described in this article use quantized, pre-trained models from TensorFlow Lite. They were written to help the user understand how to write simple code exploring multiple use cases with the pyvar modules. You can easily download the examples and run them on the target.

The next example uses a starter model from TensorFlow, and runs inference on an image using TensorFlow Lite as an inference engine:

# curl -LJO \
https://github.com/varigit/pyvar/raw/master/examples/ml/detection/image_detection_tflite.py
# python3 image_detection_tflite.py

Other common ML examples written using pyvar:


For more information, the full Variscite Python API documentation is available at pyvar.dev.