# Machinelearning Demo

## Introduction

The Machinelearning Demo is a demo application that we have built to test and demonstrate the object detection capabilities of our i.MX8 based displays. The object detection part is based on the Tensorflow Recipe; you can find its source code in that chapter. The demo is a pre-built binary that you can install on the display if you want to quickly try out the object detection concepts. Please contact us to request a download of the demo installer.

## Features

**The main features are:**

- Instant switching between object, pose, face, classification and custom Tensorflow Lite models.
- Instant switching between two media inputs (Ethernet camera or video clip)
- Interactive confidence level filter
- Debug display of DPS, input image dimensions and requested image dimensions
- "Average" filter (experimental)

### Overview

![Machinelearning Demo](images/img_machinelearningdemo_cars_texts_50.png)
The different parts of the Machinelearning Demo. When the confidence level selector is displayed, the confidence level of each visible object is also shown above it.

### Models

![Detection Types](images/img_machinelearningdemo_detectiontypes_50.png)
The model to use can be selected in the *Models* menu. The options are Object, Pose, Face and Pause (None). If you select a classification model in the application settings, it will be activated with the Face option in the menu.

⚠️ The model selected in the application settings must have the same output types as the other object detection models, i.e. it must return a vector of bounding boxes and label indexes. You can read more about this on the Tensorflow Recipe page.

### Object & Pose Detection

![Object Detection](images/img_machinelearningdemo_objectdetection_25.png) ![Pose Detection](images/img_machinelearningdemo_posedetection_25.png)
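To picture the required output format, the sketch below shows one way such a detection vector can be represented, together with the kind of confidence filtering the demo's confidence level selector performs. The field names and types here are our own illustration, not the demo's actual API:

```python
# Illustrative sketch only: field names are hypothetical, not the demo's API.
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple        # (x_min, y_min, x_max, y_max), normalized 0..1
    label_index: int  # index into the model's label file
    score: float      # confidence, 0..1

def filter_by_confidence(detections, threshold):
    """Keep only detections at or above the given confidence level."""
    return [d for d in detections if d.score >= threshold]

detections = [
    Detection(box=(0.1, 0.2, 0.4, 0.6), label_index=2, score=0.91),
    Detection(box=(0.5, 0.1, 0.8, 0.5), label_index=7, score=0.42),
]
print(filter_by_confidence(detections, 0.5))
```

A model whose output cannot be mapped onto this shape (boxes plus label indexes) will not work with the Face option.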
Object detection and pose detection are the two main detection models.

### Media Input

![Inputs](images/img_machinelearningdemo_videoinput_50.png)
You can switch between two different media inputs in the *Media Input* menu. By default, the camera is assigned to the first option and the video clip to the second option. You can change this assignment in the application settings, so that two cameras or two video clips can be used alternately.

### Basic Settings

![Basic Settings](images/img_machinelearningdemo_basicsettings_50.png)
In the *Basic* settings, the confidence level selector can be enabled. You can also switch off the *Average* filter, an experimental filter that attempts to make the display of the detection rectangles smoother.

### Model Settings

![Model Settings](images/img_machinelearningdemo_modelssettings_50.png)
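One common way such a smoothing filter can work is to blend each new rectangle with the previous smoothed one, so that detection jitter is averaged out. The sketch below illustrates that idea with an exponential moving average; the demo's actual filter implementation may differ:

```python
# Illustrative smoothing sketch; the demo's actual "Average" filter may differ.
def smooth_box(previous, current, alpha=0.3):
    """Blend the previous smoothed box toward the newly detected box.

    alpha controls responsiveness: 1.0 follows the detector exactly,
    smaller values give smoother but laggier rectangles.
    """
    if previous is None:
        return current
    return tuple(p + alpha * (c - p) for p, c in zip(previous, current))

box = None
for detected in [(10, 10, 50, 50), (14, 12, 54, 52), (12, 11, 52, 51)]:
    box = smooth_box(box, detected)
    print(box)
```

Switching the filter off corresponds to always using the raw detected rectangle, which reacts instantly but can flicker between frames.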
The *Model* settings page contains a list of selectable models. You can switch between the models, and the selected one will be available through the *Face* model option in the *Models* menu. You can add your own models to the *tflite* models folder if you want to test them in the Machinelearning Demo.

### Input Settings

![Input Settings](images/img_machinelearningdemo_mediasettings_50.png)
Different cameras may broadcast the video stream on different ports. In the *Media* settings you can select the correct port, or switch between the different media that can be selected in the *Media Input* menu. You can add your own video clips to the *movies* folder.

### File Structure

![Files and Folders](images/machinelearningdemo_folders.png)
The settings file for the Machinelearning Demo is located in the */opt/demo-files* folder. You can put your own model(s) in the *tflite* folder and video clip(s) in the *movies* folder. These files will be available and selectable in the application settings the next time you start the Machinelearning Demo.