Rahul Khanna D / FOMO Resistor, IC, and LED 160x160

Object detection on constrained devices with FOMO!

About this project

Introduction

Have you ever opened up a circuit board and had no clue what was on it? Well, I have a solution for you. With just a mobile phone and Edge Impulse, you can run a trained object detection model to identify the electronic components (resistors, capacitors, ICs, etc.).

Object detection models are vital for many computer vision applications. They can show where an object is in a video stream or let you count the number of objects detected. But they are also very resource-intensive: models like MobileNet SSD can analyze only a few frames per second on a Raspberry Pi 4 while using a significant amount of RAM. This has put object detection out of reach for the most interesting devices: microcontrollers. Microcontrollers are cheap, small, ubiquitous, and energy-efficient, which makes them attractive for adding computer vision to everyday devices. But microcontrollers are also very resource-constrained, with clock speeds as low as 200 MHz and less than 256 KB of RAM, far too little to run complex object detection models.

But that has now changed! Edge Impulse has developed FOMO ("faster objects, more objects"), a novel DNN architecture for object detection, designed from the ground up to run on microcontrollers.

Sensor & Block Information

  • Camera module with input images 160 x 160 pixels
  • An image block to normalize the image data and reduce the color depth to grayscale (a rough sketch of this preprocessing follows the list)
  • A FOMO transfer learning block based on MobileNetV2 0.35
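
To mirror that preprocessing outside Edge Impulse Studio, a minimal Python sketch is shown below. It assumes OpenCV and NumPy are installed; the preprocess() helper is purely illustrative and is not part of the Edge Impulse SDK.

    import cv2
    import numpy as np

    def preprocess(path, size=160):
        # Mimic the impulse's image block: resize to 160x160,
        # reduce the color depth to grayscale, and scale to [0, 1].
        img = cv2.imread(path)
        img = cv2.resize(img, (size, size))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return img.astype(np.float32) / 255.0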

Live Classification

We captured 300 images of resistors, ICs, and LEDs, each 160x160 pixels, using an Android phone. The labeled images form the dataset for training and testing, split 80% / 20% between the two. We then create an impulse on the Edge Impulse platform to generate features.
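
The Studio handles the split for you, but as a rough illustration of an 80/20 split, here is a sketch using scikit-learn; the file names and per-class counts are hypothetical.

    from sklearn.model_selection import train_test_split

    # Hypothetical file list standing in for the captured 160x160 images.
    image_paths = [f"data/img_{i:03d}.jpg" for i in range(296)]
    labels = ["resistor"] * 100 + ["ic"] * 100 + ["led"] * 96

    # 80% for training, 20% for testing, keeping the class mix balanced.
    train_x, test_x, train_y, test_y = train_test_split(
        image_paths, labels, test_size=0.2, random_state=1, stratify=labels)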

Create impulse

With the generated features, we create an object detection model. Once training is done, we can see the training output, which reports the confusion matrix, inferencing time, RAM usage, and flash usage. The confusion matrix gives the true positives, true negatives, false positives, and false negatives for each class (LED, resistor, and IC). From the training output, the F1 score of the quantized (int8) model is 96.7%, which is a decent result for an object detection model. The flash usage is 77.6 KB, so the model can easily be deployed to an edge device.
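
To make the link between the confusion matrix entries and the reported F1 score explicit, here is a small sketch; the counts are invented for illustration and are not this project's actual numbers.

    def f1_score(tp, fp, fn):
        # F1 is the harmonic mean of precision and recall.
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    # Invented counts that happen to land near the reported score.
    print(f1_score(tp=58, fp=2, fn=2))  # ~0.967, i.e. roughly 96.7%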

Training output

We can load the Edge Impulse classifier on the mobile phone, and the inferencing is done on the edge as shown below. We can now identify resistors, LEDs, and ICs with a phone, without any prior knowledge of the circuit.

Inference
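
If you would rather test the exported model on a computer than on the phone client, the int8 TensorFlow Lite file from the downloads below can be run with the standard TFLite interpreter. The sketch below is only an outline: the file name is an assumption, the random array stands in for a preprocessed 160x160 grayscale photo, and the raw FOMO output is a coarse per-cell heat map that still needs Edge Impulse's post-processing to become object centroids.

    import numpy as np
    import tensorflow as tf

    # The file name is an assumption; use the int8 model downloaded from this project.
    interpreter = tf.lite.Interpreter(model_path="fomo-int8.lite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Stand-in for a preprocessed 160x160 grayscale image in [0, 1].
    image = np.random.rand(160, 160).astype(np.float32)

    # Quantize the input if the model expects an int8 tensor.
    scale, zero_point = inp["quantization"]
    if scale:
        image = (image / scale + zero_point).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], image.reshape(inp["shape"]))
    interpreter.invoke()

    # FOMO outputs a coarse per-cell class map rather than bounding boxes.
    heat_map = interpreter.get_tensor(out["index"])
    print(heat_map.shape)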

You can find the detailed explanation and tutorial on my Hackster profile.

GitHub

Happy to have you subscribed: YouTube

Thanks for reading!

Download block output

Title Type Size
Image training data NPY file 236 windows
Image training labels JSON file 236 windows
Image testing data NPY file 60 windows
Image testing labels JSON file 60 windows
Object detection model (version #2) TensorFlow Lite (float32) 82 KB
Object detection model (version #2) TensorFlow Lite (int8 quantized) 56 KB
Object detection model (version #2) TensorFlow SavedModel 186 KB
Object detection model (version #2) Keras h5 model 88 KB

Summary

Data collected
296 items

Project info

Project ID 110947
Project version 2
License No license attached