
Adaptable Vision Counters for Smart Industries

IMG_1520.jpg

Automated counting is essential in large industries for accurate packing. Presently, either mechanical or weight-based counting is used. As well as being time-consuming and hectic, mechanical counting is limited by product size and shape. Weight-based counting assumes that every part weighs the same as the previous one: the total weight of a batch is divided by the average weight of a single part to get the count. But there is always some variation in the size and shape of the parts, no matter how sophisticated the manufacturing system is, and in materials such as wood and rubber, density can vary by up to 50%.

Here comes our adaptable vision-based counter, which uses AI for counting. The device can easily count both defective and non-defective parts. If the number of defective parts rises, we can assume that something is going wrong in the production units. This data can also be used to improve the quality of production, so the industry can make more products in less time.

Our adaptable counter is therefore evolving as a solution to the world's accurate and flexible counting needs. It is a device consisting of a Raspberry Pi 4 and a camera module, with the counting process fully powered by FOMO, so it can count faster and more accurately than the mechanical and weight-based methods described above. The counter is integrated with a website, so any authorised person in the industry can see the live counting output, and there are also provisions to control it.

Use cases

These use cases can be applied anywhere in industry.

1. Counting from the top

In this case, we are counting defective and non-defective washers.

IMG_1664.jpg

2. Counting in Motion

In this case, we are counting bolts, washers, and faulty washers passing along the conveyor belt.

IMG_1678.jpg

3. Counting in a Bunch

In this case, we are counting lollipops in a bunch.

IMG_1674.jpg

4. Multiple Parts Counting

In this case, we are counting multiple part types, such as washers and bolts.

IMG_1676.jpg

Software

Object detection Model Training

Logo.png

Edge Impulse is one of the leading development platforms for machine learning on edge devices, free for developers and trusted by enterprises. For this device we use FOMO, a novel machine learning algorithm by Edge Impulse for object detection. We then deploy the trained model to the Raspberry Pi 4B to turn it into something actionable.

Data acquisition

Every machine learning project starts with data collection. A good collection of data is one of the major factors that influences the performance of the model. Make sure you capture a wide range of perspectives and zoom levels of the items found in the industry. For data acquisition, you can collect data from any device or development board, or upload your own datasets. As we already have our own dataset, we upload it using the Data acquisition tab.

Data Acquisition.png

Simply navigate to the Data acquisition tab and select a file to upload. After that, give it a label and upload it to the training area. Edge Impulse only accepts JPG or PNG image files. If you have images in any other format, convert them to JPG or PNG first, either with an online converter or with a small script like the one below.
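As a minimal sketch, here is how such a conversion could be done with the Pillow library. The folder names raw_images and converted_images are placeholders for wherever your dataset actually lives.

from pathlib import Path
from PIL import Image

# Placeholder folders - adjust to wherever your raw and converted images live
src_dir = Path("raw_images")
dst_dir = Path("converted_images")
dst_dir.mkdir(exist_ok=True)

for img_path in src_dir.iterdir():
    if img_path.suffix.lower() in (".jpg", ".jpeg", ".png"):
        continue  # already in a format Edge Impulse accepts
    img = Image.open(img_path).convert("RGB")  # drop any alpha channel for JPG
    img.save(dst_dir / (img_path.stem + ".jpg"), "JPEG")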

In our case we have four labels - Washer, Faulty Washer, Lollipop, and Bolt - and we uploaded all of the collected data for these four classes. The device will therefore only recognize these items while counting. If you wish to recognize other objects, you must upload a dataset for them as well. The more data the neural network has access to, the better it becomes at recognizing the objects.

This is our counting setup (the adaptable counter is simply attached to the top of a small wooden plank).

IMG_1526.JPG

Labelling Data

You can view all of your dataset's unlabeled data in the labeling queue. Adding a label to an object is as simple as dragging a box around it. To make life a little easier, Edge Impulse attempts to automate this procedure by running an object tracking algorithm in the background: if the same object appears in multiple photos, it can move the boxes for you, and you only need to confirm the new box. Drag the boxes, then click Save labels. Continue doing this until your entire dataset has been labeled.

Label.png

Labels.png

Designing an Impulse

Create Impulse.png

It's time to design the impulse. An impulse is the machine learning pipeline that takes raw data, generates features, and feeds them to a learning block. If you need to know more about impulses, have a look here.

In our impulse we set the image width and height to 96px and the resize mode to Fit shortest axis, because this gave a more accurate model.

Then, in the Image tab, we used Grayscale as the color depth and saved the parameters.

Feature Generation.png

After that, we generated the features for our data. A feature is an individual measurable property of the data. The figure below shows the features generated from our dataset; the classes are already well separated to the eye.

feature_explorer window.jpg

It's time to train the machine learning model. Building a model from scratch requires a lot of time and effort, so we use a technique called transfer learning, which fine-tunes a well pre-trained model on our data. This lets us create an accurate machine learning model with less data.

Then head over to the Object detection tab for the model generation.

In this case we are using the FOMO algorithm to train the model, so we changed the object detection model to FOMO (Faster Objects, More Objects) MobileNetV2 0.35. FOMO is a novel machine learning algorithm by Edge Impulse specifically designed for highly constrained devices, and it works very well on the Raspberry Pi 4.

Our neural network settings are shown in the image below. We trained the model to a training accuracy of 96.7%, which is pretty good.

Training Accuracy.png

It's time to test the model on real-world data. The results are actually surprising: we hit 87.5% accuracy on the test set, which is great for a model with so little data.

Testing Accuracy.png

Firebase Setup

In our project, we use the Firebase Realtime Database to post and retrieve data instantly, so there is no time delay. We used the Pyrebase library, which is a Python wrapper for the Firebase API.

To install Pyrebase, run the following command:

pip install pyrebase

Pyrebase is written for Python 3 and may not work correctly with Python 2.

First, we created a project in the Firebase console.

firebasee_projectcreation.jpg

Then head over to the Build section and create a Realtime Database.

db_creation.jpg

Then select test mode, so that we can update the data without any authentication.

security_roles.jpg

This is our realtime database

rtdb.jpg

For use with only user-based authentication, we can create the following configuration, which should be added to our Python code:

import pyrebase

# Firebase project configuration - the values come from the project settings page
config = {
  "apiKey": "apiKey",
  "authDomain": "projectId.firebaseapp.com",
  "databaseURL": "https://databaseName.firebaseio.com",
  "storageBucket": "projectId.appspot.com"
}

firebase = pyrebase.initialize_app(config)

Then fill in the apiKey, authDomain, and databaseURL (you can find all of these in the project settings). After that, we can store values in the Realtime Database, as sketched below.
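As a rough sketch of what storing the counts could look like with Pyrebase (the counts node name and the example values are our own placeholders, not something fixed by Firebase):

# Continuing from the firebase object created above
db = firebase.database()

# Write the latest counts under a "counts" node (the node name is arbitrary)
db.child("counts").update({
    "Washer": 12,
    "Faulty Washer": 2,
    "Bolt": 7,
    "Lollipop": 0
})

# Read the values back to verify
counts = db.child("counts").get().val()
print(counts)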

Website

A webpage built with HTML, CSS, and JS displays the count in real time. Data updated in Firebase is reflected on the webpage immediately. The webpage displays Recent Count when the counting process is halted and Current Count whenever counting is in progress.

Recent.png

Current.png

Code

The entire code and assets are available in the GitHub repository; a simplified sketch of the core counting loop is shown below.
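The sketch below is not the full program from the repository, only a minimal illustration of the idea. It assumes the model has been downloaded to the Pi as a .eim file (modelfile.eim is a placeholder name) and that the Edge Impulse Linux Python SDK (edge_impulse_linux) and OpenCV are installed. Because FOMO outputs one bounding box (centroid) per detected object, counting a class is simply counting its boxes.

import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"  # placeholder path to the downloaded model

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()
    cap = cv2.VideoCapture(0)  # camera module attached to the Raspberry Pi

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # The SDK expects an RGB image and returns the features plus a cropped copy
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        features, cropped = runner.get_features_from_image(img)
        res = runner.classify(features)

        # One bounding box per detected object, so counts are just box totals per label
        counts = {}
        for bb in res["result"].get("bounding_boxes", []):
            counts[bb["label"]] = counts.get(bb["label"], 0) + 1

        print(counts)  # in the real code these counts are pushed to Firebase

    cap.release()

In the actual device, this counts dictionary is what gets written to the Realtime Database with the Pyrebase calls shown earlier.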

Hardware

  • Raspberry Pi 4 B

    IMG_1508_1.jpg The Raspberry Pi 4 B is the brain of the system. It is built around the Broadcom BCM2711, a 64-bit quad-core Cortex-A72 (ARM v8) SoC running at 1.5GHz, so the counting can be done flawlessly. This tiny computer is fully supported by Edge Impulse. To set up the Raspberry Pi with Edge Impulse, please have a look here.

  • Camera Module

    IMG_1510_1.jpg This Raspberry Pi Camera Module is a custom-designed add-on for the Raspberry Pi. It can be easily attached to the Raspberry Pi 4 with its flex cable. It has a 5-megapixel sensor with a fixed-focus lens onboard. For still images, the camera is capable of 2592 x 1944 pixel captures, and it also supports 1080p30, 720p60 and 640x480p60/90 video.
    This is more than enough for our application.

  • Power adapter

    IMG_1524.jpg To power the system we used a 5V 2A adapter. We don't have any power-hungry peripherals in this case, so 2A is enough. If you have a 3A supply, go for that.

For the sake of convenience, we also used an acrylic case to hold all the hardware.
