Anagha / Visual Inspection AI

About this project

Visual Inspection With Object Detection AI

Cover.jpg

Artificial intelligence (AI) and computer vision have revolutionized the way visual inspection is performed in the automotive and production industries. These technologies have greatly improved the efficiency and accuracy of identifying missing and defective parts, leading to significant cost savings and improved product quality.

One of the main benefits of using AI and computer vision for visual inspection is the ability to perform the task quickly and consistently. Traditional methods of visual inspection often rely on human operators, who can be prone to fatigue, distractions, and errors. AI and computer vision, on the other hand, can work continuously without the need for breaks and are able to analyze large amounts of data in a short period of time. This can significantly reduce the time and cost of performing visual inspection tasks.

Another benefit of using AI and computer vision is their ability to identify subtle defects that may be difficult for a human operator to detect. These technologies are able to analyze images and patterns at a level of detail that is beyond the capabilities of the human eye. This can help to identify defects that may not be visible to the naked eye, but could potentially cause problems down the line.

In addition to identifying defects, AI and computer vision can also be used to track the location and movement of parts on the production line. This can help to identify bottlenecks in the production process and optimize the workflow to increase efficiency.

Our Solution

Our object detection model has the potential to revolutionize the way that defects, rusty parts, and missing components are detected in various industries. By leveraging the power of machine learning, this model is able to quickly and accurately identify problems that may be missed by human inspection. This has the potential to greatly improve the efficiency and effectiveness of various processes, from automotive manufacturing to production lines in a wide range of industries.

In addition to its applications in identifying defects and missing components, this model could also be used for other types of visual inspection tasks. For example, it could be used to perform quality control checks on products in a factory setting, or to identify abnormalities in engine parts.

Hardware Setup

1. Raspberry Pi 4

The Raspberry Pi 4 is a small and powerful computer that has gained popularity in recent years due to its versatility. The Raspberry Pi 4 is powered by a quad-core processor and has a range of connectivity options, including Ethernet, WiFi, and Bluetooth. It also has a number of ports, including HDMI, USB, and a microSD card slot, which can be used to store the operating system and other files.

Raspberry Pi 4.jpg

One specific area where the Raspberry Pi 4 has proven to be particularly useful is in the field of TinyML, or the implementation of machine learning models on small, low-power devices. One of the main benefits of using the Raspberry Pi 4 for TinyML is its ability to handle real-time processing. With its powerful quad-core processor and various connectivity options, the Raspberry Pi 4 is capable of running machine learning models in real-time, allowing for the implementation of tasks such as object recognition or language translation on-device.

2. Camera Module

The 5MP camera module for the Raspberry Pi 4 is a small, lightweight camera that is specifically designed to be used with the popular single-board computer. The camera module is easy to install, with a 15cm ribbon cable that connects to the Raspberry Pi 4's camera port.

Camera Module.jpg

One of the main benefits of the 5MP camera module is its high resolution. With a 5 megapixel image sensor, the camera is capable of capturing high-quality still photos and video. This makes it an excellent choice for projects that require detailed images, such as object recognition or surveillance.

In addition to its high resolution, the 5MP camera module also has a wide field of view, with a lens that can capture a 75.7 degree angle. This allows the camera to capture more of the environment, making it useful for a wide range of applications.

Software Setup

Edge Impulse provides a concise and helpful Getting Started guide for the Raspberry Pi 4. It gives step-by-step instructions for installing and configuring Edge Impulse on your device, ensuring that you can make full use of this powerful tool.
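For reference, at the time of writing the guide's setup boils down to roughly the following commands run on the Raspberry Pi; package names and the Node.js version change over time, so treat this as a sketch and follow the official guide for the current steps.

curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
sudo npm install edge-impulse-linux -g --unsafe-perm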

Create An Edge Impulse Project

The initial step in constructing your TinyML model is the creation of a new Edge Impulse project. To begin, log in to your Edge Impulse account; if you don't have an account yet, sign up for one. Once you have successfully logged in, you will be directed to the project creation screen.

SelectProject.png

In the project creation screen, click Create New Project and provide the relevant details to create a new project.

Create Project.png

Depending on the specific needs and goals of your project, you may choose from a variety of data types, including audio, images, and sensor data. In the case of our current project, which is focused on the analysis and interpretation of images, we will need to select Images as our preferred data type.

DataType.png

We are interested in the detection of multiple classes of objects within an image. To accomplish this, we will need to select the Classify Multiple Objects (Object Detection) option from the available data processing choices.

ObjectDetection.png

This will bring you to the project dashboard, where you can begin building and training your TinyML Model.

Connect The Device To Dashboard

To connect the Raspberry Pi to the dashboard, run the following command in the terminal:

edge-impulse-linux

If you only have one active Edge Impulse project, your device will be automatically assigned to it. If you have multiple active projects, however, the terminal will prompt you to select the project you wish to attach your device to; navigate to the desired project and select it.
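If the CLI was previously tied to a different project, you can clear its saved configuration and pick a project again; assuming current CLI behavior, the following flag does this:

edge-impulse-linux --clean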

Once you have selected the appropriate project, you will need to give your board a recognizable name. After you have chosen a name for your board, you can press enter to complete the process of attaching your device to the project.

Upon completion of the attachment process, your board will be connected to the Edge Impulse project and will appear in the Devices panel.

YourDevices.png

Data Acquisition

Now that we have completed the initial setup of our software and hardware, we can begin constructing our object detection model. To do so, we will first need to gather some data. There are two methods we can use to obtain this data: through direct collection using a connected device or by uploading pre-existing data using the Uploader. For the purposes of this project, we will be utilizing the former method of direct collection using our connected device.
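For reference, if you already have a set of images on disk, the Edge Impulse CLI Uploader can push them into the project instead; a minimal invocation (the path is a placeholder) looks like this:

edge-impulse-uploader --category training path/to/images/*.jpg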

To proceed, we will need to navigate to the Labeling queue and begin drawing bounding boxes around the objects in our dataset. While this process may seem tedious at first, the Edge Impulse platform includes a feature that automatically suggests labels for objects it recognizes, so labeling becomes significantly faster as we progress.

Labelling Queue.png

Create Impulse

Once you have completed the process of labeling your data, you can move on to the next stage of your project: the creation of an Impulse. To begin this process, you will need to click on the Create Impulse button, which can be found under the Impulse design panel.

Impulse.png

Now you will need to select the appropriate input, processing, and learning blocks for your project. For this particular project, we will need to select Images as the input block, Image as the processing block, and Object detection as the learning block. These selections will ensure that our Impulse is properly configured to handle and process image data, and to perform object detection on that data.

Feature Generation

In the next step, navigate to the Image section under the Impulse design panel. After the necessary parameters have been generated, click the Save parameters button.

ImageTab.png

This will take us to the Generate features tab, where we will click on the Generate features button to continue. This button will initiate the feature generation process, which is an essential step in building our object detection model.

Features.png

Once the feature generation job has been completed, we will be able to view our dataset in the feature explorer tab. This tab offers a particularly useful tool for quickly evaluating the quality of our data, as it allows us to visualize the clustering of our data and determine whether it is well organized.

Model Training

Now that we have designed our impulse, it's time to train our model. The specific settings we used for model training are shown in the image. While it is possible to adjust these settings in an attempt to improve the accuracy of our trained model, it's important to be mindful of the risk of overfitting.

ModelTraining.png

In other words, we want to strike a balance between optimizing model performance and ensuring that it can generalize well to unseen data. With this in mind, you can experiment with different model training settings to find the best configuration for your specific project.

Accuracy - Tranining.png

With a sufficient amount of training data, we were able to achieve an impressive accuracy of 97.7% with our chosen model training settings. This high level of accuracy indicates that our model has learned to accurately classify objects within our dataset and is likely to perform well on new, unseen data.

Model Testing

To evaluate the performance of our model on new data, we will now move on to the Model Testing tab and click on the Classify All button. This action will allow us to see how well our model performs when applied to test data, giving us a sense of its generalizability and predictive power. By examining the results of this classification process, we can gain insight into the effectiveness of our model and identify any potential areas for improvement.

Accuracy - Testing.png

Based on the results of the classification process, it appears that our model performs exceptionally well when applied to the test data that was set aside during the data collection phase. This is a strong indication that our model has learned to accurately classify a wide range of objects and is likely to perform well on new, unseen data. Overall, these results suggest that our model is a robust and effective tool for object detection.

Live Classification

We have now reached the final phase of testing for our object detection model. To proceed, we will need to navigate to the Live classification tab and collect a sample from our device. Once we have acquired this sample, we can use our trained model to classify it and observe the results. This final step will give us a sense of how well our model performs in a real-world setting, allowing us to validate the effectiveness of our approach and identify any potential areas for improvement.

Live Classification.png

Since the model performed well in all of the previous testing phases, let's move on to deployment.

Deployment

In order to run the object detection model on the target device, we will need to issue the following command and select the project that contains the model we want to deploy:

edge-impulse-linux-runner

Once the model has finished downloading, we can open the URL printed in the terminal to view a live video feed and observe the model in action in a web browser. This will allow us to see the model's predictions in real time and confirm that it is functioning as expected.

Deployment.png

The next step in our process is to download a local copy of the object detection model that we have just created using the Edge Impulse platform. To do so, we will need to run the following command:

edge-impulse-linux-runner --download modelfile.eim

This will download the model to our local machine, allowing us to use it for inference without the need for an internet connection. This is particularly useful if we want to deploy the model on a device that will be operating in an environment without internet access.

Next, navigate to this URL and download the classify.py file. This file contains the code needed to run inference with our object detection model using the Edge Impulse Linux SDK for Python. Add your own modifications to the code and we're all set.
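As a rough sketch of what such a script can look like, the snippet below uses the Edge Impulse Linux SDK for Python (pip3 install edge_impulse_linux) together with OpenCV to grab one frame from the camera and run it through the downloaded modelfile.eim. The camera index and the model path are assumptions for illustration; the classify.py example does essentially the same in a continuous loop.

import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"  # the model downloaded earlier; adjust the path if needed

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()
    print("Loaded model for project:", model_info["project"]["name"])

    # Grab a single frame from the first attached camera (index 0 is an assumption)
    camera = cv2.VideoCapture(0)
    ret, frame = camera.read()
    camera.release()
    if not ret:
        raise RuntimeError("Could not read a frame from the camera")

    # The SDK expects an RGB image and resizes/crops it to the model's input size
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    features, cropped = runner.get_features_from_image(img)

    # Run inference and print every detected bounding box with its confidence score
    result = runner.classify(features)
    for bb in result["result"]["bounding_boxes"]:
        print("%s (%.2f): x=%d y=%d w=%d h=%d" % (
            bb["label"], bb["value"], bb["x"], bb["y"], bb["width"], bb["height"]))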

FinalDeployment.jpg

Code

The complete assets, including the code and model file, are available in this GitHub repository.

Download block output

Title | Type | Size
Image training data | NPY file | 32 windows
Image training labels | JSON file | 32 windows
Image testing data | NPY file | 8 windows
Image testing labels | JSON file | 8 windows
Object detection model | TensorFlow Lite (float32) | 83 KB
Object detection model | TensorFlow Lite (int8 quantized) | 56 KB
Object detection model | TensorFlow SavedModel | 189 KB
Object detection model | Keras h5 model | 90 KB


Summary

Data collected: 40 items

Project info

Project ID: 173150
Project version: 1
License: Apache 2.0