Shebin Jose Jacob / AI Cart Public

About this project


AI Cart is a smart shopping cart that eliminates the need to stand in checkout lines; users can check out at their own carts without waiting for anyone. This cart is designed to promote the growing trend of the Just Walk Out cashier-less checkout system, which intends to remove checkouts from supermarkets and provide a personalized shopping experience for everyone using the endless possibilities of AI.


The AI cart uses a combination of object detection algorithms to verify each item placed in the cart. The cart's screen shows a real-time receipt of all the items in the cart, and when shoppers are ready to check out, they simply press the checkout button. A QR code for payment is generated, and users pay the bill by scanning it. No queues, no waiting time: skip the line and check out easily.
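The running-receipt idea can be sketched in a few lines of Python; the product names and prices below are made-up placeholders, not the project's actual catalogue:

```python
# Sketch of the cart's running receipt. The product names and prices
# here are made-up placeholders, not the project's actual catalogue.
PRICES = {"popcorn": 2.50, "lays": 1.75, "coke": 1.20}

def receipt(items):
    """Return (lines, total) for the items currently in the cart."""
    lines = []
    total = 0.0
    for name in items:
        price = PRICES[name]
        lines.append(f"{name:<10} ${price:.2f}")
        total += price
    return lines, round(total, 2)

lines, total = receipt(["popcorn", "coke", "coke"])
print("\n".join(lines))
print(f"{'TOTAL':<10} ${total:.2f}")
```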


  • AI-based system
  • Instant checkout system
  • Contact-free checkout
  • Can provide personalised offers

Physical Setup


Workflow

Let's have a look at the logical flow.


Object detection


Edge Impulse is one of the leading development platforms for machine learning on edge devices, free for developers and trusted by enterprises. Here we are using object detection to build a system that can recognize the products. Then we deploy the system on the Raspberry Pi 4B.

Data acquisition

It is crucial to have a large number of product images when creating the machine learning model. These images are used to train the model and allow it to differentiate between the products. Make sure you capture the items sold in the store from a wide range of perspectives and zoom levels. For data acquisition, you can capture data from any device or development board, or upload your own datasets; here we're uploading our existing datasets.

Data Acquisition.png

Simply navigate to the Data acquisition tab and select a file to upload. Give it a label and upload it to the training set. Edge Impulse only accepts JPG and PNG image files; if your images are in another format, convert them with an online converter first.

We uploaded all the data under three separate labels: Popcorn, Lays, and Coke, so the system will only recognize these items at checkout. If you want to recognize additional objects, you must upload datasets for them as well. We added about 50 pictures per object; the more data a neural network has access to, the better it becomes at recognizing the objects.

Labelling Data

You can view all of your dataset's unlabeled data in the labeling queue. Adding a label to an object is as simple as dragging a box around it. To make life a little easier, Edge Impulse tries to automate this by running an object tracking algorithm in the background: if the same object appears in multiple photos, it moves the boxes for you, and you only need to confirm the new box. Drag the boxes, click Save labels, and continue until the entire dataset is labeled.


Designing an Impulse

Create Impulse.png

With the training set in place, you can design an impulse. An impulse takes the raw data, adjusts the image size, uses a preprocessing block to manipulate the image, and then uses a learning block to classify new data. Preprocessing blocks always return the same values for the same input (e.g. convert a color image into a grayscale one), while learning blocks learn from past experiences.

For this system, we'll use the 'Images' preprocessing block. This block takes in the color image, optionally makes the image grayscale, and then turns the data into a features array. Then we'll use a 'Transfer Learning' learning block, which takes all the images in and learns to distinguish between the three classes ('popcorn', 'lays', 'coke').

In the studio go to Create impulse, set the image width and image height to 320px, the 'resize mode' to Fit the shortest axis, and add the 'Images' and 'Object Detection (Images)' blocks. Then click Save impulse.
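'Fit the shortest axis' scales the image so its shorter side reaches the target size, then crops the excess from the longer side. A quick sketch of that arithmetic (our reading of the resize mode, not Edge Impulse's exact implementation):

```python
def fit_shortest_axis(w, h, target=320):
    """Scale the image so its shortest side equals `target`,
    then centre-crop the longer side down to `target` as well.
    (Illustrative arithmetic, not Edge Impulse's exact code.)"""
    scale = target / min(w, h)
    new_w, new_h = round(w * scale), round(h * scale)
    crop_x = (new_w - target) // 2
    crop_y = (new_h - target) // 2
    return (new_w, new_h), (crop_x, crop_y)

# A 640x480 photo is first scaled to 427x320, then ~53px is trimmed
# from each horizontal edge, leaving a 320x320 model input.
print(fit_shortest_axis(640, 480))
```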

Then in the image tab, you can see the raw and processed features of every image. You can use the options to switch between 'RGB' and 'Grayscale' mode, but for now, leave the color depth on 'RGB' and click Save parameters.

This will send you to the Feature generation screen. Here you'll:

  • Resize all the data.
  • Apply the processing block to all this data.
  • Create a visualization of your complete dataset.

Click Generate features to start the process.

Afterward the 'Feature explorer' will load. This is a plot of all the data in your dataset. Because images have a lot of dimensions (here: 320x320x3 = 307,200 features) we run a process called 'dimensionality reduction' on the dataset before visualizing it. Here the 307,200 features are compressed down to just 3 and then clustered based on similarity. Even though we have little data, you can already see clusters forming, and you can click on the dots to see which image belongs to which dot.
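A minimal PCA-via-SVD sketch gives the intuition behind this compression; the studio's actual reduction algorithm may differ, and the random data here just stands in for the image features:

```python
import numpy as np

# Stand-in for the dataset: 150 "images", 300 features each.
# (Random data for illustration; the real features come from the images.)
rng = np.random.default_rng(0)
features = rng.normal(size=(150, 300))

# PCA via SVD: centre the data, find the principal directions,
# and keep only the top 3 components.
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:3].T

print(embedding.shape)  # (150, 3) -- one 3-D dot per image
```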

Feature Generation.png

With all data processed it's time to start training a neural network. Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. The network that we're training here will take the image data as an input and try to map it to one of the three classes.

It's very hard to build a good working computer vision model from scratch, as you need a wide variety of input data to make the model generalize well, and training such models can take days on a GPU. To make this easier and faster we are using transfer learning. This lets you piggyback on a well-trained model, only retraining the upper layers of a neural network, leading to much more reliable models that train in a fraction of the time and work with substantially smaller datasets.

To configure the transfer learning model, click Object detection in the menu on the left. Here you can select the base model (the one selected by default will work, but you can change this based on your size requirements), and set the rate at which the network learns.

Leave all settings as they are, and click Start training. When training is done you'll see accuracy numbers below the training output. We have now trained our model.


With the model trained let's try it out on some test data. When collecting the data we split the data up between training and a testing dataset. The model was trained only on the training data, and thus we can use the data in the testing dataset to validate how well the model will work in the real world. This will help us ensure the model has not learned to overfit the training data, which is a common occurrence.

To validate your model, go to Model testing and select Classify all. Here we hit 85.71% accuracy, which is great for a model with so little data.

To see a classification in detail, click the three dots next to an item and select Show classification. This brings you to the Live classification screen with much more detail on the file (you can also capture new data directly from your development board from here). This screen can help you determine why items were misclassified.

With the impulse designed, trained, and verified you can deploy this model back to your device. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package up the complete impulse - including the preprocessing steps, neural network weights, and classification code - in a single C++ library or model file that you can include in your embedded software.


The Raspberry Pi 4B is a powerful iteration of the extremely successful credit-card-sized computer. It is the brain of the device; all major processing is carried out here.


If you don't know how to set up the Pi, just go here. To set this device up in Edge Impulse, run the following commands:

curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm

If you have a Raspberry Pi Camera Module, you also need to activate it first. Run the following command:

sudo raspi-config

Use the cursor keys to select and open Interfacing Options, then select Camera and follow the prompts to enable the camera. Reboot the Raspberry Pi. With all software set up, connect your camera to the Raspberry Pi and run:

edge-impulse-linux


This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean. That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.


To run your impulse locally, connect to your Raspberry Pi again and run:

edge-impulse-linux-runner


This will automatically compile your model with full hardware acceleration, download the model to your Raspberry Pi, and then start classifying.

Here we use the Linux Python SDK to integrate the model with the system. Working with the Python SDK requires a recent version of Python (>= 3.7). To install the SDK on the Raspberry Pi, run the following commands:

sudo apt-get install libatlas-base-dev libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev
pip3 install edge_impulse_linux -i https://pypi.python.org/simple/

To classify data, you'll need a model file; we have already trained ours. The model file contains all the signal processing code, classical ML algorithms, and neural networks, and typically contains hardware optimizations so it runs as fast as possible. To download the model file, run:

edge-impulse-linux-runner --download modelfile.eim

This downloads the file into ~/modelfile.eim.
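A minimal sketch of classifying camera frames with the downloaded model, following the Linux Python SDK's `ImageImpulseRunner` pattern; the `items_in_frame` helper and the 0.5 confidence threshold are our own illustrative additions, not part of the SDK:

```python
def items_in_frame(bounding_boxes, threshold=0.5):
    """Reduce raw detections to the set of labels to add to the cart.
    (This helper and the 0.5 threshold are our own illustrative choices.)"""
    return sorted({b["label"] for b in bounding_boxes if b["value"] >= threshold})

def main():
    # Run this on the Raspberry Pi, in the directory holding modelfile.eim.
    import cv2
    from edge_impulse_linux.image import ImageImpulseRunner

    with ImageImpulseRunner("modelfile.eim") as runner:
        runner.init()
        cap = cv2.VideoCapture(0)            # first attached camera
        ok, frame = cap.read()
        if ok:
            img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            features, _ = runner.get_features_from_image(img)
            res = runner.classify(features)
            boxes = res["result"].get("bounding_boxes", [])
            print("In cart:", items_in_frame(boxes))
        cap.release()

# On the device: call main() in a loop to keep the receipt up to date.
```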

Camera Module

This Raspberry Pi Camera Module is a custom-designed add-on for the Raspberry Pi. It attaches to the Pi by way of one of the two small sockets on the board's upper surface, using the dedicated CSI interface, which was designed especially for interfacing with cameras. The CSI bus is capable of extremely high data rates and exclusively carries pixel data.


The board itself is tiny, at around 25mm x 23mm x 8mm, and weighs just over 3g, making it perfect for mobile or other applications where size and weight are important. It connects to the Pi's processor over the CSI bus, a high-bandwidth link that carries pixel data from the camera back to the processor, travelling along the short flexible ribbon cable that attaches the camera board to the Pi.

The sensor itself has a native resolution of 5 megapixels and has a fixed focus lens onboard. In terms of still images, the camera is capable of 2592 x 1944 pixel static images, and also supports 1080p30, 720p60 and 640x480p60/90 video.

Power Unit

To make the cart an independent mobile system, we use a power bank to power the Raspberry Pi.


Checkout Interface

The checkout interface has two parts:

  1. Front-end developed using HTML, JS
  2. Backend API developed using NodeJS and Express

1. Front-end developed using HTML, JS

The front-end continuously polls the back-end API for changes and displays them to the user. Once an item is added via the API, the front-end shows it as an item added to the cart.
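The actual front-end is JavaScript, but the poll-and-diff logic is easy to sketch (shown here in Python for consistency with the other examples; the snapshot format is an assumption):

```python
from collections import Counter

def newly_added(previous, current):
    """Items present in the current API snapshot but not the previous one,
    respecting duplicates (two cokes vs. one coke = one new coke).
    The list-of-names snapshot format is an assumption for illustration."""
    return sorted((Counter(current) - Counter(previous)).elements())

# Between two polls, one more coke appeared in the cart:
print(newly_added(["popcorn", "coke"], ["popcorn", "coke", "coke"]))  # ['coke']
```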



2. Backend API developed using NodeJS and Express

The backend REST API is developed using NodeJS and Express. ExpressJS is one of the most popular HTTP server libraries for Node.js, and ships with very basic functionality. The backend API keeps the details of the products that are visually identified. For our interface we used a small tablet with a touch screen, mounted on a small stand.
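The real backend is the Express app in the repository; to illustrate the same idea, here is a minimal in-memory cart API using only the Python standard library (the endpoint behaviour and payload shape are assumptions, not the actual API):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory stand-in for the cart state the Express backend keeps.
CART = []

class CartHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond with the current cart contents as JSON.
        body = json.dumps(CART).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # A {"name": ...} body adds one item to the cart.
        length = int(self.headers.get("Content-Length", 0))
        item = json.loads(self.rfile.read(length))
        CART.append(item["name"])
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

def serve(port=8000):
    # Call serve() to run the sketch locally.
    HTTPServer(("127.0.0.1", port), CartHandler).serve_forever()
```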

Deploy API on Heroku

To deploy the API on Heroku, you must have Git and the Heroku CLI installed.

Once the prerequisites are installed, you can deploy your app to Heroku.

  • Download the code from the Github Repository.
  • Navigate to the ~/CheckoutUI/server directory
  • Then run the following commands:
    git init
    git add .
    git commit -m "My first commit"
    heroku create -a nameofyourapi
    git push heroku master

This will result in the creation of an API with a URL of the form https://nameofyourapi.herokuapp.com.


To verify your API is working, open its URL in a browser.

Integrating API in the Code

In Python Code

  • Replace the "URL" in the Python code with your API URL.

In JS Code


The entire code and related assets can be found here.

Working Demo

Click to Play

What's in the Future?

  • Generate digital receipts
  • Keep transaction history in a connected account
  • Provide personalised offers and recommendations


Download block output

Title Type Size
Image training data NPY file 123 windows
Image training labels JSON file 123 windows
Image testing data NPY file 28 windows
Image testing labels JSON file 28 windows
Object detection model TensorFlow Lite (float32) 11 MB
Object detection model TensorFlow Lite (int8 quantized) 4 MB
Object detection model TensorFlow SavedModel 10 MB



Data collected: 151 items

Project info

Project ID: 126708
Project version: 2
License: No license attached