Nekhil R / Fire Detection with sensor fusion
About this project
In order to make the right decision in a fire situation, a fire detection system needs to be both accurate and fast. Since most commercial fire detection systems use simple sensors, their fire recognition accuracy is limited by the sensors' detection capabilities. Existing devices that use rule-based algorithms or image-based machine learning can hardly adapt to changes in the environment because of their static features.
In this project, we will develop a device that can detect fire by means of sensor fusion and machine learning. We will collect data from sensors such as temperature, humidity, and pressure in various fire situations, and extract features to build a machine learning model that detects actual fire events.
To make this project a reality, we are using an Arduino Nano 33 BLE Sense with Edge Impulse. There are two ways to collect data: through the Edge Impulse CLI or through a web browser.
To use the Edge Impulse CLI, follow the steps in this tutorial.
Collecting data from the web browser is simple and straightforward. To do this, connect the device to your computer and open Edge Impulse Studio. Press the "Connect using WebUSB" button and select your development board. The limitation of the web serial integration is that it only works with fully supported development boards.
The data collection settings are shown below. Temperature, humidity, and pressure are good choices for the environmental sensors, as they are the parameters that change the most during a fire event. A sampling rate of 12.5 Hz is also appropriate, since these parameters are slow-moving.
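To see what the sampling rate means for the model's input, here is a small sketch of how the sampling rate and window length determine how many raw values the DSP block receives per window. The 2000 ms window length is an assumed example for illustration, not a setting taken from this project.

```python
# Illustrative sketch (not Edge Impulse code): how sampling rate and
# window length determine the number of raw values per window.
# The 2000 ms window length below is an assumption, not the project's setting.

def raw_features_per_window(sample_rate_hz, window_ms, n_axes):
    """Return (samples per axis, total raw values) for one window."""
    samples = int(sample_rate_hz * window_ms / 1000)
    return samples, samples * n_axes

# 12.5 Hz over a 2 s window, with three axes (temperature, humidity, pressure)
samples, total = raw_features_per_window(12.5, 2000, 3)
print(samples, total)
```

At this slow rate even a multi-second window stays tiny, which is why low sampling rates are a good fit for slow-moving environmental signals.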
We have only two classes in this project: No Fire and Fire. For the No Fire case, we collected data at different points in the room. For capturing the Fire data, we built a campfire-like setup in our backyard. To make our model robust, we collected data at different points in that area. In total, about 13 minutes of data were collected across the two labels and split between training and testing sets. Edge Impulse Studio has a tool called the Data Explorer, which gives you a one-look overview of your complete dataset; it is especially useful for object detection and audio projects.
This is our machine learning pipeline, known as an Impulse.
For the processing block we used Spectral Analysis, and for the learning block we used Classification. Flatten and Raw Data are the other options available as processing blocks. Each processing block has its own features and uses; if you want to dive into that, please read the tutorial here. These are our Spectral Analysis parameters for Filter and Spectral power. We didn't use any filter on the raw data.
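To give a feel for what a spectral analysis block does, here is a heavily simplified, pure-Python stand-in: it removes the mean from a window, then reports the RMS plus a naive DFT power spectrum. This is only a sketch of the general technique; Edge Impulse's actual block computes a richer feature set with configurable parameters.

```python
import math

def spectral_features(signal):
    """Simplified stand-in for a spectral analysis DSP block:
    mean-subtract the window, then return [RMS, power at each DFT bin].
    Not Edge Impulse's implementation -- an illustration of the idea."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    rms = math.sqrt(sum(s * s for s in centered) / n)
    # Naive DFT power spectrum up to the Nyquist bin
    powers = []
    for k in range(1, n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(centered))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(centered))
        powers.append((re * re + im * im) / n)
    return [rms] + powers

# One hypothetical 8-sample temperature window (degrees C)
feats = spectral_features([22.0, 22.1, 22.4, 22.3, 22.0, 21.9, 22.2, 22.3])
```

In the real pipeline this runs once per axis (temperature, humidity, pressure), and the per-axis feature vectors are concatenated before reaching the classifier, which is what makes this sensor fusion.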
The image below shows the features generated from the collected data; the two classes are well separated. Notice that the Fire event actually forms three clusters, which shows that the parameters change differently at different points.
After successfully extracting the features in the DSP block, it's time to train the machine learning model.
These are our neural network settings and architecture, which work very well for our data.
Please read this guide to learn how these parameters affect your model. After training, we hit 98% validation accuracy, which is a good result.
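For readers unfamiliar with what such a classifier computes at inference time, here is a minimal dense-network forward pass in plain Python: two small hidden layers with ReLU, then a softmax over the two classes. The layer sizes and weights below are made-up toy values for illustration, not the trained model.

```python
import math

# Toy forward pass of a small dense classifier (two hidden layers + softmax).
# All weights and sizes are invented for illustration -- not the trained model.

def dense(x, weights, biases, act):
    out = [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(weights, biases)]
    return [act(v) for v in out]

def relu(v):
    return max(0.0, v)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features):
    h1 = dense(features, [[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1], relu)
    h2 = dense(h1, [[0.3, 0.6], [-0.4, 0.2]], [0.0, 0.0], relu)
    logits = [sum(w * h for w, h in zip(row, h2)) for row in [[1.0, -1.0], [-1.0, 1.0]]]
    return softmax(logits)  # [P(No Fire), P(Fire)]

probs = classify([0.8, 0.3])
```

The softmax output is what the Live Classification tab displays as per-class probabilities, and what the deployed firmware compares against a threshold to decide whether to raise an alert.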
The confusion matrix is a good tool for evaluating the model; as you can see below, 2.1% of the Fire windows are misclassified as No Fire.
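A confusion matrix is simple to compute yourself from labels and predictions; here is a short sketch with a hypothetical five-window example (the counts below are illustrative, not the project's actual results).

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows are actual labels, columns are predicted labels."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

# Hypothetical example: one Fire window misclassified as No Fire
y_true = ["Fire", "Fire", "Fire", "No Fire", "No Fire"]
y_pred = ["Fire", "Fire", "No Fire", "No Fire", "No Fire"]
cm = confusion_matrix(y_true, y_pred, ["Fire", "No Fire"])
```

The off-diagonal cells are the misclassifications; the 2.1% figure in the Studio is exactly such an off-diagonal cell expressed as a percentage of its row.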
By checking the feature explorer, we can easily identify the misclassified samples. It also shows the time at which the incorrect classification happened. Here is one example.
This machine learning model is good enough for our project. Let's see how it performs on unseen data.
The confusion matrix and feature explorer show that the model also performs very well on unseen data.
Now let's test the model with some real-world data. For that, we move to the Live Classification tab and connect our Arduino using WebUSB.
The sample above was recorded when there was no fire, and the sample below was recorded during a fire.
Real-world data for both the No Fire and Fire events are classified well, so our model is ready for deployment.
To send push notifications to the user, we used the IFTTT service. Please refer to this tutorial to build your own version.
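IFTTT's Webhooks service is triggered by requesting a URL of the form `https://maker.ifttt.com/trigger/<event>/with/key/<key>`. Here is a small sketch that builds such a URL; the event name `fire_detected` and the key placeholder are assumptions for illustration, not values from this project.

```python
# Sketch of triggering an IFTTT Webhooks event. The event name and key
# below are placeholders; substitute your own from the IFTTT Webhooks page.
from urllib.parse import quote

def ifttt_url(event, key):
    """Build the IFTTT Maker Webhooks trigger URL for an event."""
    return f"https://maker.ifttt.com/trigger/{quote(event)}/with/key/{key}"

url = ifttt_url("fire_detected", "YOUR_IFTTT_KEY")
# A connected device would then issue an HTTP request to this URL,
# e.g. urllib.request.urlopen(url), to fire the notification applet.
```

On this project's hardware, the request itself is made by the WiFi-capable ESP-01 rather than the Nano 33 BLE Sense.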
The hardware unit consists of an Arduino Nano 33 BLE Sense, a power adapter, and an ESP-01. The BLE Sense is placed in a 3D-printed case, with the ESP-01 mounted on the outside.
The ESP-01 module provides WiFi functionality for the Arduino Nano 33 BLE Sense; it is used to send email alerts via serial communication between the Arduino and the ESP-01. To establish this communication, we first uploaded the necessary code to both the ESP-01 and the Arduino, which can be found in the GitHub repository. Afterwards, we connected the components according to the provided schematic.
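The serial link between the two boards just needs a simple agreed-upon message format. The sketch below illustrates one hypothetical framing, a `LABEL:probability` line per classification; the actual protocol used in this project lives in its GitHub repository.

```python
# Hypothetical serial message framing between the Nano 33 BLE Sense and
# the ESP-01. The "FIRE:<prob>" line format and the 0.8 threshold are
# assumptions for illustration; see the project's repository for the real code.

def encode_alert(label, probability):
    """Nano side: serialize one classification result as a newline-terminated line."""
    return f"{label}:{probability:.2f}\n"

def parse_alert(line, threshold=0.8):
    """ESP-01 side: decide whether this line should trigger an email alert."""
    label, prob = line.strip().split(":")
    return label == "FIRE" and float(prob) >= threshold

msg = encode_alert("FIRE", 0.93)
should_alert = parse_alert(msg)
```

Keeping the framing line-based makes the ESP-01 side trivial to parse with a standard read-until-newline loop, and the threshold suppresses alerts on low-confidence classifications.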
This is the hardware setup that we made for this project.
We deployed our model to the Arduino Nano 33 BLE Sense as an Arduino library.
All the assets for this project are available in this GitHub repository.
Download block output
| Output | Format | Size |
| --- | --- | --- |
| Spectral features training data | NPY file | 420 windows |
| Spectral features training labels | NPY file | 420 windows |
| Spectral features testing data | NPY file | 84 windows |
| Spectral features testing labels | NPY file | 84 windows |
| Classifier model | TensorFlow Lite (float32) | 5 KB |
| Classifier model | TensorFlow Lite (int8 quantized) | 3 KB |
| Classifier model | TensorFlow SavedModel | 11 KB |
| Classifier model | Keras h5 model | 5 KB |
Data collected: 13m 50s