Configure your deployment
You can deploy your impulse to any device. This lets the model run without an internet connection, minimizes latency, and keeps power consumption low.
Deploy to any Linux-based development board
Edge Impulse for Linux lets you run your models on any Linux-based development board,
with SDKs for Node.js, Python, Go and C++ to integrate your models quickly into
your application.
- Install the Edge Impulse Linux CLI
- Run edge-impulse-linux-runner (run with --clean to switch projects)
See the documentation for more information and setup instructions.
Alternatively, you can download your model below.
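If you'd rather call the model from code, the Python SDK can load a model file downloaded with edge-impulse-linux-runner --download modelfile.eim. Below is a minimal sketch, assuming the edge_impulse_linux package is installed (pip3 install edge_impulse_linux); the feature array is a placeholder and must match your impulse's input size:

# classify.py - minimal sketch using the Edge Impulse Linux Python SDK
from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner('modelfile.eim')
try:
    # init() loads the model and returns project/model metadata
    model_info = runner.init()
    print('Loaded', model_info['project']['name'])

    # Placeholder input: replace with real sensor data sized to your impulse
    features = [0.0] * 33
    result = runner.classify(features)
    print(result['result'])
finally:
    runner.stop()

The Node.js, Go and C++ SDKs follow the same pattern; see the Edge Impulse for Linux documentation for per-language examples.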
Run your model as a Docker container
To run your model as a container with an HTTP interface, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container:ede3d1841fee930674d1b13597542067f47f3e9c
Arguments:
--api-key ei_613533887bb17ece5682ab54df23d40f8fb58a4f18a6a59ea8e67c9d35b341d0 --run-http-server 1337
Ports to expose:
1337
For example, to run it locally as a one-liner:
docker run --rm -it \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container:ede3d1841fee930674d1b13597542067f47f3e9c \
--api-key ei_613533887bb17ece5682ab54df23d40f8fb58a4f18a6a59ea8e67c9d35b341d0 \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
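Once the container is running, any HTTP client can send features to it. The sketch below uses Python's requests library; the /api/features route and the JSON payload shape are assumptions about the runner's HTTP interface, so confirm the exact routes on the instructions page served at http://localhost:1337:

# query.py - minimal sketch that POSTs raw features to the local inference container
# Route and payload shape are assumptions; see http://localhost:1337 for the actual API
import requests

# Placeholder input: length must match your impulse's input frame size
features = [0.0] * 33

resp = requests.post('http://localhost:1337/api/features',
                     json={'features': features},
                     timeout=10)
resp.raise_for_status()
print(resp.json())  # inference result and timing

The same pattern applies to the GPU-accelerated containers below; only the image and the docker run flags differ.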
Run your model as a Docker container (NVIDIA Jetson, JetPack 4.6.x)
To run your model as a container with an HTTP interface on NVIDIA Jetson GPUs (JetPack 4.6.x), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson:4b514519d663112a80fc6fcd6141878b5c7062a5
Arguments:
--api-key ei_613533887bb17ece5682ab54df23d40f8fb58a4f18a6a59ea8e67c9d35b341d0 --run-http-server 1337
Ports to expose:
1337
For example, to run it locally as a one-liner:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson:4b514519d663112a80fc6fcd6141878b5c7062a5 \
--api-key ei_613533887bb17ece5682ab54df23d40f8fb58a4f18a6a59ea8e67c9d35b341d0 \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 5.1.x)
To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 5.1.x), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:eb37c67d053716ae154bf320c9ed9953ac5cffbe
Arguments:
--api-key ei_613533887bb17ece5682ab54df23d40f8fb58a4f18a6a59ea8e67c9d35b341d0 --run-http-server 1337
Ports to expose:
1337
For example, to run it locally as a one-liner:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:eb37c67d053716ae154bf320c9ed9953ac5cffbe \
--api-key ei_613533887bb17ece5682ab54df23d40f8fb58a4f18a6a59ea8e67c9d35b341d0 \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 6.0)
To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 6.0), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin-6-0:861f6d4a43191fe1d0caa3ac4095a3e32a14b854
Arguments:
--api-key ei_613533887bb17ece5682ab54df23d40f8fb58a4f18a6a59ea8e67c9d35b341d0 --run-http-server 1337
Ports to expose:
1337
For example, to run it locally as a one-liner:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin-6-0:861f6d4a43191fe1d0caa3ac4095a3e32a14b854 \
--api-key ei_613533887bb17ece5682ab54df23d40f8fb58a4f18a6a59ea8e67c9d35b341d0 \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (Qualcomm Adreno 702 GPU)
To run your model as a container with an HTTP interface on Qualcomm Adreno 702 GPUs, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-qc-adreno-702:d55e31f3d3c4bcc7ac22910faacc08bbb3d76069
Arguments:
--api-key ei_613533887bb17ece5682ab54df23d40f8fb58a4f18a6a59ea8e67c9d35b341d0 --run-http-server 1337
Ports to expose:
1337
For example, to run it locally as a one-liner:
docker run --rm -it --device /dev/dri \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-qc-adreno-702:d55e31f3d3c4bcc7ac22910faacc08bbb3d76069 \
--api-key ei_613533887bb17ece5682ab54df23d40f8fb58a4f18a6a59ea8e67c9d35b341d0 \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.