Configure your deployment
You can deploy your impulse to any device. Running the model on-device means it works without an internet connection, minimizes latency, and consumes minimal power.
Deploy to any Linux-based development board
Edge Impulse for Linux lets you run your models on any Linux-based development board,
with SDKs for Node.js, Python, Go and C++ to integrate your models quickly into
your application.
- Install the Edge Impulse Linux CLI
- Run edge-impulse-linux-runner (run with --clean to switch projects)
See the CLI documentation for more information and setup instructions.
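If you prefer to call the model from code instead of the runner's interface, the Python SDK can load the model file (an .eim) that edge-impulse-linux-runner downloads. A minimal sketch, assuming the SDK is installed (pip3 install edge_impulse_linux); the model path and feature array are placeholders you must adapt:

from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner('/home/user/model.eim')  # placeholder path to your .eim file
try:
    info = runner.init()  # loads the model and returns project metadata
    print('Loaded model for project:', info['project']['name'])
    features = [0.0] * 100  # placeholder; length must match your impulse's input
    result = runner.classify(features)
    print(result['result'])
finally:
    runner.stop()  # always shut down the runner process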
Alternatively, you can download your model in one of the formats below.
Run your model as a Docker container
To run your model as a container with an HTTP interface, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container:53f01aef9aea14f0350a73bfaf198ceccfe19647
Arguments:
--api-key ei_cc877c4e9d0fb58039a7483ab129badb7ed418454ba2290dcb35ae988549ac70 --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container:53f01aef9aea14f0350a73bfaf198ceccfe19647 \
--api-key ei_cc877c4e9d0fb58039a7483ab129badb7ed418454ba2290dcb35ae988549ac70 \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
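Once the container is up, you can query it from any HTTP client. A minimal Python sketch, assuming a model that takes raw features and an /api/features route; verify both the route and the payload shape against the instructions the endpoint itself serves at http://localhost:1337:

import requests  # pip3 install requests

payload = {'features': [0.0] * 100}  # placeholder; length must match your impulse's input

# Assumed route; check the instructions at http://localhost:1337 for the exact API.
resp = requests.post('http://localhost:1337/api/features', json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())  # inference result as JSON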
Run your model as a Docker container (NVIDIA Jetson, JetPack 4.6.x)
To run your model as a container with an HTTP interface on NVIDIA Jetson GPUs (JetPack 4.6.x), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson:b2e2a2324624ed3ea267327a4b74a2101b2f6e72
Arguments:
--api-key ei_cc877c4e9d0fb58039a7483ab129badb7ed418454ba2290dcb35ae988549ac70 --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson:b2e2a2324624ed3ea267327a4b74a2101b2f6e72 \
--api-key ei_cc877c4e9d0fb58039a7483ab129badb7ed418454ba2290dcb35ae988549ac70 \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 5.1.x)
To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 5.1.x), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:2e86fb872396178dfe6fc539cf4ffd4ae9cef4b5
Arguments:
--api-key ei_cc877c4e9d0fb58039a7483ab129badb7ed418454ba2290dcb35ae988549ac70 --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:2e86fb872396178dfe6fc539cf4ffd4ae9cef4b5 \
--api-key ei_cc877c4e9d0fb58039a7483ab129badb7ed418454ba2290dcb35ae988549ac70 \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 6.0)
To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 6.0), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin-6-0:a5bb74ee51d5067410d5a6de139c68d78039a0da
Arguments:
--api-key ei_cc877c4e9d0fb58039a7483ab129badb7ed418454ba2290dcb35ae988549ac70 --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin-6-0:a5bb74ee51d5067410d5a6de139c68d78039a0da \
--api-key ei_cc877c4e9d0fb58039a7483ab129badb7ed418454ba2290dcb35ae988549ac70 \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (Qualcomm Adreno 702)
To run your model as a container with an HTTP interface on Qualcomm Adreno 702 GPUs, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-qc-adreno-702:5602e0db2ad09ac92a94ee80249a9eb13caac6c7
Arguments:
--api-key ei_cc877c4e9d0fb58039a7483ab129badb7ed418454ba2290dcb35ae988549ac70 --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --device /dev/dri \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-qc-adreno-702:5602e0db2ad09ac92a94ee80249a9eb13caac6c7 \
--api-key ei_cc877c4e9d0fb58039a7483ab129badb7ed418454ba2290dcb35ae988549ac70 \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
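The four GPU variants above differ only in the image tag and the device flags passed to docker run. If you deploy to more than one target, a small launcher can keep those differences in one place. A sketch with image tags and flags copied verbatim from the sections above; the target names are labels invented here for illustration, not Edge Impulse identifiers:

import subprocess

API_KEY = 'ei_cc877c4e9d0fb58039a7483ab129badb7ed418454ba2290dcb35ae988549ac70'

# (image, extra docker flags) per target, copied from the sections above.
TARGETS = {
    'jetson-jp4.6': (
        'public.ecr.aws/z9b3d4t5/inference-container-jetson:b2e2a2324624ed3ea267327a4b74a2101b2f6e72',
        ['--runtime=nvidia', '--gpus', 'all']),
    'jetson-orin-jp5.1': (
        'public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:2e86fb872396178dfe6fc539cf4ffd4ae9cef4b5',
        ['--runtime=nvidia', '--gpus', 'all']),
    'jetson-orin-jp6.0': (
        'public.ecr.aws/z9b3d4t5/inference-container-jetson-orin-6-0:a5bb74ee51d5067410d5a6de139c68d78039a0da',
        ['--runtime=nvidia', '--gpus', 'all']),
    'adreno-702': (
        'public.ecr.aws/z9b3d4t5/inference-container-qc-adreno-702:5602e0db2ad09ac92a94ee80249a9eb13caac6c7',
        ['--device', '/dev/dri']),
}

def launch(target):
    image, flags = TARGETS[target]
    subprocess.run(
        ['docker', 'run', '--rm', '-it', *flags, '-p', '1337:1337',
         image, '--api-key', API_KEY, '--run-http-server', '1337'],
        check=True)

launch('jetson-orin-jp6.0')  # pick the variant matching your board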