Configure your deployment
You can deploy your impulse to any device. This lets the model run without an internet connection, minimizes latency,
and reduces power consumption.
Deploy to any Linux-based development board
Edge Impulse for Linux lets you run your models on any Linux-based development board,
with SDKs for Node.js, Python, Go and C++ to integrate your models quickly into
your application.
- Install the Edge Impulse Linux CLI
- Run edge-impulse-linux-runner (run with --clean to switch projects)
See the documentation for more information and setup instructions.
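If you would rather call the model from your own code, the Python SDK follows the same pattern as the runner. Below is a minimal sketch, assuming you have installed the SDK (pip install edge_impulse_linux) and downloaded a model file, for example with edge-impulse-linux-runner --download modelfile.eim; the model path and the zero-valued features are placeholders, and the input_features_count field should be verified against the model_info your own model returns:

# Minimal sketch: run a downloaded .eim model with the Edge Impulse
# Linux Python SDK. Model path and feature values are placeholders.
from edge_impulse_linux.runner import ImpulseRunner

MODEL_PATH = 'modelfile.eim'  # e.g. fetched with edge-impulse-linux-runner --download

runner = ImpulseRunner(MODEL_PATH)
try:
    model_info = runner.init()  # spawns the model process and returns its metadata
    print('Loaded', model_info['project']['name'])

    # One window of raw features; size taken from the model metadata
    # (verify this field name against your own model_info).
    n = model_info['model_parameters']['input_features_count']
    features = [0.0] * n

    result = runner.classify(features)
    print(result['result'])  # classification (or bounding boxes) plus timing
finally:
    runner.stop()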
Alternatively, you can download your model below.
Run your model as a Docker container
To run your model as a container with an HTTP interface, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container:2b7d49c71b49c882ca7d066ea60148d34714e643
Arguments:
--api-key ei_a4ccb59377b1c1d8d36185597f28d2595a04e1d9fb23c3d7ee7fbde7df88453e --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container:2b7d49c71b49c882ca7d066ea60148d34714e643 \
--api-key ei_a4ccb59377b1c1d8d36185597f28d2595a04e1d9fb23c3d7ee7fbde7df88453e \
--run-http-server 1337
This automatically builds and downloads the latest model (incl. hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
Read the docs for more information,
like bundling your model inside the container and selecting extra hardware optimizations.
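Once the endpoint is up, any HTTP client can send raw features to it. Here is a minimal sketch in Python using the requests library; the /api/features route and the JSON payload shape are assumptions based on the instructions page the server serves at http://localhost:1337, so check that page for the exact routes your model exposes:

# Minimal sketch: query the inference container over HTTP.
# The /api/features route and payload shape are assumptions; the
# server's instructions page at http://localhost:1337 lists the
# actual routes for your model.
import requests

features = [0.0, 0.1, 0.2]  # placeholder: one window of raw feature values

resp = requests.post(
    'http://localhost:1337/api/features',  # assumed route, see note above
    json={'features': features},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # model output as returned by the server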
Run your model as a Docker container (NVIDIA Jetson, JetPack 4.6.x)
To run your model as a container with an HTTP interface on NVIDIA Jetson GPUs (JetPack 4.6.x), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson:c997327b412e1b63a37f3e1e42b7165f357f6364
Arguments:
--api-key ei_a4ccb59377b1c1d8d36185597f28d2595a04e1d9fb23c3d7ee7fbde7df88453e --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson:c997327b412e1b63a37f3e1e42b7165f357f6364 \
--api-key ei_a4ccb59377b1c1d8d36185597f28d2595a04e1d9fb23c3d7ee7fbde7df88453e \
--run-http-server 1337
This automatically builds and downloads the latest model (incl. hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
Read the docs for more information,
like bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 5.1.x)
To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 5.1.x), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:e2fa5589f7ac23471f029203b6596b99c4749ff9
Arguments:
--api-key ei_a4ccb59377b1c1d8d36185597f28d2595a04e1d9fb23c3d7ee7fbde7df88453e --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:e2fa5589f7ac23471f029203b6596b99c4749ff9 \
--api-key ei_a4ccb59377b1c1d8d36185597f28d2595a04e1d9fb23c3d7ee7fbde7df88453e \
--run-http-server 1337
This automatically builds and downloads the latest model (incl. hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
Read the docs for more information,
like bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 6.0)
To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 6.0), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin-6-0:83f038bff95cca7fc39d35f592811dc33aa7aa23
Arguments:
--api-key ei_a4ccb59377b1c1d8d36185597f28d2595a04e1d9fb23c3d7ee7fbde7df88453e --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin-6-0:83f038bff95cca7fc39d35f592811dc33aa7aa23 \
--api-key ei_a4ccb59377b1c1d8d36185597f28d2595a04e1d9fb23c3d7ee7fbde7df88453e \
--run-http-server 1337
This automatically builds and downloads the latest model (incl. hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
Read the docs for more information,
like bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (Qualcomm Adreno 702 GPU)
To run your model as a container with an HTTP interface on Qualcomm Adreno 702 GPUs, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-qc-adreno-702:f7c0690d8ec21050e3971fe2d852a2bf56197cec
Arguments:
--api-key ei_a4ccb59377b1c1d8d36185597f28d2595a04e1d9fb23c3d7ee7fbde7df88453e --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --device /dev/dri \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-qc-adreno-702:f7c0690d8ec21050e3971fe2d852a2bf56197cec \
--api-key ei_a4ccb59377b1c1d8d36185597f28d2595a04e1d9fb23c3d7ee7fbde7df88453e \
--run-http-server 1337
This automatically builds and downloads the latest model (incl. hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
Read the docs for more information,
like bundling your model inside the container and selecting extra hardware optimizations.
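Note that on first start each of these containers downloads and builds your model, so the HTTP endpoint can take a while to answer. The small readiness poll sketched below assumes only that the server serves its instructions page at the root URL once it is up:

# Minimal sketch: wait for the inference server before sending requests.
# Polls the instructions page the server exposes on localhost:1337.
import time
import requests

def wait_for_server(url='http://localhost:1337/', attempts=30, delay=2.0):
    for _ in range(attempts):
        try:
            if requests.get(url, timeout=2).ok:
                return True
        except requests.exceptions.ConnectionError:
            pass  # server not accepting connections yet
        time.sleep(delay)
    return False

if wait_for_server():
    print('Inference server is ready')
else:
    raise SystemExit('Server did not come up on http://localhost:1337')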