Configure your deployment
You can deploy your impulse to any device. Running the model on-device means it works without an internet connection, minimizes latency, and keeps power consumption low. Read more in the Edge Impulse documentation.
Deploy to any Linux-based development board
Edge Impulse for Linux lets you run your models on any Linux-based development board,
with SDKs for Node.js, Python, Go and C++ to integrate your models quickly into
your application.
- Install the Edge Impulse Linux CLI
- Run edge-impulse-linux-runner (run with --clean to switch projects)
See the CLI documentation for more information and setup instructions.
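For example, a minimal end-to-end flow on a Linux board might look like this (a sketch: it assumes Node.js and npm are already installed; the Linux CLI is distributed as the edge-impulse-linux npm package, and the CLI documentation has the authoritative setup steps):

# Install the Edge Impulse Linux CLI globally via npm
npm install edge-impulse-linux -g
# Download and run this project's model; the first run prompts you to log in
edge-impulse-linux-runner
# Re-run with --clean to log in again and switch to a different project
edge-impulse-linux-runner --clean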
Alternatively, you can download your model below.
Run your model as a Docker container (CPU)
To run your model as a container with an HTTP interface, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container:2c47193d290bd2fbbc5343de8d9a87b599f60332
Arguments:
--api-key ei_fc83fc3de3cd43a787327f348cb4c4ccd0b80b59f712ed16e79793fd1fcd6d2e --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container:2c47193d290bd2fbbc5343de8d9a87b599f60332 \
--api-key ei_fc83fc3de3cd43a787327f348cb4c4ccd0b80b59f712ed16e79793fd1fcd6d2e \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
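Once the container is up, you can smoke-test it from another terminal. The page served at http://localhost:1337 documents the actual request format; the route and payload below are hypothetical placeholders for illustration:

# Hypothetical route and payload; check http://localhost:1337 for the real API.
# The features array must match your impulse's expected input length.
curl -X POST http://localhost:1337/api/features \
  -H "Content-Type: application/json" \
  -d '{"features": [0, 0, 0]}'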
Run your model as a Docker container (NVIDIA Jetson, JetPack 4.6.x)
To run your model as a container with an HTTP interface on NVIDIA Jetson GPUs (JetPack 4.6.x), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson:44b632fb48202776b1560e000f20b9bf41c658e0
Arguments:
--api-key ei_fc83fc3de3cd43a787327f348cb4c4ccd0b80b59f712ed16e79793fd1fcd6d2e --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson:44b632fb48202776b1560e000f20b9bf41c658e0 \
--api-key ei_fc83fc3de3cd43a787327f348cb4c4ccd0b80b59f712ed16e79793fd1fcd6d2e \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
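The --runtime=nvidia and --gpus all flags require the NVIDIA Container Runtime, which ships with JetPack. Before starting the container, you can confirm Docker has the runtime registered (a general sanity check, not an Edge Impulse-specific step):

# Should list "nvidia" among the configured runtimes
docker info --format '{{.Runtimes}}'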
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 5.1.x)
To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 5.1.x), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:1106e56ec1415e2fe1916242397652675e91b4f7
Arguments:
--api-key ei_fc83fc3de3cd43a787327f348cb4c4ccd0b80b59f712ed16e79793fd1fcd6d2e --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:1106e56ec1415e2fe1916242397652675e91b4f7 \
--api-key ei_fc83fc3de3cd43a787327f348cb4c4ccd0b80b59f712ed16e79793fd1fcd6d2e \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
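Each Jetson container targets a specific JetPack release, so pick the image that matches the JetPack installed on your device. On most JetPack installs you can check the underlying L4T release (which maps to a JetPack version) with:

# Prints the L4T release, e.g. "# R35 (release), ..." corresponds to JetPack 5.x
cat /etc/nv_tegra_release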
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 6.0)
To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 6.0), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin-6-0:e37642601b1879da13cadda843bacb5aad376697
Arguments:
--api-key ei_fc83fc3de3cd43a787327f348cb4c4ccd0b80b59f712ed16e79793fd1fcd6d2e --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin-6-0:e37642601b1879da13cadda843bacb5aad376697 \
--api-key ei_fc83fc3de3cd43a787327f348cb4c4ccd0b80b59f712ed16e79793fd1fcd6d2e \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (Qualcomm Adreno 702 GPU)
To run your model as a container with an HTTP interface on Qualcomm Adreno 702 GPUs, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-qc-adreno-702:4d7979284677b6bdb557abe8948fa1395dc89a63
Arguments:
--api-key ei_fc83fc3de3cd43a787327f348cb4c4ccd0b80b59f712ed16e79793fd1fcd6d2e --run-http-server 1337
Ports to expose:
1337
For example, in a one-liner locally:
docker run --rm -it --device /dev/dri \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-qc-adreno-702:4d7979284677b6bdb557abe8948fa1395dc89a63 \
--api-key ei_fc83fc3de3cd43a787327f348cb4c4ccd0b80b59f712ed16e79793fd1fcd6d2e \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
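The --device /dev/dri flag passes the host's Direct Rendering Infrastructure nodes (which expose the Adreno GPU) into the container. You can verify they exist on the host before launching:

# Typically shows card0 and a renderD* node when the GPU driver is loaded
ls -l /dev/dri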