Deploy to any Linux-based development board
Edge Impulse for Linux lets you run your models on any Linux-based development board,
with SDKs for Node.js, Python, Go and C++ to integrate your models quickly into
your application.
- Install the Edge Impulse Linux CLI
- Run edge-impulse-linux-runner (run with --clean to switch projects)
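For a quick start with the Python SDK, here is a minimal sketch (assuming the edge_impulse_linux package is installed via pip; the model path and feature length are placeholders, not values from this project):

# Minimal sketch using the Edge Impulse Linux Python SDK.
from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner('modelfile.eim')  # path to the .eim file the runner downloaded
try:
    model_info = runner.init()  # load the model and read project metadata
    print('Loaded', model_info['project']['name'])
    features = [0.0] * 99  # replace with one window of real sensor data
    result = runner.classify(features)  # run inference on the feature window
    print(result['result'])
finally:
    runner.stop()  # release the model process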
Run your model as a Docker container (CPU)
To run your model as a container with an HTTP interface, use:
Container:
public.ecr.aws/g7a8t7v6/inference-container:1151a27ce83f3cdd2e06b5a51e851986385cc390
Arguments:
--api-key ei_8c9380df1783072e192c3efcfb969bb76b895a348ae7296396b6562ecd07ff44 --run-http-server 1337 --impulse-id 34
For example, run it locally in one command:
docker run --rm -it \
-p 1337:1337 \
public.ecr.aws/g7a8t7v6/inference-container:1151a27ce83f3cdd2e06b5a51e851986385cc390 \
--api-key ei_8c9380df1783072e192c3efcfb969bb76b895a348ae7296396b6562ecd07ff44 \
--run-http-server 1337 \
--impulse-id 34
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at
http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
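Once the container is running, you can send data to it over HTTP. A minimal sketch in Python (the /api/features route and the feature payload here are assumptions; the server lists its exact routes in the instructions at http://localhost:1337):

# Minimal sketch: send one window of features to the inference server.
import requests

features = [0.0] * 99  # replace with one window of real, correctly sized sensor data
resp = requests.post('http://localhost:1337/api/features',  # route assumed; check the server's instructions
                     json={'features': features},
                     timeout=10)
resp.raise_for_status()
print(resp.json())  # classification result as JSON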
Run your model as a Docker container (NVIDIA Jetson, JetPack 4.6.x)
To run your model as a container with an HTTP interface on the GPU of an NVIDIA Jetson (JetPack 4.6.x), use:
Container:
public.ecr.aws/g7a8t7v6/inference-container-jetson:d908264afd943c11925832beafb01425454bfb85
Arguments:
--api-key ei_8c9380df1783072e192c3efcfb969bb76b895a348ae7296396b6562ecd07ff44 --run-http-server 1337 --impulse-id 34
For example, run it locally in one command:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/g7a8t7v6/inference-container-jetson:d908264afd943c11925832beafb01425454bfb85 \
--api-key ei_8c9380df1783072e192c3efcfb969bb76b895a348ae7296396b6562ecd07ff44 \
--run-http-server 1337 \
--impulse-id 34
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at
http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 5.1.x)
To run your model as a container with an HTTP interface on the GPU of an NVIDIA Jetson Orin (JetPack 5.1.x), use:
Container:
public.ecr.aws/g7a8t7v6/inference-container-jetson-orin:e08d8b84da47cdee76b8631a387ceb5028cc6a68
Arguments:
--api-key ei_8c9380df1783072e192c3efcfb969bb76b895a348ae7296396b6562ecd07ff44 --run-http-server 1337 --impulse-id 34
For example, run it locally in one command:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/g7a8t7v6/inference-container-jetson-orin:e08d8b84da47cdee76b8631a387ceb5028cc6a68 \
--api-key ei_8c9380df1783072e192c3efcfb969bb76b895a348ae7296396b6562ecd07ff44 \
--run-http-server 1337 \
--impulse-id 34
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at
http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 6.0)
To run your model as a container with an HTTP interface on the GPU of an NVIDIA Jetson Orin (JetPack 6.0), use:
Container:
public.ecr.aws/g7a8t7v6/inference-container-jetson-orin-6-0:60d5955122810105e890d7a3f530cf95cc523e5e
Arguments:
--api-key ei_8c9380df1783072e192c3efcfb969bb76b895a348ae7296396b6562ecd07ff44 --run-http-server 1337 --impulse-id 34
For example, run it locally in one command:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/g7a8t7v6/inference-container-jetson-orin-6-0:60d5955122810105e890d7a3f530cf95cc523e5e \
--api-key ei_8c9380df1783072e192c3efcfb969bb76b895a348ae7296396b6562ecd07ff44 \
--run-http-server 1337 \
--impulse-id 34
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at
http://localhost:1337 with usage instructions.
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.