Deploy to any Linux-based development board
Edge Impulse for Linux lets you run your models on any Linux-based development board, with SDKs for Node.js, Python, Go and C++ to integrate your models quickly into your application.
- Install the Edge Impulse Linux CLI
- Run edge-impulse-linux-runner (run with --clean to switch projects); see the sketch below
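For reference, here is a minimal sketch of those two steps. It assumes Node.js and npm are already installed, and that the CLI's npm package name and flags are unchanged from the time of writing; consult the CLI documentation if anything differs.

# Install the Edge Impulse Linux CLI globally (may require sudo)
npm install edge-impulse-linux -g
# Log in and run your model; add --clean to switch to another project
edge-impulse-linux-runner --clean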
See the CLI documentation for more information and setup instructions.
Alternatively, you can deploy your model using one of the options below.
Run your model as a Docker container (CPU)
To run your model as a container with an HTTP interface, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container:8a2db5c5b5c24aced69870722e3c3c26a3784f35
Arguments:
--api-key ei_fe0d2a6242ff141d8415a379eb84b81372f19201d1d2600213a483dbb615de78 --run-http-server 1337
For example, in a one-liner locally:
docker run --rm -it \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container:8a2db5c5b5c24aced69870722e3c3c26a3784f35 \
--api-key ei_fe0d2a6242ff141d8415a379eb84b81372f19201d1d2600213a483dbb615de78 \
--run-http-server 1337
This automatically builds and downloads the latest version of your model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
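Once the server is up, you can send it a quick test request. The endpoint path and JSON body below are assumptions for a model that takes raw features; the actual API is described on the instructions page served at http://localhost:1337.

# Hypothetical smoke test: POST raw features to the inference server
# (verify the real path and payload on the instructions page first)
curl -X POST http://localhost:1337/api/features \
  -H "Content-Type: application/json" \
  -d '{"features": [0, 0, 0]}'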
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson, JetPack 4.6.x)
To run your model as a container with an HTTP interface on NVIDIA Jetson GPUs (JetPack 4.6.x), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson:c63dad7691efea92bb5633ad1ad79b3fa914ad8b
Arguments:
--api-key ei_fe0d2a6242ff141d8415a379eb84b81372f19201d1d2600213a483dbb615de78 --run-http-server 1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson:c63dad7691efea92bb5633ad1ad79b3fa914ad8b \
--api-key ei_fe0d2a6242ff141d8415a379eb84b81372f19201d1d2600213a483dbb615de78 \
--run-http-server 1337
This automatically builds and downloads the latest version of your model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
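The --runtime=nvidia and --gpus all flags rely on the NVIDIA container runtime that ships with JetPack. A quick sanity check that Docker has the runtime registered (output formatting varies across Docker versions):

# The output should include an entry named nvidia
docker info | grep -i runtimes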
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 5.1.x)
To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 5.1.x), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:1cd209de86f9b7201a29cb4d81ebffce3aa996c1
Arguments:
--api-key ei_fe0d2a6242ff141d8415a379eb84b81372f19201d1d2600213a483dbb615de78 --run-http-server 1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:1cd209de86f9b7201a29cb4d81ebffce3aa996c1 \
--api-key ei_fe0d2a6242ff141d8415a379eb84b81372f19201d1d2600213a483dbb615de78 \
--run-http-server 1337
This automatically builds and downloads the latest version of your model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
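For a long-running deployment you may prefer to run the container in the background rather than interactively. A sketch using standard Docker flags, with the same image and arguments as above:

# Run detached and restart automatically unless explicitly stopped
docker run -d --restart unless-stopped --runtime=nvidia --gpus all \
  -p 1337:1337 \
  public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:1cd209de86f9b7201a29cb4d81ebffce3aa996c1 \
  --api-key ei_fe0d2a6242ff141d8415a379eb84b81372f19201d1d2600213a483dbb615de78 \
  --run-http-server 1337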
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 6.0)
To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 6.0), use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin-6-0:72efda913bd4c082dbf8ac7744deeb6c21c74bf3
Arguments:
--api-key ei_fe0d2a6242ff141d8415a379eb84b81372f19201d1d2600213a483dbb615de78 --run-http-server 1337
For example, in a one-liner locally:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin-6-0:72efda913bd4c082dbf8ac7744deeb6c21c74bf3 \
--api-key ei_fe0d2a6242ff141d8415a379eb84b81372f19201d1d2600213a483dbb615de78 \
--run-http-server 1337
This automatically builds and downloads the latest version of your model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
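Because the container must match the JetPack release on the device, it is worth confirming the installed version before pulling an image. One common way on Jetson, assuming the standard L4T release file is present:

# Print the L4T release; L4T r32.7 maps to JetPack 4.6.x,
# r35.x to JetPack 5.1.x, and r36.x to JetPack 6.x
cat /etc/nv_tegra_release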
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (Qualcomm Adreno 702 GPU)
To run your model as a container with an HTTP interface on Qualcomm Adreno 702 GPUs, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-qc-adreno-702:a5dded22981614cfb474b4062b2a5a7552d7083f
Arguments:
--api-key ei_fe0d2a6242ff141d8415a379eb84b81372f19201d1d2600213a483dbb615de78 --run-http-server 1337
For example, in a one-liner locally:
docker run --rm -it --device /dev/dri \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-qc-adreno-702:a5dded22981614cfb474b4062b2a5a7552d7083f \
--api-key ei_fe0d2a6242ff141d8415a379eb84b81372f19201d1d2600213a483dbb615de78 \
--run-http-server 1337
This automatically builds and downloads the latest version of your model (including hardware optimizations) and serves an HTTP endpoint at http://localhost:1337 with usage instructions.
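The --device /dev/dri flag maps the GPU's Direct Rendering Infrastructure device nodes into the container. You can confirm they exist on the host before starting:

# List the DRI device nodes that the container will use
ls -l /dev/dri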
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.