Deploy to any Linux-based development board
Edge Impulse for Linux lets you run your models on any Linux-based development board, with SDKs for Node.js, Python, Go, and C++ so you can quickly integrate your models into your application (see the Python sketch after the steps below).
- Install the Edge Impulse Linux CLI
- Run edge-impulse-linux-runner (run with --clean to switch projects)
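As an illustration of the SDK route, here is a minimal Python sketch. It assumes you have installed the SDK (pip3 install edge_impulse_linux) and downloaded a .eim model file for your board; modelfile.eim is a placeholder path, and the calls follow the edge_impulse_linux package's ImpulseRunner interface, so check its README if the signatures have changed.

# Minimal sketch: classify one window of raw features with the
# Edge Impulse Linux Python SDK. 'modelfile.eim' is a placeholder path.
from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner('modelfile.eim')
try:
    # init() spawns the model process and returns project/model metadata
    model_info = runner.init()
    print('Loaded model for project:', model_info['project']['name'])

    # Fill with one full window of raw sensor values for your impulse
    features = []
    result = runner.classify(features)
    print(result['result'])
finally:
    runner.stop()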
Run your model as a Docker container
To run your model as a container with an HTTP interface, use:
Container:
public.ecr.aws/g7a8t7v6/inference-container:5e9f2d2f9ec87511e4db4b4239631b107cea52d7
Arguments:
--api-key ei_1517050bdee93f466354b9e976ca3dec9db97c8e771f1fd403415a088836da7d --run-http-server 1337
For example, to run it locally as a one-liner (this generic container runs on the CPU, so the NVIDIA runtime flags used for the Jetson containers below are not needed):
docker run --rm -it \
    -p 1337:1337 \
    public.ecr.aws/g7a8t7v6/inference-container:5e9f2d2f9ec87511e4db4b4239631b107cea52d7 \
    --api-key ei_1517050bdee93f466354b9e976ca3dec9db97c8e771f1fd403415a088836da7d \
    --run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations), and serves an HTTP endpoint at http://localhost:1337 with usage instructions. Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
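To consume the endpoint programmatically, a minimal Python client could look like the sketch below. The /api/features route and the {"features": [...]} payload are assumptions based on typical raw-feature inference servers; the instructions page the container serves at http://localhost:1337 lists the actual routes, so adjust accordingly.

# Minimal sketch: POST raw features to the inference container.
# The route and payload shape are assumptions; see http://localhost:1337
# for the API the container actually exposes.
import json
import urllib.request

features = []  # one window of raw sensor values for your impulse

req = urllib.request.Request(
    'http://localhost:1337/api/features',  # assumed route
    data=json.dumps({'features': features}).encode('utf-8'),
    headers={'Content-Type': 'application/json'},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))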
Run your model as a Docker container (NVIDIA Jetson, JetPack 4.6.4)
To run your model as a container with an HTTP interface on the GPU of an NVIDIA Jetson (JetPack 4.6.4), use:
Container:
public.ecr.aws/g7a8t7v6/inference-container-jetson:66b4bf314fdde7049cfb086927a45cfb3c30fb5f
Arguments:
--api-key ei_1517050bdee93f466354b9e976ca3dec9db97c8e771f1fd403415a088836da7d --run-http-server 1337
For example, to run it locally as a one-liner:
docker run --rm -it --runtime=nvidia --gpus all \
    -p 1337:1337 \
    public.ecr.aws/g7a8t7v6/inference-container-jetson:66b4bf314fdde7049cfb086927a45cfb3c30fb5f \
    --api-key ei_1517050bdee93f466354b9e976ca3dec9db97c8e771f1fd403415a088836da7d \
    --run-http-server 1337
This automatically builds and downloads the latest model with TensorRT support, and serves an HTTP endpoint at http://localhost:1337 with usage instructions. Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 5.1.2)
To run your model as a container with an HTTP interface on the GPU of an NVIDIA Jetson Orin (JetPack 5.1.2), use:
Container:
public.ecr.aws/g7a8t7v6/inference-container-jetson-orin:d41b526c3973e7f63d2392f17f92197bed2483e3
Arguments:
--api-key ei_1517050bdee93f466354b9e976ca3dec9db97c8e771f1fd403415a088836da7d --run-http-server 1337
For example, to run it locally as a one-liner:
docker run --rm -it --runtime=nvidia --gpus all \
    -p 1337:1337 \
    public.ecr.aws/g7a8t7v6/inference-container-jetson-orin:d41b526c3973e7f63d2392f17f92197bed2483e3 \
    --api-key ei_1517050bdee93f466354b9e976ca3dec9db97c8e771f1fd403415a088836da7d \
    --run-http-server 1337
This automatically builds and downloads the latest model with TensorRT support, and serves an HTTP endpoint at http://localhost:1337 with usage instructions. Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 6.0)
To run your model as a container with an HTTP interface on the GPU of an NVIDIA Jetson Orin (JetPack 6.0), use:
Container:
public.ecr.aws/g7a8t7v6/inference-container-jetson-orin-6-0:5b5bb4c0fa532df92e5aa1df75a687cd198a1575
Arguments:
--api-key ei_1517050bdee93f466354b9e976ca3dec9db97c8e771f1fd403415a088836da7d --run-http-server 1337
For example, to run it locally as a one-liner:
docker run --rm -it --runtime=nvidia --gpus all \
    -p 1337:1337 \
    public.ecr.aws/g7a8t7v6/inference-container-jetson-orin-6-0:5b5bb4c0fa532df92e5aa1df75a687cd198a1575 \
    --api-key ei_1517050bdee93f466354b9e976ca3dec9db97c8e771f1fd403415a088836da7d \
    --run-http-server 1337
This automatically builds and downloads the latest model with TensorRT support, and serves an HTTP endpoint at http://localhost:1337 with usage instructions. Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.