Deploy to any Linux-based development board
Edge Impulse for Linux lets you run your models on any Linux-based development board,
with SDKs for Node.js, Python, Go, and C++ to integrate your models quickly into
your application (see the Python sketch after the steps below).
- Install the Edge Impulse Linux CLI
- Run edge-impulse-linux-runner (run with --clean to switch projects)
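If you prefer to integrate directly rather than use the runner's interface, the Python SDK (pip3 install edge_impulse_linux) can load the .eim model file the runner downloads. A minimal sketch, assuming the model was saved as model.eim and that your impulse takes a flat list of raw features:

from edge_impulse_linux.runner import ImpulseRunner

# 'model.eim' is an assumed path; point this at your downloaded model file
runner = ImpulseRunner('model.eim')
try:
    model_info = runner.init()
    print('Loaded', model_info['project']['name'])
    features = [0.0] * 99  # placeholder: must match your impulse's input frame size
    result = runner.classify(features)
    print(result['result'])
finally:
    runner.stop()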
Run your model as a Docker container
To run your model as a container with an HTTP interface, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container:5c5c79383b886a12e76fde6ef41c0c661f18b780
Arguments:
--api-key ei_940a28a3ef5887da7c5fdad593f6cc1c54e1711a2a1e6edebc0e832467bfa100 --run-http-server 1337
For example, to run it locally as a one-liner:
docker run --rm -it \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container:5c5c79383b886a12e76fde6ef41c0c661f18b780 \
--api-key ei_940a28a3ef5887da7c5fdad593f6cc1c54e1711a2a1e6edebc0e832467bfa100 \
--run-http-server 1337
This automatically builds and downloads the latest model (including hardware optimizations) and serves an HTTP endpoint at
http://localhost:1337 with usage instructions.
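Once the server is up you can send raw features for inference over HTTP. A minimal sketch in Python; the exact route is listed on the instructions page served at http://localhost:1337, and /api/features is an assumption here:

import requests

# Assumption: the server accepts raw features as JSON on /api/features;
# check the instructions page at http://localhost:1337 for the exact route.
features = [0.0] * 99  # placeholder: must match your impulse's input frame size
resp = requests.post('http://localhost:1337/api/features',
                     json={'features': features}, timeout=10)
resp.raise_for_status()
print(resp.json())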
Read the docs for more information,
such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson)
To run your model as a container with an HTTP interface on NVIDIA Jetson GPUs, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson:a167ece6876eb65c76daa8a2f776d842d332c97b
Arguments:
--api-key ei_940a28a3ef5887da7c5fdad593f6cc1c54e1711a2a1e6edebc0e832467bfa100 --run-http-server 1337
For example, to run it locally as a one-liner:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson:a167ece6876eb65c76daa8a2f776d842d332c97b \
--api-key ei_940a28a3ef5887da7c5fdad593f6cc1c54e1711a2a1e6edebc0e832467bfa100 \
--run-http-server 1337
This automatically builds and downloads the latest model with TensorRT support and serves an HTTP endpoint at
http://localhost:1337 with usage instructions.
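For vision models (a common workload on Jetson), you can post an image instead of raw features. A sketch assuming a multipart upload route; the instructions page at http://localhost:1337 lists the exact path and expected form field:

import requests

# Assumption: an image upload route accepting multipart form data;
# verify the route and field name on the instructions page.
with open('test.jpg', 'rb') as f:  # test.jpg is a hypothetical sample image
    resp = requests.post('http://localhost:1337/api/image',
                         files={'file': f}, timeout=30)
resp.raise_for_status()
print(resp.json())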
Read the docs for more information,
such as bundling your model inside the container and selecting extra hardware optimizations.
Run your model as a Docker container (NVIDIA Jetson Orin)
To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs, use:
Container:
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:0b5e08cc76c9fd6f10ad2e1ce587ff9bb7390aa5
Arguments:
--api-key ei_940a28a3ef5887da7c5fdad593f6cc1c54e1711a2a1e6edebc0e832467bfa100 --run-http-server 1337
For example, to run it locally as a one-liner:
docker run --rm -it --runtime=nvidia --gpus all \
-p 1337:1337 \
public.ecr.aws/z9b3d4t5/inference-container-jetson-orin:0b5e08cc76c9fd6f10ad2e1ce587ff9bb7390aa5 \
--api-key ei_940a28a3ef5887da7c5fdad593f6cc1c54e1711a2a1e6edebc0e832467bfa100 \
--run-http-server 1337
This automatically builds and downloads the latest model with TensorRT support and serves an HTTP endpoint at
http://localhost:1337 with usage instructions.
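Because the container downloads and builds the model on first start, the endpoint can take a while to come up. A small poll loop, assuming only what the text above states: that the root URL serves an instructions page once the server is ready:

import time
import requests

url = 'http://localhost:1337'
for _ in range(60):  # wait up to ~5 minutes
    try:
        requests.get(url, timeout=2)
        print('Inference server is up at', url)
        break
    except requests.RequestException:
        time.sleep(5)
else:
    raise SystemExit('Server did not come up in time')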
Read the docs for more information,
such as bundling your model inside the container and selecting extra hardware optimizations.