Deploy to any Linux-based development board

Edge Impulse for Linux lets you run your models on any Linux-based development board, with SDKs for Node.js, Python, Go and C++ to integrate your models quickly into your application.
                                            
- Install the Edge Impulse Linux CLI
- Run edge-impulse-linux-runner (run with --clean to switch projects), as shown in the example below
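For example, on a typical board (a minimal sketch: the install command follows the Edge Impulse Linux CLI documentation, assumes Node.js and npm are already present, and may need sudo or extra dependencies on some boards):

    npm install -g edge-impulse-linux     # install the Edge Impulse Linux CLI
    edge-impulse-linux-runner             # log in, download your model and start inferencing
    edge-impulse-linux-runner --clean     # switch to a different project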
 
                                        
See the CLI documentation for more information and setup instructions.
Alternatively, you can download your model for your target below.
                                        
 
                                 
                                
Run your model as a Docker container

To run your model as a container with an HTTP interface, use:

Container:
    public.ecr.aws/g7a8t7v6/inference-container:v1.77.1

Arguments:
    --api-key ei_4bdc8c770ffabf9ee15b6be8728594938d177e78d02466c349584c9f16d17450 --run-http-server 1337
                                    
                                    
For example, in a one-liner locally:

    docker run --rm -it \
        -p 1337:1337 \
        public.ecr.aws/g7a8t7v6/inference-container:v1.77.1 \
        --api-key ei_4bdc8c770ffabf9ee15b6be8728594938d177e78d02466c349584c9f16d17450 \
        --run-http-server 1337
                                     
                                    
This automatically builds and downloads the latest model (including hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
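Once the container is running, you can exercise the endpoint from the same machine. The root page serves the usage instructions; the classification route and payload below are only an illustrative assumption, so verify the exact API on that page:

    # Confirm the server is up (the root page shows the usage instructions):
    curl http://localhost:1337/

    # Illustrative classification request (the /api/features route and the payload
    # shape are assumptions; check http://localhost:1337 for the exact API):
    curl -X POST http://localhost:1337/api/features \
        -H "Content-Type: application/json" \
        -d '{"features": [0, 0, 0]}'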
                                    
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
                                    
 
                                 
                                
Run your model as a Docker container (NVIDIA Jetson, JetPack 4.6.x)

To run your model as a container with an HTTP interface on NVIDIA Jetson GPUs (JetPack 4.6.x), use:

Container:
    public.ecr.aws/g7a8t7v6/inference-container-jetson:v1.77.1

Arguments:
    --api-key ei_4bdc8c770ffabf9ee15b6be8728594938d177e78d02466c349584c9f16d17450 --run-http-server 1337
                                     
                                    
                                    
For example, in a one-liner locally:

    docker run --rm -it --runtime=nvidia --gpus all \
        -p 1337:1337 \
        public.ecr.aws/g7a8t7v6/inference-container-jetson:v1.77.1 \
        --api-key ei_4bdc8c770ffabf9ee15b6be8728594938d177e78d02466c349584c9f16d17450 \
        --run-http-server 1337
                                     
                                    
This automatically builds and downloads the latest model (including hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
                                    
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
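If you are unsure which JetPack release your board runs, checking the L4T release string is a quick way to pick the matching container from the sections on this page (a sketch; roughly, L4T R32.x corresponds to JetPack 4.6.x, R35.x to JetPack 5.1.x, and R36.x to JetPack 6.0):

    # Print the L4T release the board is running, e.g. "# R32 (release), REVISION: 7.1":
    cat /etc/nv_tegra_release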
                                    
 
                                 
                                
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 5.1.x)

To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 5.1.x), use:

Container:
    public.ecr.aws/g7a8t7v6/inference-container-jetson-orin:v1.77.1

Arguments:
    --api-key ei_4bdc8c770ffabf9ee15b6be8728594938d177e78d02466c349584c9f16d17450 --run-http-server 1337
                                     
                                    
                                    
For example, in a one-liner locally:

    docker run --rm -it --runtime=nvidia --gpus all \
        -p 1337:1337 \
        public.ecr.aws/g7a8t7v6/inference-container-jetson-orin:v1.77.1 \
        --api-key ei_4bdc8c770ffabf9ee15b6be8728594938d177e78d02466c349584c9f16d17450 \
        --run-http-server 1337
                                     
                                    
This automatically builds and downloads the latest model (including hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
                                    
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
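The --runtime=nvidia flag requires the NVIDIA container runtime to be registered with Docker (it typically ships with JetPack). A quick check before launching the container:

    # The runtimes list should include "nvidia"; if it does not, install or enable
    # the NVIDIA container runtime first (see NVIDIA's container toolkit docs):
    docker info | grep -i runtimes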
                                    
 
                                 
                                
Run your model as a Docker container (NVIDIA Jetson Orin, JetPack 6.0)

To run your model as a container with an HTTP interface on NVIDIA Jetson Orin GPUs (JetPack 6.0), use:

Container:
    public.ecr.aws/g7a8t7v6/inference-container-jetson-orin-6-0:v1.77.1

Arguments:
    --api-key ei_4bdc8c770ffabf9ee15b6be8728594938d177e78d02466c349584c9f16d17450 --run-http-server 1337
                                     
                                    
                                    
For example, in a one-liner locally:

    docker run --rm -it --runtime=nvidia --gpus all \
        -p 1337:1337 \
        public.ecr.aws/g7a8t7v6/inference-container-jetson-orin-6-0:v1.77.1 \
        --api-key ei_4bdc8c770ffabf9ee15b6be8728594938d177e78d02466c349584c9f16d17450 \
        --run-http-server 1337
                                     
                                    
This automatically builds and downloads the latest model (including hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
                                    
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
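For a longer-running deployment you may prefer to start the same container detached, so it keeps serving after you log out and restarts with the Docker daemon; a sketch using standard Docker flags (the ei-inference container name is just an example):

    docker run -d --restart unless-stopped --runtime=nvidia --gpus all \
        -p 1337:1337 \
        --name ei-inference \
        public.ecr.aws/g7a8t7v6/inference-container-jetson-orin-6-0:v1.77.1 \
        --api-key ei_4bdc8c770ffabf9ee15b6be8728594938d177e78d02466c349584c9f16d17450 \
        --run-http-server 1337

    # Follow the logs, or stop the service later:
    docker logs -f ei-inference
    docker stop ei-inference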
                                    
 
                                 
                                
Run your model as a Docker container (Qualcomm Adreno 702 GPU)

To run your model as a container with an HTTP interface on Qualcomm Adreno 702 GPUs, use:

Container:
    public.ecr.aws/g7a8t7v6/inference-container-qc-adreno-702:v1.77.1

Arguments:
    --api-key ei_4bdc8c770ffabf9ee15b6be8728594938d177e78d02466c349584c9f16d17450 --run-http-server 1337
                                     
                                    
                                    
For example, in a one-liner locally:

    docker run --rm -it --device /dev/dri \
        -p 1337:1337 \
        public.ecr.aws/g7a8t7v6/inference-container-qc-adreno-702:v1.77.1 \
        --api-key ei_4bdc8c770ffabf9ee15b6be8728594938d177e78d02466c349584c9f16d17450 \
        --run-http-server 1337
                                     
                                    
This automatically builds and downloads the latest model (including hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
                                    
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
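The --device /dev/dri flag passes the GPU's DRM render nodes into the container, so it is worth confirming they exist on the host first:

    # The Adreno GPU should show up as one or more render nodes, e.g. renderD128:
    ls -l /dev/dri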