Configure your deployment
You can deploy your impulse to any device. This lets the model run without an internet connection, minimizes latency, and keeps power consumption low. Read more in the documentation.
                    
Deploy to any Linux-based development board
Edge Impulse for Linux lets you run your models on any Linux-based development board, with SDKs for Node.js, Python, Go, and C++ to quickly integrate your models into your application.

- Install the Edge Impulse Linux CLI
- Run edge-impulse-linux-runner (run with --clean to switch projects)

See the CLI documentation for more information and setup instructions. Alternatively, you can download your model below.
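If you would rather call the model from your own code instead of the runner, the Linux Python SDK exposes an ImpulseRunner class. The sketch below is a minimal example, assuming the edge_impulse_linux package is installed (pip3 install edge_impulse_linux) and that you have fetched a model file with edge-impulse-linux-runner --download modelfile.eim; the model path and feature values are placeholders.

# Minimal sketch: run a downloaded .eim model with the Edge Impulse Linux
# Python SDK. Assumes `pip3 install edge_impulse_linux` and a model file
# fetched with `edge-impulse-linux-runner --download modelfile.eim`.
from edge_impulse_linux.runner import ImpulseRunner

MODEL_PATH = 'modelfile.eim'    # placeholder path to your downloaded model

runner = ImpulseRunner(MODEL_PATH)
try:
    model_info = runner.init()  # starts the model process and returns metadata
    print('Loaded', model_info['project']['name'])

    # One window of raw feature values (placeholder zeros); the required
    # length depends on your impulse's input block.
    features = [0.0] * model_info['model_parameters']['input_features_count']

    result = runner.classify(features)
    print(result['result'])     # e.g. classification probabilities per label
finally:
    runner.stop()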
                                    Run your model as a Docker container
                                        To run your model as a container with an HTTP interface, use:
                                    
                                    Container:
 public.ecr.aws/g7a8t7v6/inference-container:v1.78.0
                                    Arguments:
 --api-key ei_76743206b0ae083822433bc44421723a52d60183574245549703f180f82c678c --run-http-server 1337
                                    Ports to expose:
 1337
                                    
                                        For example, in a one-liner locally:
                                    
                                    docker run --rm -it \
    -p 1337:1337 \
    public.ecr.aws/g7a8t7v6/inference-container:v1.78.0 \
        --api-key ei_76743206b0ae083822433bc44421723a52d60183574245549703f180f82c678c \
        --run-http-server 1337
                                    
                                        This automatically builds and downloads the latest model (incl. hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
                                    
                                    
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
                                    
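Once the container is running, any HTTP client can send data to the inference server. Below is a minimal Python sketch; it assumes the default port 1337 and a /api/features route for raw feature data, so check the instructions page served at http://localhost:1337 for the exact routes and expected input size for your impulse.

# Minimal sketch: post one window of raw features to the running inference
# container and print the result. Assumes the server listens on port 1337
# and accepts raw features at /api/features (verify the route and the
# required feature count on the instructions page at http://localhost:1337).
import requests

features = [0.0] * 33  # placeholder; length must match your impulse's input

resp = requests.post(
    'http://localhost:1337/api/features',
    json={'features': features},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # model output for this window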
                                Run your model as a Docker container
To run your model as a container with an HTTP interface on the GPU of an NVIDIA Jetson (JetPack 4.6.x), use:
                                    
                                    Container:
 public.ecr.aws/g7a8t7v6/inference-container-jetson:v1.78.0
                                    Arguments:
 --api-key ei_76743206b0ae083822433bc44421723a52d60183574245549703f180f82c678c --run-http-server 1337
                                    Ports to expose:
 1337
                                    
                                        For example, in a one-liner locally:
                                    
                                    docker run --rm -it --runtime=nvidia --gpus all \
    -p 1337:1337 \
    public.ecr.aws/g7a8t7v6/inference-container-jetson:v1.78.0 \
        --api-key ei_76743206b0ae083822433bc44421723a52d60183574245549703f180f82c678c \
        --run-http-server 1337
                                    
                                        This automatically builds and downloads the latest model (incl. hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
                                    
                                    
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
                                    
                                Run your model as a Docker container
To run your model as a container with an HTTP interface on the GPU of an NVIDIA Jetson Orin (JetPack 5.1.x), use:
                                    
                                    Container:
 public.ecr.aws/g7a8t7v6/inference-container-jetson-orin:v1.78.0
                                    Arguments:
 --api-key ei_76743206b0ae083822433bc44421723a52d60183574245549703f180f82c678c --run-http-server 1337
                                    Ports to expose:
 1337
                                    
                                        For example, in a one-liner locally:
                                    
                                    docker run --rm -it --runtime=nvidia --gpus all \
    -p 1337:1337 \
    public.ecr.aws/g7a8t7v6/inference-container-jetson-orin:v1.78.0 \
        --api-key ei_76743206b0ae083822433bc44421723a52d60183574245549703f180f82c678c \
        --run-http-server 1337
                                    
                                        This automatically builds and downloads the latest model (incl. hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
                                    
                                    
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
                                    
                                Run your model as a Docker container
To run your model as a container with an HTTP interface on the GPU of an NVIDIA Jetson Orin (JetPack 6.0), use:
                                    
                                    Container:
 public.ecr.aws/g7a8t7v6/inference-container-jetson-orin-6-0:v1.78.0
                                    Arguments:
 --api-key ei_76743206b0ae083822433bc44421723a52d60183574245549703f180f82c678c --run-http-server 1337
                                    Ports to expose:
 1337
                                    
                                        For example, in a one-liner locally:
                                    
                                    docker run --rm -it --runtime=nvidia --gpus all \
    -p 1337:1337 \
    public.ecr.aws/g7a8t7v6/inference-container-jetson-orin-6-0:v1.78.0 \
        --api-key ei_76743206b0ae083822433bc44421723a52d60183574245549703f180f82c678c \
        --run-http-server 1337
                                    
                                        This automatically builds and downloads the latest model (incl. hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
                                    
                                    
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
                                    
                                Run your model as a Docker container
To run your model as a container with an HTTP interface on a Qualcomm Adreno 702 GPU, use:
                                    
                                    Container:
 public.ecr.aws/g7a8t7v6/inference-container-qc-adreno-702:v1.78.0
                                    Arguments:
 --api-key ei_76743206b0ae083822433bc44421723a52d60183574245549703f180f82c678c --run-http-server 1337
                                    Ports to expose:
 1337
                                    
                                        For example, in a one-liner locally:
                                    
                                    docker run --rm -it --device /dev/dri \
    -p 1337:1337 \
    public.ecr.aws/g7a8t7v6/inference-container-qc-adreno-702:v1.78.0 \
        --api-key ei_76743206b0ae083822433bc44421723a52d60183574245549703f180f82c678c \
        --run-http-server 1337
                                    
                                        This automatically builds and downloads the latest model (incl. hardware optimizations), and runs an HTTP endpoint at http://localhost:1337 with instructions.
                                    
                                    
Read the docs for more information, such as bundling your model inside the container and selecting extra hardware optimizations.
                                    
                                
                                    