Model | Author
---|---
Akida FOMO (Faster Objects, More Objects) AkidaNet 0.50 | Edge Impulse
YOLOv5 (added by Solomon Githu) | Edge Impulse Experts
YOLOx | Edge Impulse Experts
YOLOv5 for Renesas DRP-AI | Renesas
YOLOv5 | Community blocks
YOLOX for TI TDA4VM | Texas Instruments
NVIDIA TAO RetinaNet | Edge Impulse Inc.
NVIDIA TAO YOLOV3 | Edge Impulse Inc.
NVIDIA TAO YOLOV4 | Edge Impulse Inc.
NVIDIA TAO SSD | Edge Impulse Inc.
Model | Description | Author
---|---|---
EfficientNet V2B0 | Uses around 755K RAM based on your input size, and between 240-1675K ROM depending on the number of layers, with default compiler settings. Supports both RGB and grayscale. | Edge Impulse
MobileNetV2 0.35 | Uses around 636K RAM based on your input size, and between 69-134K ROM depending on the number of layers, with default compiler settings. Supports both RGB and grayscale. | Edge Impulse
MobileNetV2 0.50 | Uses around 643K RAM based on your input size, and between 78-168K ROM depending on the number of layers, with default compiler settings. Supports both RGB and grayscale. | Edge Impulse
MobileNetV2 0.75 | Uses around 1263K RAM based on your input size, and between 109-266K ROM depending on the number of layers, with default compiler settings. Supports both RGB and grayscale. | Edge Impulse
MobileNetV2 1.0 | Uses around 1273K RAM based on your input size, and between 126-372K ROM depending on the number of layers, with default compiler settings. Supports both RGB and grayscale. | Edge Impulse
MobileNetV2 0.1 | Uses around 622K RAM based on your input size, and 58K ROM, with default compiler settings. Supports both RGB and grayscale. | Edge Impulse
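As a rough sketch of how to use the figures above, the snippet below filters the table by a device's RAM/ROM budget. The RAM and ROM numbers are copied from the table (default compiler settings; worst-case ROM is used), and `models_that_fit` is a hypothetical helper for illustration, not an Edge Impulse API:

```python
# Approximate footprints from the table above, as (name, peak RAM KB, (ROM min, ROM max) KB).
# Actual RAM varies with input size; ROM varies with the number of layers.
MODELS = [
    ("EfficientNet V2B0", 755, (240, 1675)),
    ("MobileNetV2 0.35", 636, (69, 134)),
    ("MobileNetV2 0.50", 643, (78, 168)),
    ("MobileNetV2 0.75", 1263, (109, 266)),
    ("MobileNetV2 1.0", 1273, (126, 372)),
    ("MobileNetV2 0.1", 622, (58, 58)),
]

def models_that_fit(ram_budget_kb, rom_budget_kb):
    """Return model names whose approximate RAM and worst-case ROM
    fit within the given budgets (hypothetical helper)."""
    return [name for name, ram, (_, rom_max) in MODELS
            if ram <= ram_budget_kb and rom_max <= rom_budget_kb]
```

For example, a target with roughly 700K RAM and 200K ROM free would be limited to the MobileNetV2 0.35, 0.50, and 0.1 variants under these assumptions.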
Model | Description | Author | Recommended
---|---|---|---
Akida FOMO (Faster Objects, More Objects) AkidaNet 0.50 | An object detection model based on AkidaNet (alpha=0.5, @224x224x3) designed to coarsely segment an image into a grid of background vs objects of interest. More info at: https://doc.brainchipinc.com/user_guide/akida_models.html#akidanet-training | Edge Impulse |
YOLOv5 (added by Solomon Githu) | Object detection using YOLOv5. | Edge Impulse |
YOLOx | YOLOx fork for TI TDA4VM. | Edge Impulse |
YOLOv5 for Renesas DRP-AI | Transfer learning model using the YOLOv5 v5 branch with yolov5s.pt weights. This block is only compatible with Renesas DRP-AI. | Edge Impulse |
YOLOv5 | Transfer learning model based on Ultralytics YOLOv5 using yolov5n.pt weights. Supports RGB input at any resolution (square images only). | Edge Impulse |
YOLOX for TI TDA4VM | TI's EDGEAI YOLOX (https://github.com/TexasInstruments/edgeai-yolox). Outputs ONNX v7 model format both with and without final detect layers using PyTorch 1.7.1. See the implementation at https://github.com/edgeimpulse/example-custom-ml-block-ti-yolox/tree/onnx-v7 | Edge Impulse |
NVIDIA TAO RetinaNet | Object detection model with superior performance on smaller objects. Configurable backbones optimized for targets from MCU to GPU. Supports rectangular input. Image width and height must be multiples of 32. Training requires GPU. | Edge Impulse |
NVIDIA TAO YOLOV3 | Object detection model that is fast and accurate. Configurable backbones optimized for targets from MCU to GPU. Supports rectangular input. Image width and height must be multiples of 32. Training requires GPU. | Edge Impulse |
NVIDIA TAO YOLOV4 | Object detection model that is fast and accurate. Configurable backbones optimized for targets from MCU to GPU. Supports rectangular input. Image width and height must be multiples of 32. Training requires GPU. | Edge Impulse |
NVIDIA TAO SSD | Object detection model for general purpose use. Configurable backbones optimized for targets from MCU to GPU. Supports rectangular input. Image width and height must be multiples of 32. Training requires GPU. | Edge Impulse |
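The NVIDIA TAO models above accept rectangular input but require both image dimensions to be multiples of 32. A minimal sketch of checking and adjusting dimensions before training (the function names are hypothetical helpers, not part of any TAO or Edge Impulse API):

```python
def is_valid_tao_input(width, height):
    """TAO object detection models accept rectangular input, but both
    width and height must be multiples of 32."""
    return width % 32 == 0 and height % 32 == 0

def round_down_to_32(dim):
    """Round a dimension down to the nearest multiple of 32,
    e.g. 300 -> 288 (hypothetical helper)."""
    return (dim // 32) * 32
```

So a 320x192 input is accepted as-is, while a 300x200 image would first need its dimensions adjusted (e.g. to 288x192).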