The MediaPipe Object Detector task lets you detect the presence and location of multiple classes of objects within images or videos. For example, an object detector can locate dogs in an image. This task operates on image data with a machine learning (ML) model, accepting static data or a continuous video stream as input and outputting a list of detection results. Each detection result represents an object that appears within the image or video.

Attention: This MediaPipe Solutions Preview is an early release.

Start using this task by following one of these implementation guides for the platform you are working on. These platform-specific guides walk you through a basic implementation of this task, including a recommended model, and a code example with recommended configuration options.

This section describes the capabilities, inputs, and outputs of this task:

- Input image processing - Processing includes image rotation, resizing, normalization, and color space conversion.
- Label map locale - Set the language used for display names.
- Score threshold - Filter results based on prediction scores.
- Top-k detection - Filter the number of detection results.
- Label allowlist and denylist - Specify the categories detected.

The Object Detector API accepts still images, decoded video frames, or a live video stream as input, and outputs a list of detection results for the detected objects.

This task has the following configuration options:

- running_mode: Sets the running mode for the task. IMAGE: the mode for detecting objects on single image inputs. VIDEO: the mode for detecting objects on the decoded frames of a video. LIVE_STREAM: the mode for detecting objects on a live stream of input data, such as from a camera. In this mode, resultListener must be called to set up a listener to receive the detection results asynchronously.
- display_names_locale: Sets the language of labels to use for display names provided in the metadata of the task's model, if available. You can add localized labels to the metadata of a custom model using the TensorFlow Lite Metadata Writer API.
- max_results: Sets the optional maximum number of top-scored detection results to return.
- score_threshold: Sets the prediction score threshold that overrides the one provided in the model metadata (if any).
- category_allow_list: Sets the optional list of allowed category names. If non-empty, detection results whose category name is not in this set will be filtered out. Duplicate or unknown category names are ignored. This option is mutually exclusive with category_deny_list, and using both results in an error.
- category_deny_list: Sets the optional list of category names that are not allowed. If non-empty, detection results whose category name is in this set will be filtered out. Duplicate or unknown category names are ignored. This option is mutually exclusive with category_allow_list, and using both results in an error.

The Object Detector API requires an object detection model to be downloaded and stored in your project directory. If you do not already have a model, start with the default, recommended model. The other models presented in this section make trade-offs between latency and accuracy.

The EfficientDet-Lite0 model uses an EfficientNet-Lite0 backbone with a 320x320 input size. The model was trained with the COCO dataset, a large-scale object detection dataset that contains 1.5 million object instances and 80 object labels. EfficientDet-Lite0 is available as an int8, float16, or float32 model. This model is recommended because it strikes a balance between latency and accuracy. It is both accurate and lightweight enough for many use cases.

The EfficientDet-Lite2 model uses an EfficientNet-Lite2 backbone with a 448x448 input size. The model was trained with the COCO dataset, a large-scale object detection dataset that contains 1.5 million object instances and 80 object labels. EfficientDet-Lite2 is available as an int8, float16, or float32 model. This model is generally more accurate than EfficientDet-Lite0, but is also slower and more memory intensive.

The SSD MobileNetV2 model uses a MobileNetV2 backbone with a 256x256 input size and SSD feature network. This model is appropriate for use cases where accuracy is a greater priority than speed and size.
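To make the result-filtering options concrete, here is a plain-Python sketch of how score_threshold, the category allowlist/denylist, and max_results compose, as described above. This is an illustrative model of the documented semantics, not MediaPipe code; the function name and the (category, score) pair representation are assumptions for the example.

```python
def filter_detections(detections, score_threshold=0.0,
                      allowlist=None, denylist=None, max_results=None):
    """Illustrative sketch of the described filtering semantics.

    `detections` is a list of (category_name, score) pairs.
    """
    # The two category lists are mutually exclusive; using both is an error.
    if allowlist and denylist:
        raise ValueError(
            "category_allow_list and category_deny_list are mutually exclusive")

    kept = []
    for name, score in detections:
        if score < score_threshold:
            continue  # score_threshold: reject low-confidence results
        if allowlist and name not in allowlist:
            continue  # allowlist: keep only listed category names
        if denylist and name in denylist:
            continue  # denylist: drop listed category names
        kept.append((name, score))

    # max_results: keep only the top-scored detection results
    kept.sort(key=lambda d: d[1], reverse=True)
    return kept if max_results is None else kept[:max_results]


detections = [("dog", 0.85), ("cat", 0.40), ("person", 0.92), ("dog", 0.15)]
print(filter_detections(detections, score_threshold=0.5, allowlist={"dog", "cat"}))
# [('dog', 0.85)]
```

In the usage line, "cat" and the low-scoring "dog" fall below the 0.5 threshold and "person" is outside the allowlist, so only the high-scoring "dog" detection survives.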