Automotive Objects Detection

$1,000.00

YoloV5 and the 10 most common “who” and “what” on the road
Save on development costs with a ready-made, high-end automotive object detection pipeline that delivers well-structured data from road cameras.

With AOD, you can quickly collect data about the 10 most common classes of automotive objects on all types of roads from traffic video streams and recordings: car, truck, motor, bike, bus, rider, train, pedestrian, traffic sign, traffic light.

AOD is a fast, accurate, and powerful DL tool that detects the most common classes of road users. It gives you full awareness of who is on the road:
● Cars
● Trucks
● Buses
● Riders
● Trains
● Pedestrians
● Motorbikes
● Bicycles
AOD also detects traffic code regulatory items such as traffic signs and traffic lights. This data allows you to further analyze traffic behavior and compliance.

AOD is a valuable tool for individuals, businesses, and government entities who digitalize road experience. You can use AOD for:

  1. ADASs (Advanced Driver-Assistance Systems)
  2. Vehicle surveillance with traffic cameras
  3. Asset tracking
  4. Smart houses with smart parking
  5. Smart city platforms
  6. Traffic video analytics and control
  7. Traffic collisions inspection
  8. Detection of life-threatening road areas
  9. Detection of traffic congestion
  10. Detection of bicyclist stickiness, and more.

In ADASs (Advanced Driver-Assistance Systems), you can use AOD for computer-vision-based hypothesis validation. AOD can be a considerable contribution to ADAS safety-critical software.
Automotive Object Detection can help you cut total ADAS development expenses, pass audits, and comply with the regulations for your system faster.

Hello World for Linux/MacOS

====================================
YoloV5-based automotive object detection
====================================

How to install

1) Download the project / library

2) Install the library: `pip install *.whl` OR `python setup.py install`

How to use

  • CLI
  • Python interface
```python
import cv2
from tnm_automotive_object_detection import Model

image = cv2.imread('tests/test.jpg')
model = Model()

ret = model.process_sample(image)
print(ret)

# draw a bounding box for each detection
for r in ret:
    image = cv2.rectangle(image, (int(r['xmin']), int(r['ymin'])),
                          (int(r['xmax']), int(r['ymax'])), (255, 0, 0), 3)
cv2.imshow('image', image)

key = cv2.waitKey(5000)  # pause for 5 seconds before fetching the next image
```
  • Use ONNX model
#### Preprocessing

```python
import cv2
import numpy as np

# input_shape comes from the ONNX session (see Inference below):
# input_shape = session.get_inputs()[0].shape  -> [N, C, H, W]
img = cv2.resize(sample, (input_shape[3], input_shape[2]))  # cv2 expects (width, height)
img = np.transpose(img, axes=[2, 0, 1])                     # HWC -> CHW
img = img.astype('float32')
img /= 255.0                                                # 0..255 -> 0.0..1.0
```
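The normalization part of this step can be exercised on its own. The sketch below (names illustrative; the `cv2.resize` call is omitted, and the input is assumed to already match the model's expected height and width) shows the HWC→NCHW and 0–255→0.0–1.0 conversion the snippet performs:

```python
import numpy as np

def preprocess(sample, input_hw):
    """HWC uint8 image -> NCHW float32 batch in [0, 1].

    `input_hw` would normally come from the ONNX session,
    e.g. session.get_inputs()[0].shape[2:].
    """
    assert sample.shape[:2] == tuple(input_hw)    # resizing skipped in this sketch
    img = np.transpose(sample, axes=[2, 0, 1])    # HWC -> CHW
    img = img.astype('float32') / 255.0           # 0..255 -> 0.0..1.0
    return img[None]                              # add batch dim -> NCHW

x = preprocess(np.full((192, 320, 3), 255, dtype=np.uint8), (192, 320))
```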

#### Inference

```python
import onnxruntime

model_path = "<path to model.onnx>"
batch = img[None]  # shape [batch_size, channels, height, width]

session = onnxruntime.InferenceSession(str(model_path), None)
input_shape = session.get_inputs()[0].shape

pred = session.run(
    [session.get_outputs()[0].name],
    {session.get_inputs()[0].name: batch},
)
```

#### Postprocessing

**Model output format**:

**shape**: `(batch_size=1, anchor_number=25200, features_num)`

`features_num = 4 (xywh) + 1 (object presence probability) + number_of_classes (10) = 15`

Features:

* `Xc` (float, 0..image_width) – X coordinate of bbox center
* `Yc` (float, 0..image_height) – Y coordinate of bbox center
* `W` (float, 0..image_width) – bbox width
* `H` (float, 0..image_height) – bbox height
* `P` (float, 0..1) – object presence probability (index 4)
* `P[5:]` (float, 0..1) – per-class probabilities (indices 5–14, one per class)
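To turn this raw output into usable detections, each anchor row is filtered by the object presence probability, its class is chosen by argmax over the per-class probabilities, and the xywh center format is converted to corner coordinates (a real pipeline would also apply non-maximum suppression, omitted here). A minimal, illustrative decoder under those assumptions:

```python
import numpy as np

def decode(pred, conf_thres=0.25):
    """Filter raw output of shape (1, 25200, 15) into detections.

    Each row: [Xc, Yc, W, H, P_obj, p_class_0 .. p_class_9].
    Returns a list of dicts with corner coordinates, confidence, class id.
    NMS is intentionally omitted from this sketch.
    """
    rows = pred[0]                        # drop batch dim -> (25200, 15)
    keep = rows[rows[:, 4] > conf_thres]  # objectness filter
    out = []
    for xc, yc, w, h, p_obj, *cls in keep:
        cls = np.asarray(cls)
        cid = int(cls.argmax())           # most probable class
        out.append({
            "xmin": xc - w / 2, "ymin": yc - h / 2,
            "xmax": xc + w / 2, "ymax": yc + h / 2,
            "confidence": float(p_obj * cls[cid]),  # objectness * class prob
            "class_id": cid,
        })
    return out

# toy check: a single confident anchor among zeros
dummy = np.zeros((1, 25200, 15), dtype=np.float32)
dummy[0, 0] = [100, 60, 40, 20, 0.9] + [0] * 10
dummy[0, 0, 5 + 2] = 0.8                  # class index 2
dets = decode(dummy)
```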

Features and Quality Criteria

  • Object detection

  • 10 classes (car, pedestrian, bus, train, traffic light…) – check `labels.txt` for details

  • CPU/GPU

  • Performance: 59 ms per frame (192×320)

  • Accuracy: mAP@0.5 ≈ 0.5

1 review

5.00 overall rating

1 out of 1 (100%)
customers recommended this product


Eye Drive Inc.
November 2, 2021

This NN identifies good vehicles and pedestrians
