Google's Pedestrian Detection System: Lightening the Load for Driverless Cars

"I will take a look, I don't buy it" is a complete and unconventional statement, which can be translated into "a pointed goods but a thin wallet."


Take Google's round-headed little self-driving car: everyone would love to watch it cruising down the street in the sunshine. But the array of expensive sensors around its body keeps it from becoming a car for everyone. And although private ownership isn't the ideal application for Google's cars, cost is also a barrier to the vehicle-sharing economy and to widespread public services.

The technology that lets Google's driverless cars roam freely was originally a closely guarded trade secret. But recently, at the IEEE International Conference on Robotics and Automation (ICRA) in Seattle, attendees got a look at the latest pedestrian detection system Google has been working on. The good news is that the new approach not only improves performance but also reduces cost.


Replacing expensive sensors with cameras

We all know that identifying, tracking, and avoiding pedestrians is a core capability any company must master to build a driverless car. Google's driverless cars rely mainly on radar, lidar, and cameras to assess road conditions, ensuring the car can identify pedestrians within a hundred meters. But these sensors are very expensive, especially the lidar unit rotating on the roof, which costs nearly ten thousand dollars apiece. With multiple units installed, the price climbs even higher.

By comparison, cameras are much cheaper. If an autonomous vehicle could locate passersby using cameras alone, the rapid spread of driverless cars would take a big step forward. What the vehicle then needs in order to "see" the road is a capable video analysis system.

To date, the best video analysis systems have used deep neural network technology, in which a machine learns, through training, to accurately distinguish images and other kinds of data. In a deep neural network, the video analysis pipeline is divided into several layers: an input layer, an output layer, and multiple processing layers in between.

During image recognition, the input layer takes in the pixel features of an image. The next layer learns to combine those features, and the middle layers, processing layer by layer, gradually build more complex associations between pixels and objects. Finally, the output layer makes a guess about what the whole system has "seen."
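As a rough illustration of that layered structure, here is a minimal NumPy sketch of a feedforward network classifying one image patch. The layer sizes and the random weights are illustrative assumptions only; this is not Google's actual model, whose architecture has not been published.

```python
# Minimal sketch of the input -> hidden -> output layering described above.
# All sizes and weights are illustrative; real networks learn their weights.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Input layer: a 32x32 grayscale patch flattened into 1,024 pixel values.
pixels = rng.random(32 * 32)

# Hidden "processing" layers combine pixel features into progressively
# more abstract associations between pixels and objects.
w1, b1 = rng.normal(0, 0.05, (256, 1024)), np.zeros(256)
w2, b2 = rng.normal(0, 0.05, (64, 256)), np.zeros(64)

# Output layer: two scores, "pedestrian" vs. "not a pedestrian".
w3, b3 = rng.normal(0, 0.05, (2, 64)), np.zeros(2)

h1 = relu(w1 @ pixels + b1)    # input layer features
h2 = relu(w2 @ h1 + b2)        # deeper layer builds on the layer below
probs = softmax(w3 @ h2 + b3)  # output layer "guesses" what it saw

print(f"P(pedestrian) = {probs[0]:.3f}, P(other) = {probs[1]:.3f}")
```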

Modern deep neural networks achieve recognition accuracy above 99.5%; pitted against us in recognition contests, they can outperform the human brain. But cameras have their shortcomings. Anelia Angelova, a Google research scientist working on computer vision and machine learning, put it this way: "Visual information can give the car a broader view than radar data, but the whole process is slower." That is why traditional deep neural network techniques have been slow to reach pedestrian detection.

Most of the time goes into how the system works: it divides each street scene into 100,000 or more small patches and analyzes them one by one, so a single image takes anywhere from several seconds to several minutes. In urban navigation, where a vehicle covers a long distance in just a few seconds, pedestrian detection that slow is useless. In a recent test, a car using this kind of deep neural network to identify pedestrians ended up hitting pedestrian-shaped props.
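To see where the time goes, here is a back-of-the-envelope sketch of exhaustive sliding-window scanning. The frame size, window, stride, scale count, and per-patch cost below are all illustrative assumptions, not figures from Google; they merely reproduce the order of magnitude the article describes.

```python
# Back-of-the-envelope sketch of why exhaustive sliding-window scanning
# is slow. Every number here is an illustrative assumption.

def patch_count(width, height, window, stride):
    """Number of window positions scanned across one image."""
    cols = (width - window) // stride + 1
    rows = (height - window) // stride + 1
    return cols * rows

# A street-view frame scanned with a small window and a fine stride,
# repeated at several scales to catch both near and far pedestrians.
per_scale = patch_count(width=1280, height=720, window=64, stride=6)
scales = 5
total = per_scale * scales
print(f"patches per frame: {total:,}")  # ~110,000 -- the 100,000+ above

# Even at 50 microseconds of network time per patch, one frame takes
# several seconds -- far too slow for a moving vehicle.
print(f"time per frame: {total * 50e-6:.1f} s")
```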

The new system detects pedestrians in three steps

[Image: Google's deep learning system detecting pedestrians in different situations]

The image above shows Google's deep learning system detecting pedestrians in a variety of situations. The latest pedestrian detection system still relies on camera images to capture pedestrian movement, but it fixes the speed problem. Detection is faster and proceeds in three steps. Let's take a closer look at the recognition process:

In the first step, a deep neural network learns the pixel features of the picture, as before. The difference is that each photo is "torn" into only dozens of patches, not the thousands of the old method. The network, trained on many different scenarios, works in parallel to pick out the portions of the image it believes contain pedestrians.

In the second step, another deep neural network goes to work, refining the results of the first step and further analyzing and filtering the candidate data.

The third step resembles the traditional approach: it judges whether each candidate is a pedestrian or some other obstacle, and outputs the final result.
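Taken together, the three steps form a cascade: a cheap coarse pass proposes a few dozen candidate regions, a second network refines them, and a final stage makes the call. Here is a minimal sketch of that control flow; the scoring functions are stand-in stubs and every name and number is hypothetical, since Google has not published its implementation.

```python
# Minimal sketch of the three-step cascade described above. The "networks"
# are stand-in scoring stubs, not Google's (unpublished) models.
import random

random.seed(0)

def step1_coarse_proposals(frame, num_patches=24, keep=8):
    """Tear the frame into only dozens of large patches and keep the few a
    fast, coarse network scores as possibly containing a pedestrian."""
    patches = [(i, random.random()) for i in range(num_patches)]  # (id, score)
    patches.sort(key=lambda p: p[1], reverse=True)
    return patches[:keep]

def step2_refine(candidates, keep=3):
    """A second network re-scores only the survivors, filtering the
    candidate set further before the final decision."""
    refined = [(pid, 0.5 * s + 0.5 * random.random()) for pid, s in candidates]
    refined.sort(key=lambda p: p[1], reverse=True)
    return refined[:keep]

def step3_classify(finalists, threshold=0.6):
    """The final stage judges pedestrian vs. other obstacle for each
    remaining region and outputs the result."""
    return [(pid, "pedestrian" if s >= threshold else "other obstacle")
            for pid, s in finalists]

frame = object()  # placeholder for one camera frame
detections = step3_classify(step2_refine(step1_coarse_proposals(frame)))
print(detections)
```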

The number of steps hasn't shrunk, but each patch analyzed is now larger, and after the screening stage the system only has to focus on the small image regions that may contain pedestrians. The result is up to 100 times faster than the network described above. Installed on Google's self-driving cars and Street View capture vehicles, the system needs only about a day of training, after which the car is up to speed and can accurately recognize pedestrians in roughly 0.25 seconds.
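The claimed speedup is plausible on simple arithmetic. The sketch below compares the patch counts from the two approaches described above; the 30x relative cost of the larger patches is a made-up assumption chosen only to show how the factors combine.

```python
# Rough arithmetic behind an on-the-order-of-100x speedup. The cost
# ratio is an illustrative assumption, not a published figure.
old_patches = 100_000        # exhaustive sliding-window patches per frame
new_patches = 24 + 8 + 3     # coarse pass + refinement + final decision
cost_ratio = 30              # assume each large patch costs ~30x a small one

speedup = old_patches / (new_patches * cost_ratio)
print(f"approximate speedup: {speedup:.0f}x")  # ~95x, on the order of 100x
```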

It is also worth noting that judging what lies ahead from an image is a process of comparing new data against existing data. Previously, Google's self-driving system compared each frame against pedestrian images collected earlier in the video before drawing a conclusion. Now the researchers use a pedestrian image database, letting the system compare against what the network has already learned from that library, which saves additional analysis time.

An autonomous vehicle must be able to determine whether what lies directly ahead is a human before it can safely carry out an avoidance maneuver. Angelova said that although the system has not yet reached the ideal real-time response of 0.07 seconds, it has already become an effective substitute when other sensors fail.

Car Cloud's summary:

As this article went to press at Car Cloud, news broke that Google had acquired the sensor company Lumedyne, which may supply sensor products for its driverless cars in the future. As processors grow more powerful, deep neural networks will learn ever more capably, and the performance gains are worth looking forward to. Fast iteration and adoption of the technology should drive costs down. The rotating roof-mounted lidar may one day disappear, and you and I may yet roll down the window of a driverless car and wave hello.
