Overcoming the limitations of GPS and having more precise locations makes it easier for riders and drivers to find one another, improves estimated times of arrival (ETAs), reduces rider and driver cancellations, and makes the whole system much more efficient, Uber says.

Researchers have always known that for vehicles to be completely autonomous, they need to accurately predict what other vehicles in their vicinity are going to do. That is not an easy task, even for human drivers. Uber researchers are now using artificial intelligence, informed by the vehicle’s detection systems and extensive map data, to significantly improve traffic prediction accuracy.
There’s no room for error
If an autonomous vehicle makes a wrong decision, tragedy can strike. The world was reminded of this in 2018 when a pedestrian died after being hit by an Uber self-driving prototype. She was pushing her bike at night across a poorly lit multi-lane road in Tempe, Arizona. A report into the crash, issued by the National Transportation Safety Board, said the car’s radar and lidar systems registered the pedestrian six seconds before she was hit. However, the software first classified her as an unknown object, then as a vehicle, and finally as a bicycle. Each of these has very different expected movements. It was only 1.3 seconds before the impact that the need for an emergency braking maneuver to avoid the collision was recognized. Unfortunately, the car’s own emergency braking system had been disabled during testing to prevent erratic self-driving responses, and the back-up driver failed to see the pedestrian.
Uber researchers make significant advances
To operate safely on public roads, driverless technologies must be able to detect, monitor and predict the path of surrounding vehicles. Uber’s Advanced Technologies Group (ATG) has suggested a way to improve the prediction of how detected surrounding traffic will move. The research is new because it uses something called a generative adversarial network (GAN), instead of less complex methods, to predict traffic movements. The researchers claim this will advance autonomous driving by enabling a 10-fold improvement in the precision of traffic movement predictions.
What is a generative adversarial network (GAN)?
A GAN is a class of machine learning frameworks introduced in 2014. Put simply, a generative network creates fictitious outputs in an attempt to get them approved by a discriminator network that has been ‘trained’ on correct real-world examples. Rather than simply checking whether real-world inputs match a set of pre-defined examples, the discriminator must judge candidates that the generator has created specifically to fool it into thinking they are real. Hence the ‘adversarial’ in the name. During the process, the two networks learn from each other and get better at their respective roles.
An example could be a GAN creating art works in an attempt to convince the discriminator network they are genuine works from a given artist.
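To make the adversarial idea concrete, here is a minimal, generic GAN training loop in TensorFlow (the platform mentioned later in this article). It is a sketch of the general technique only, not Uber’s SC-GAN code; the toy two-dimensional data, network sizes and learning rates are assumptions made for illustration.

```python
# Minimal, generic GAN training step (illustrative sketch only, not Uber's SC-GAN).
import tensorflow as tf

LATENT_DIM = 16

# Generator: turns random noise into a fake 2-D sample.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(LATENT_DIM,)),
    tf.keras.layers.Dense(2),
])

# Discriminator: scores how "real" a 2-D sample looks.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1),
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(real_batch):
    noise = tf.random.normal([tf.shape(real_batch)[0], LATENT_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_batch = generator(noise, training=True)
        real_scores = discriminator(real_batch, training=True)
        fake_scores = discriminator(fake_batch, training=True)
        # The discriminator tries to label real samples 1 and fakes 0 ...
        d_loss = (bce(tf.ones_like(real_scores), real_scores)
                  + bce(tf.zeros_like(fake_scores), fake_scores))
        # ... while the generator is rewarded when its fakes score as real.
        g_loss = bce(tf.ones_like(fake_scores), fake_scores)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return g_loss, d_loss
```

Each training step pushes the discriminator to separate real from fake samples, while the generator improves by producing fakes the discriminator can no longer tell apart.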
Uber develops a scene-compliant GAN
Uber’s scene-compliant GAN (SC-GAN) generates possible ‘other vehicle’ movements that conform to the constraints of a scene or location. To achieve this, the SC-GAN is given access to high-definition maps, which include features such as roads, marked pedestrian crossings, signage, traffic lights and lane directions. This is combined with data from on-vehicle tracking and detection systems, such as radar, lidar and camera sensors. The SC-GAN builds a frame of reference for each nearby vehicle, placing that vehicle at the origin of x and y axes aligned with its heading and its left side.
Each vehicle for which the SC-GAN predicts possible future paths is given a matrix representation of an image that encodes the scene-context information and map constraints mentioned above. These images cover 10 metres in front of and behind the vehicle and 30 metres to either side.
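As a rough illustration of that representation, the sketch below builds an actor-centric occupancy grid from map geometry. The 10-metre and 30-metre extents follow the article; the 0.5 m resolution, the single map channel and the helper names are assumptions made here for illustration, not Uber’s implementation.

```python
# Hedged sketch: an actor-centric occupancy grid for one tracked vehicle.
import numpy as np

RES = 0.5        # metres per cell (assumed)
X_EXTENT = 10.0  # metres ahead of and behind the actor (per the article)
Y_EXTENT = 30.0  # metres to either side of the actor (per the article)

def world_to_actor_frame(points_xy, actor_xy, actor_heading):
    """Rotate and translate world-frame points into the actor's frame
    (x along the actor's heading, origin at the actor)."""
    c, s = np.cos(-actor_heading), np.sin(-actor_heading)
    rot = np.array([[c, -s], [s, c]])
    return (points_xy - actor_xy) @ rot.T

def rasterise_scene(actor_xy, actor_heading, map_polylines):
    """Return a binary grid marking map geometry (lanes, crossings, ...) around
    the actor. One channel only; a full model would stack many channels
    (lane directions, traffic lights, other actors, ...)."""
    nx = int(2 * X_EXTENT / RES)
    ny = int(2 * Y_EXTENT / RES)
    grid = np.zeros((nx, ny), dtype=np.float32)
    for polyline in map_polylines:                       # each: (N, 2) world coords
        local = world_to_actor_frame(polyline, actor_xy, actor_heading)
        ix = ((local[:, 0] + X_EXTENT) / RES).astype(int)
        iy = ((local[:, 1] + Y_EXTENT) / RES).astype(int)
        keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
        grid[ix[keep], iy[keep]] = 1.0
    return grid
```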
Putting it to the test
Uber’s research team put their AI system to work in TensorFlow, Google’s machine learning platform. They used a large set of real-world data from 240 hours of driving at different times, on different days and in different traffic conditions across several US cities. Each vehicle created a data point every tenth of a second. Each data point contained the current recording and the previous 0.4 seconds of recordings of velocity, acceleration, heading and turning rate. The resulting 7.8 million data points and their surrounding HD map information were split into sets for model training, testing and evaluation.
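The sketch below shows how such 10 Hz data points with a 0.4-second history window might be assembled and split. The field names, split ratios and helper functions are assumptions for illustration, not the team’s actual pipeline.

```python
# Hedged sketch: building 10 Hz data points with a 0.4 s history window.
import numpy as np

HISTORY_STEPS = 5  # the current step plus the previous 0.4 s at 10 Hz

def make_data_points(track):
    """track: dict of equal-length arrays sampled at 10 Hz
    (velocity, acceleration, heading, turn_rate)."""
    features = np.stack([track["velocity"], track["acceleration"],
                         track["heading"], track["turn_rate"]], axis=-1)
    points = []
    for t in range(HISTORY_STEPS - 1, len(features)):
        points.append(features[t - HISTORY_STEPS + 1 : t + 1])  # (5, 4) window
    return np.array(points)

def split(points, rng=np.random.default_rng(0)):
    """Split into training / testing / evaluation sets (ratios assumed)."""
    idx = rng.permutation(len(points))
    n_train = int(0.8 * len(points))
    n_test = int(0.1 * len(points))
    return (points[idx[:n_train]],
            points[idx[n_train:n_train + n_test]],
            points[idx[n_train + n_test:]])
```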
The results
According to the team of researchers, compared to baseline results, SC-GAN produced a 50% reduction in predicted vehicle paths that were actually off the road or outside a drivable area for each car. They also reported that it outperformed the existing state-of-the-art GAN architectures for motion prediction, ‘significantly decreasing’ both the average and final prediction errors.
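For context, metrics of this kind are typically computed along the following lines. This is a hedged sketch of common trajectory-prediction metrics; the drivable-area mask and the to_cell helper are assumptions for illustration, not the paper’s exact evaluation code.

```python
# Hedged sketch: average/final displacement errors and an off-road rate.
import numpy as np

def displacement_errors(pred, truth):
    """pred, truth: (T, 2) trajectories in metres.
    Returns the average and the final point-to-point error."""
    dists = np.linalg.norm(pred - truth, axis=-1)
    return dists.mean(), dists[-1]

def off_road_rate(pred, drivable_mask, to_cell):
    """drivable_mask: boolean grid of the drivable area;
    to_cell: function mapping an (x, y) point to a grid cell index (assumed)."""
    off = [not drivable_mask[to_cell(p)] for p in pred]
    return float(np.mean(off))
```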
The researchers also observed that the SC-GAN successfully predicted car movements, even in quite challenging borderline cases. For example, when a car was approaching an intersection in a straight-through-only lane, SC-GAN rightly predicted that it would continue straight, even though the car’s tracked heading was slightly tilted to the left. In another scene, SC-GAN correctly anticipated that a car would take a right turn after approaching an intersection in a turning lane.
So what does this mean for autonomous driving?
As the study’s authors pointed out, predicting motion is one of the essential components of faultless self-driving technology. It is about modelling the future behaviour, and the uncertainty, of detected and tracked actors in the vicinity of the self-driving vehicle.
They made it clear that their extensive analysis showed the method they developed performed better than the best available GAN-based motion prediction of surrounding actors. The important outcome is the more realistic and accurate generation of possible trajectories.
Insuring a Driverless Car
Traditionally, auto insurance has followed the driver, regardless of what vehicle they’re driving. When the car is driving itself, this paradigm is completely disrupted. Insuring automated vehicles comes down to a basic question of what it means to drive, and there has yet to be a conclusive answer to this question as far as insurance liability is concerned.