DeepMind Uses GNNs to Boost Google Maps ETA Accuracy by up to 50%

Launched 15 years ago, Google Maps is the world’s most popular navigation app by a wide margin, according to German online portal Statista. In a Google Cloud blog post published last September, Google Maps Director of Product Ethan Russell said more than a billion people use Google Maps every month and some five million active apps and websites access Google Maps Platform core products each week.

The ever-industrious DeepMind researchers, meanwhile, have been working to improve Google Maps further. This week the UK-based AI company and research lab unveiled a partnership with the Google Maps team that leverages Graph Neural Networks (GNNs) to improve estimated time of arrival (ETA) accuracy.

The coordinated efforts have boosted the accuracy of real-time ETAs by up to 50 percent in cities such as Berlin, Jakarta, São Paulo, Sydney, Tokyo and Washington DC.

ETAs and traffic predictions are useful tools that enable users to efficiently plan departure times, avoid traffic jams, and notify friends and family of unexpected late arrivals. These features are also critical for businesses such as rideshare companies and delivery platforms.

To calculate ETAs, Google Maps analyses global live traffic data for relevant road segments. While this provides an accurate picture of current conditions, it doesn’t account for what a driver may encounter 10, 20, or even 50 minutes into their trip.

To accurately predict future traffic, Google Maps uses machine learning to combine live traffic conditions with historical traffic patterns for roads. This is a complex process because of variations in road quality, speed limits, accidents, construction and road closures, as well as differences in the timing of rush hours across locations.
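To make the intuition concrete, the toy function below blends a segment's live speed with its historical speed, trusting the live reading less the further into the future the segment will be reached. The weighting formula is purely an assumption for illustration; the production system learns this relationship with machine learning rather than applying a hand-set rule.

```python
# Illustrative only: a hand-tuned blend of live and historical speeds.
# Google Maps learns this relationship; this sketch just shows why both
# signals matter for segments reached later in a trip.
def predicted_speed_kmh(live_speed: float, historical_speed: float,
                        minutes_until_reached: float,
                        horizon_minutes: float = 50.0) -> float:
    """Blend live and historical speeds, decaying trust in live data with horizon."""
    # Weight on live data falls from 1.0 (right now) towards 0.0 (far future).
    live_weight = max(0.0, 1.0 - minutes_until_reached / horizon_minutes)
    return live_weight * live_speed + (1.0 - live_weight) * historical_speed

# A segment reached 30 minutes from now leans mostly on the historical pattern.
print(predicted_speed_kmh(live_speed=15.0, historical_speed=40.0,
                          minutes_until_reached=30.0))  # -> 30.0
```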

While Google Maps’ predictive ETAs have been shown to be accurate for some 97 percent of trips, the DeepMind researchers set out to minimize the remaining inaccuracies. To do this at a global scale, they used GNNs — a generalized machine learning architecture — to conduct spatiotemporal reasoning by incorporating relational learning biases to model the connectivity structure of real-world road networks.

The researchers divided road networks into “Supersegments” consisting of multiple adjacent segments of road that share significant traffic volumes. Their model treats the local road network as a graph, where each route segment corresponds to a node and edges exist between segments that are consecutive on the same road or connected through an intersection. These Supersegment subgraphs are sampled at random in proportion to traffic density.
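A minimal sketch of how such a Supersegment graph might be represented in code, assuming hypothetical segment attributes and identifiers (the actual data schema is not public):

```python
# Hypothetical representation of a Supersegment as a small road-network graph.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class RoadSegment:
    segment_id: str
    length_m: float              # segment length in metres (assumed attribute)
    live_speed_kmh: float        # current average speed from live traffic data
    historical_speed_kmh: float  # typical speed for this time of day

@dataclass
class Supersegment:
    """A subgraph of adjacent road segments sharing significant traffic volume."""
    nodes: dict[str, RoadSegment] = field(default_factory=dict)
    edges: list[tuple[str, str]] = field(default_factory=list)  # (from_segment, to_segment)

    def add_segment(self, seg: RoadSegment) -> None:
        self.nodes[seg.segment_id] = seg

    def connect(self, a: str, b: str) -> None:
        # Edges exist between segments that are consecutive on the same road
        # or connected through an intersection.
        self.edges.append((a, b))

# Example: three segments, two joined along the same road, one via an intersection.
ss = Supersegment()
ss.add_segment(RoadSegment("seg_A", 320.0, 28.0, 35.0))
ss.add_segment(RoadSegment("seg_B", 150.0, 12.0, 30.0))
ss.add_segment(RoadSegment("seg_C", 410.0, 45.0, 42.0))
ss.connect("seg_A", "seg_B")   # consecutive on the same road
ss.connect("seg_B", "seg_C")   # connected through an intersection
```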

In a GNN, a message-passing algorithm is executed in which the messages and their effects on edge and node states are learned by neural networks. A single model can therefore be trained on the sampled subgraphs and deployed at scale.
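The sketch below shows one message-passing step in PyTorch, where both the message function and the node-update function are small neural networks. The feature dimensions, summation-based aggregation and single-layer structure are illustrative assumptions, not DeepMind's production architecture.

```python
# A minimal message-passing layer: messages and state updates are learned.
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, node_dim: int, edge_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Learned message function: combines sender, receiver and edge states.
        self.message_fn = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, node_dim),
        )
        # Learned update function: new node state from old state + aggregated messages.
        self.update_fn = nn.Sequential(
            nn.Linear(2 * node_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, node_dim),
        )

    def forward(self, node_states, edge_states, senders, receivers):
        # node_states: [num_nodes, node_dim], edge_states: [num_edges, edge_dim]
        # senders / receivers: [num_edges] integer indices into node_states.
        msg_input = torch.cat(
            [node_states[senders], node_states[receivers], edge_states], dim=-1)
        messages = self.message_fn(msg_input)            # [num_edges, node_dim]
        # Sum incoming messages at each receiving node.
        aggregated = torch.zeros_like(node_states)
        aggregated.index_add_(0, receivers, messages)
        return self.update_fn(torch.cat([node_states, aggregated], dim=-1))

# Toy usage: 3 road-segment nodes and 2 directed edges (A->B, B->C).
layer = MessagePassingLayer(node_dim=8, edge_dim=4)
nodes = torch.randn(3, 8)
edges = torch.randn(2, 4)
senders = torch.tensor([0, 1])
receivers = torch.tensor([1, 2])
new_nodes = layer(nodes, edges, senders, receivers)      # shape [3, 8]
```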

While the ultimate goal of the new modelling system is to reduce errors in travel estimates, the researchers also found that using a linear combination of multiple loss functions, weighted appropriately, greatly increased the model’s generalization ability.
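As a rough illustration, such a weighted combination might pair an overall Supersegment travel-time loss with an auxiliary per-segment loss, as in the sketch below. The specific loss terms and weights are assumptions; the announcement only states that an appropriately weighted linear combination of several losses improved generalization.

```python
# Illustrative weighted multi-loss objective (terms and weights are assumed).
import torch
import torch.nn.functional as F

def combined_loss(pred_supersegment_time, pred_segment_times,
                  true_supersegment_time, true_segment_times,
                  w_super: float = 1.0, w_segment: float = 0.5):
    # Error on the whole Supersegment's travel time (the quantity users see).
    loss_super = F.l1_loss(pred_supersegment_time, true_supersegment_time)
    # Auxiliary error on per-segment travel times, acting as a regularizer.
    loss_segment = F.l1_loss(pred_segment_times, true_segment_times)
    # Linear combination with hand-chosen weights for this sketch.
    return w_super * loss_super + w_segment * loss_segment
```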

One big challenge the researchers faced was GNNs’ sensitivity to changes in the training curriculum. When training ML systems, the learning rate is often reduced over time, as there is a tradeoff between learning new things and forgetting important features already learned. The researchers developed a novel reinforcement learning technique that let their model learn its own optimal learning rate schedule, which produced more stable results and allowed them to deploy the system more quickly.
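As a conceptual stand-in for learning the schedule from feedback rather than fixing it in advance, the toy sketch below uses a simple epsilon-greedy bandit to pick among candidate learning rates based on observed validation improvement. DeepMind's actual technique is a more sophisticated reinforcement learning approach, so treat this only as an illustration of the idea.

```python
# Toy bandit that adapts the learning rate from validation feedback.
# This stands in for, but does not reproduce, DeepMind's RL-based schedule learning.
import random

class LearningRateBandit:
    def __init__(self, candidates=(1e-2, 3e-3, 1e-3, 3e-4), epsilon=0.1):
        self.candidates = list(candidates)
        self.value = {lr: 0.0 for lr in self.candidates}   # running reward estimates
        self.count = {lr: 0 for lr in self.candidates}
        self.epsilon = epsilon

    def select(self) -> float:
        # Explore occasionally, otherwise pick the best-performing rate so far.
        if random.random() < self.epsilon:
            return random.choice(self.candidates)
        return max(self.candidates, key=lambda lr: self.value[lr])

    def update(self, lr: float, reward: float) -> None:
        # Reward = decrease in validation loss achieved with this learning rate.
        self.count[lr] += 1
        self.value[lr] += (reward - self.value[lr]) / self.count[lr]

# Sketch of how it would sit in a training loop
# (train_one_epoch and validate are assumed helper functions):
# bandit = LearningRateBandit()
# prev_val = validate(model)
# for epoch in range(num_epochs):
#     lr = bandit.select()
#     train_one_epoch(model, lr=lr)
#     val = validate(model)
#     bandit.update(lr, reward=prev_val - val)
#     prev_val = val
```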
