Given a satellite image, machine learning creates the view on the ground

Geographers could use the technique to determine how land is used.

Leonardo da Vinci famously created drawings and paintings that showed a bird’s eye view of certain areas of Italy with a level of detail that was not otherwise possible until the invention of photography and flying machines. Indeed, many critics have wondered how he could have imagined these details. But now researchers are working on the inverse problem: given a satellite image of Earth’s surface, what does that area look like from the ground? How clear can such an artificial image be?

Today we get an answer thanks to the work of Xueqing Deng and colleagues at the University of California, Merced. These guys have trained a machine-learning algorithm to create ground-level images simply by looking at satellite pictures from above. The technique is based on a form of machine intelligence known as a generative adversarial network. This consists of two neural networks called a generator and a discriminator.

The generator creates images that the discriminator assesses against some learned criteria, such as how closely they resemble giraffes. By using the output from the discriminator, the generator gradually learns to produce images that look like giraffes.
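The generator–discriminator loop described above can be sketched end to end on a toy problem. In this minimal sketch (my own illustration, not the authors' model), the "images" are just numbers drawn from a one-dimensional distribution: an affine generator learns to mimic the real distribution by fooling a logistic-regression discriminator. All hyperparameters and the choice of distribution are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "real" data the generator must learn to imitate
# (a stand-in for the statistics of genuine images).
REAL_MEAN, REAL_STD = 4.0, 1.0

# Generator: x = w*z + b (an affine map of random noise z).
w, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(a*x + c), the probability that x is real.
a, c = 0.0, 0.0


def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))


lr, batch = 0.03, 64

for step in range(5000):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. learn to score real samples high and generated samples low.
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient descent on -log D(fake) (non-saturating loss),
    # i.e. learn to produce samples the discriminator scores as real.
    d_fake = sigmoid(a * fake + c)
    w -= lr * (-np.mean((1 - d_fake) * a * z))
    b -= lr * (-np.mean((1 - d_fake) * a))

# After training, the generator's output distribution should have drifted
# toward the real one.
samples = w * rng.normal(0.0, 1.0, 10000) + b
print(f"generated mean ~ {samples.mean():.2f} (target {REAL_MEAN})")
```

The conditional version used in the paper works the same way, except that both networks also see the overhead image, so the generator must produce a ground-level view consistent with that particular satellite tile.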

In this case, Deng and co trained the discriminator using real images of the ground as well as satellite images of the same locations. In this way, it learns how to associate a ground-level image with its overhead view. Of course, the quality of the data set is important. The team uses as ground truth the LCM2015 land-cover map, which gives the class of land at a one-kilometre resolution for the entire UK. However, the team limits the data to a 71×71-kilometre grid that includes London and the surrounding countryside. For each location in this grid, they downloaded a ground-level view from an online database called Geograph.
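Assembling the training pairs amounts to snapping each Geograph photo to the one-kilometre grid cell of the overhead tile that covers it. A hypothetical sketch of that bookkeeping, with made-up file names and coordinates:

```python
# Pair each ground-level photo with the overhead tile covering it,
# snapping coordinates to a 1 km grid (as in the LCM2015 resolution).
GRID_KM = 1.0


def grid_cell(easting_km, northing_km):
    """Map a coordinate to the 1 km x 1 km cell that contains it."""
    return (int(easting_km // GRID_KM), int(northing_km // GRID_KM))


# Toy stand-ins for the real data sources (names are illustrative).
overhead_tiles = {(530, 180): "tile_530_180.png", (531, 180): "tile_531_180.png"}
geograph_photos = [
    {"easting_km": 530.4, "northing_km": 180.7, "file": "photo_a.jpg"},
    {"easting_km": 531.9, "northing_km": 180.1, "file": "photo_b.jpg"},
    {"easting_km": 600.0, "northing_km": 300.0, "file": "photo_c.jpg"},  # outside grid
]

pairs = []
for photo in geograph_photos:
    cell = grid_cell(photo["easting_km"], photo["northing_km"])
    if cell in overhead_tiles:  # keep only locations the grid covers
        pairs.append((overhead_tiles[cell], photo["file"]))

print(pairs)  # photo_c is dropped: no overhead tile covers it
```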

The team then trained the discriminator with 16,000 pairs of overhead and ground-level images. The next step was to generate ground-level images: the generator was fed a set of 4,000 satellite images of specific locations and had to create a ground-level view for each, using feedback from the discriminator. The team then compared these 4,000 generated views with the corresponding ground-truth images.

The results make for interesting reading. The network produces images that are plausible given the overhead image, if relatively low in quality. The generated images capture basic qualities of the ground, such as whether it shows a road, whether the land is rural or urban, and so on. “The generated ground-level images looked natural although, as expected, they lacked the details of real images,” said Deng and co.

That’s a neat trick, but how useful is it? One important task for geographers is to classify land according to its use, such as whether it is rural or urban. Ground-level images are essential for this. However, existing databases tend to be sparse, particularly in rural locations, so geographers have to interpolate between the images, a process that is little better than guessing.

Now Deng and co’s generative adversarial networks provide an entirely new way to determine land use. When geographers want to know the ground-level view at any location, they can simply create the view with the neural network based on a satellite image. Deng and co even compare the two methods—interpolation versus image generation. The new technique turns out to correctly determine land use 73 per cent of the time, while the interpolation method is correct in just 65 per cent of cases.
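The interpolation baseline can be sketched as a nearest-neighbour lookup: predict the land use at a query point from the closest location that has a ground photo. This is an illustrative reconstruction of that kind of baseline, not the paper's exact method:

```python
# Baseline sketch: with sparse ground photos, predict land use at a
# query point from the nearest labelled location.
def nearest_label(query, labelled):
    """labelled: list of ((x, y), land_use) pairs; returns nearest land use."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    return min(labelled, key=lambda item: dist2(item[0], query))[1]


labelled = [((0, 0), "urban"), ((10, 0), "rural"), ((0, 10), "rural")]
print(nearest_label((1, 1), labelled))  # "urban"
print(nearest_label((8, 1), labelled))  # "rural"


# Both methods are then scored the same way: the fraction of locations
# whose predicted land-use class matches the ground truth.
def accuracy(predictions, truth):
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)
```

In the paper's comparison, the generated-image pipeline replaces `nearest_label` with a classifier run on the synthesised ground-level view, and it is this accuracy score that comes out at 73 per cent versus 65 per cent.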

That’s interesting work that could make geographers’ lives easier. But Deng and co have greater ambitions. They hope to improve the image generation process so that in future it will produce even more detail in the ground-level images. Leonardo da Vinci would surely be impressed.

Ref: arxiv.org/abs/1806.05129 : What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks

A replacement for traffic lights gets its first test

In-car signals based on vehicle-to-vehicle communication could reduce commuting time by 20 per cent, say researchers.

The world’s first traffic light system began operating near the Houses of Parliament in London in 1868. It consisted of a set of gas lights operated by a policeman and designed to control the flow of horse-drawn traffic across the Thames. The trial was a success, at least as far as traffic control was concerned. But the experiment was short-lived. A few months after the lights were installed, they exploded following a gas leak, injuring the policeman who controlled them. Since then, pedestrians and motorists have enjoyed an uneasy relationship with traffic lights. When they work well, they provide an efficient, neutral system for determining priority on the roads. But when they work badly, the result can be traffic jams for miles around.

So automotive engineers, motorists, and pedestrians alike would dearly love to know whether an alternative is feasible. Today they get an answer of sorts, thanks to the work of Rusheng Zhang at Carnegie Mellon University in Pittsburgh and a few colleagues. These guys have tested a way of ridding our streets of traffic lights entirely and replacing them with a virtual system instead, saying that their system has the potential to dramatically reduce commuting times.

First, some background. The problem that Zhang and co tackle is coordinating the flow of traffic through a junction where two roads meet at right angles. These are often uncontrolled, so motorists have to follow strict rules about when they can pass, such as those that apply at four-way stop signs. This causes delays and jams.

To solve the problem, Zhang and co use the direct short-range radio systems that are increasingly being built into modern vehicles. These act as a vehicle-to-vehicle communication system that shares data such as GPS coordinates, speed, and direction. This data passes to an onboard computer programmed with the team’s virtual traffic light protocol, which issues the driver a green or red light that is displayed in the cabin.
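The shared state might look something like the sketch below. The field names are illustrative only, chosen to match the data listed above; they do not follow any real DSRC message specification.

```python
from dataclasses import dataclass


# Hypothetical shape of the state each vehicle broadcasts over
# short-range radio to nearby cars.
@dataclass
class V2VMessage:
    vehicle_id: str
    lat: float
    lon: float
    speed_mps: float    # speed in metres per second
    heading_deg: float  # direction of travel: 0 = north, clockwise


# A car approaching a junction broadcasts its current state.
msg = V2VMessage("car-17", 40.4433, -79.9436, 8.9, 270.0)
print(msg.vehicle_id, msg.speed_mps)
```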

The virtual traffic light system is simple in principle. When two cars approach a junction on different roads, they elect a lead vehicle to control the junction. The leader holds a red light and gives the other car priority with a green light. The leader then receives its own green light, and when it moves off, leadership passes to the next vehicle elected at the junction.
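The handover logic just described can be simulated in a few lines. The election rule used here, that the vehicle nearest the junction becomes leader, is an assumption for illustration; the paper's protocol may elect leaders differently.

```python
from collections import deque


# Minimal sketch of the protocol as described: the elected leader holds
# a red light, grants green to the cross traffic first, takes its own
# green, then hands leadership to the next vehicle at the junction.
def run_junction(cars):
    """cars: list of (car_id, distance_to_junction) tuples.
    Returns the order in which cars receive a green light and cross."""
    waiting = deque(sorted(cars, key=lambda c: c[1]))  # nearest first
    crossed = []
    while waiting:
        leader, _ = waiting.popleft()      # elect leader: it holds a red light
        if waiting:
            other, _ = waiting.popleft()   # cross traffic gets green first
            crossed.append(other)
        crossed.append(leader)             # leader takes its own green
        # Leadership now passes to the next waiting vehicle (loop continues).
    return crossed


# B is nearer, so B leads: it grants A green first, then crosses itself.
print(run_junction([("A", 12.0), ("B", 9.5)]))  # ['A', 'B']
```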

Zhang and co tested this approach by setting up a road system in a parking lot in Pittsburgh. This system is based on a standard road layout taken from Open Maps, chosen because of its similarity to the road layout in many US cities. The team then drove two cars around this network in opposite directions, measuring how long it took to navigate 20 junctions using the virtual traffic lights and then again using ordinary four-way stop signs.

The results make for interesting reading. Zhang and co say the virtual system dramatically improves commuting times. “The results show that [virtual traffic lights] reduce the commute time by more than 20% on routes with unsignalized intersections,” they say. And further improvements are possible, yielding up to 30 per cent reductions in commute times. However, the work leaves significant challenges ahead. For example, in many places, traffic signals regulate cars and pedestrians. Zhang and co suggest that pedestrians could be included in the protocol using a smartphone app.

This raises plenty of questions about people who cannot use apps, such as young, old, and disabled road users. These people are among those who benefit most from ordinary traffic lights, so they must be included from the start in the design of any alternative. Then there is the question of how to handle older cars, motorbikes, and bicycles that are not equipped with vehicle-to-vehicle communication. Such communication may well become standard in new cars quickly, but simpler vehicles are likely to be a feature of our roads for decades to come. How will the new vehicles cope with those that do not use the virtual traffic light system?

And finally, a grid-like road structure is common in American cities, many of which expanded only after the invention of the car. However, grids are much rarer in European and Asian cities, where road layouts are often unstructured and chaotic. Just how virtual traffic lights might cope with these isn't clear.

Nevertheless, automation is coming. Many cars already have significant levels of driving automation, and the obvious next step is coordination between vehicles, where there is likely to be a significant benefit. Virtual traffic lights are likely to be just one part of this trend. At least they should be safe from gas leaks.

Ref: arxiv.org/abs/1807.01633 :  Virtual Traffic Lights: System Design and Implementation