Given a satellite image, machine learning creates the view on the ground

Geographers could use the technique to determine how land is used.

Leonardo da Vinci famously created drawings and paintings that showed a bird’s eye view of certain areas of Italy with a level of detail that was not otherwise possible until the invention of photography and flying machines. Indeed, many critics have wondered how he could have imagined these details. But now researchers are working on the inverse problem: given a satellite image of Earth’s surface, what does that area look like from the ground? How clear can such an artificial image be?

Today we get an answer thanks to the work of Xueqing Deng and colleagues at the University of California, Merced. These guys have trained a machine-learning algorithm to create ground-level images simply by looking at satellite pictures from above. The technique is based on a form of machine intelligence known as a generative adversarial network. This consists of two neural networks called a generator and a discriminator.

The generator creates images that the discriminator assesses against some learned criteria, such as how closely they resemble giraffes. By using the output from the discriminator, the generator gradually learns to produce images that look like giraffes.
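
To make that feedback loop concrete, here is a toy, one-dimensional sketch of adversarial training written in C. It is purely illustrative: a real generative adversarial network, including the one used in this work, pits two deep networks against each other over images, whereas this “generator” is a single number trying to mimic data centred near 5.0 and the “discriminator” is a logistic classifier.

    /* Toy 1-D adversarial training loop; illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    static double noise(void) {                 /* uniform noise in [-0.5, 0.5] */
        return (double)rand() / RAND_MAX - 0.5;
    }

    int main(void) {
        double w = 0.0, b = 0.0;                /* discriminator parameters  */
        double mu = 0.0;                        /* generator parameter       */
        const double lr = 0.05;                 /* learning rate (arbitrary) */

        for (int step = 0; step <= 2000; step++) {
            double real = 5.0 + noise();        /* sample of "real" data     */
            double fake = mu + noise();         /* sample from the generator */
            double d_real = 1.0 / (1.0 + exp(-(w * real + b)));
            double d_fake = 1.0 / (1.0 + exp(-(w * fake + b)));

            /* Discriminator ascends log D(real) + log(1 - D(fake)). */
            w += lr * ((1.0 - d_real) * real - d_fake * fake);
            b += lr * ((1.0 - d_real) - d_fake);

            /* Generator ascends log D(fake): shift until fakes score as real. */
            mu += lr * (1.0 - d_fake) * w;

            if (step % 500 == 0)
                printf("step %4d: generator mean = %.3f\n", step, mu);
        }
        return 0;
    }

In this toy setting, the generator's mean drifts toward the real data's mean of 5.0: it learns to produce samples the discriminator can no longer tell apart from real ones, the same dynamic that drives the image-generating network described below.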

In this case, Deng and co trained the discriminator using real images of the ground as well as satellite images of the same locations, so that it learned how to associate a ground-level image with its overhead view. Of course, the quality of the data set is important. The team used as ground truth the LCM2015 land-cover map, which gives the class of land at a one-kilometre resolution for the entire UK, but limited the data to a 71×71-kilometre grid that includes London and the surrounding countryside. For each location in this grid, they downloaded a ground-level view from an online database called Geograph.

The team then trained the system with 16,000 pairs of overhead and ground-level images. The next step was to generate ground-level images: the generator was fed a further set of 4,000 satellite images of specific locations and had to create ground-level views for each, using feedback from the discriminator. The team then compared the generated views with the real ground-truth images.

The results make for interesting reading. The network produces images that are plausible given the overhead image, if relatively low in quality. The generated images capture basic qualities of the ground, such as whether it shows a road, whether the land is rural or urban, and so on. “The generated ground-level images looked natural although, as expected, they lacked the details of real images,” said Deng and co.

That’s a neat trick, but how useful is it? One important task for geographers is to classify land according to its use, such as whether it is rural or urban. Ground-level images are essential for this. However, existing databases tend to be sparse, particularly in rural locations, so geographers have to interpolate between the images, a process that is little better than guessing.

Now Deng and co’s generative adversarial networks provide an entirely new way to determine land use. When geographers want to know the ground-level view at any location, they can simply create the view with the neural network based on a satellite image. Deng and co even compare the two methods—interpolation versus image generation. The new technique turns out to correctly determine land use 73 per cent of the time, while the interpolation method is correct in just 65 per cent of cases.

That’s interesting work that could make geographers’ lives easier. But Deng and co have greater ambitions. They hope to improve the image generation process so that in future it will produce even more detail in the ground-level images. Leonardo da Vinci would surely be impressed.

Ref: arxiv.org/abs/1806.05129 : What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks

A replacement for traffic lights gets its first test

In-car signals based on vehicle-to-vehicle communication could reduce commuting time by 20 per cent, say researchers.

The world’s first traffic light system began operating near the Houses of Parliament in London in 1868. It consisted of a set of gas lights operated by a policeman and designed to control the flow of horse-drawn traffic across the Thames. The trial was a success, at least as far as traffic control was concerned. But the experiment was short-lived. A few months after the lights were installed, they exploded following a gas leak, injuring the policeman who controlled them. Since then, pedestrians and motorists have enjoyed an uneasy relationship with traffic lights. When they work well, they provide an efficient, neutral system for determining priority on the roads. But when they work badly, the result can be traffic jams for miles around.

So automotive engineers, motorists, and pedestrians alike would dearly love to know whether an alternative is feasible. Today they get an answer of sorts, thanks to the work of Rusheng Zhang at Carnegie Mellon University in Pittsburgh and a few colleagues. These guys have tested a way of ridding our streets of traffic lights entirely and replacing them with a virtual system instead, saying that their system has the potential to dramatically reduce commuting times.

First, some background. The problem that Zhang and co tackle is coordinating the flow of traffic through a junction where two roads meet at right angles. These are often uncontrolled, so motorists have to follow strict rules about when they can pass, such as those that apply at four-way stop signs. This causes delays and jams.

To solve the problem, Zhang and co use the dedicated short-range radio systems that are increasingly being built into modern vehicles. These act as a vehicle-to-vehicle communication system that shares data such as GPS coordinates, speed, and direction. This data passes to an onboard computer programmed with the team’s virtual traffic light protocol, which issues the driver a green or red light that is displayed in the cabin.

The virtual traffic light system is simple in principle. When two cars approach a junction on different roads, they elect a lead vehicle that controls the junction. The leader is given a red light and gives the other car priority with a green light. The leader then receives its own green light, and when it moves off, it hands over control to the next leader elected at the junction.
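
The handover logic can be sketched in a few lines of C. The message fields and the tie-break rule used to elect the leader below are illustrative assumptions, not the exact protocol from the paper.

    /* Minimal sketch of virtual-traffic-light leader election and handover. */
    #include <stdio.h>

    typedef struct {
        int    id;          /* vehicle identifier                      */
        char   road;        /* 'N' or 'E': which approach it is on     */
        double dist_m;      /* distance to the junction, from GPS data */
    } Vehicle;

    /* Assumption: the vehicle farther from the junction becomes leader,
     * since it has more time to manage the crossing. */
    static const Vehicle *elect_leader(const Vehicle *a, const Vehicle *b) {
        return (a->dist_m >= b->dist_m) ? a : b;
    }

    int main(void) {
        Vehicle a = { 1, 'N', 40.0 };
        Vehicle b = { 2, 'E', 25.0 };

        const Vehicle *leader = elect_leader(&a, &b);
        const Vehicle *other  = (leader == &a) ? &b : &a;

        printf("vehicle %d elected leader: in-cabin light RED\n", leader->id);
        printf("vehicle %d granted priority: in-cabin light GREEN\n", other->id);
        printf("vehicle %d has crossed; leader %d now gets GREEN\n",
               other->id, leader->id);
        printf("leader %d crosses and hands control to the next leader\n",
               leader->id);
        return 0;
    }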

Zhang and co tested this approach by setting up a road system in a parking lot in Pittsburgh. The system was based on a standard road layout taken from OpenStreetMap, chosen for its similarity to the road layout in many US cities. The team then drove two cars around this network in opposite directions, measuring how long it took to navigate 20 junctions using the virtual traffic lights and then again using ordinary four-way stop signs.

The results make for interesting reading. Zhang and co say the virtual system dramatically improves commuting times. “The results show that [virtual traffic lights] reduce the commute time by more than 20% on routes with unsignalized intersections,” they say. And further improvements are possible, yielding up to 30 per cent reductions in commute times. However, the work leaves significant challenges ahead. For example, in many places, traffic signals regulate cars and pedestrians. Zhang and co suggest that pedestrians could be included in the protocol using a smartphone app.

This raises plenty of questions about people who are unable to access apps, such as the young, the elderly, and people with disabilities. These groups are among those who benefit most from ordinary traffic lights, so they must be included from the beginning in designing alternatives. Then there is the question of how to include older cars, motorbikes, and bicycles that are not equipped with vehicle-to-vehicle communication systems. Vehicle-to-vehicle communication may well become standard in new cars quickly, but simpler vehicles are likely to be a feature of our roads for decades to come. How will the new vehicles cope with those that do not use the virtual traffic light system?

And finally, a grid-like road structure is common in American cities, many of which expanded only after the invention of the car. Grids are much rarer in European and Asian cities, however, where road layouts are often unstructured and chaotic. Just how virtual traffic lights might cope with these isn’t clear.

Nevertheless, automation is coming. Many cars already have significant levels of driving automation. The next step is obviously coordination where there is likely to be a significant benefit. Virtual traffic lights are likely to be just one part of this trend. At least they should be safe from gas leaks.

Ref: arxiv.org/abs/1807.01633 : Virtual Traffic Lights: System Design and Implementation

“C” as Part of a Mechanical Engineering Curriculum

Most engineering programs expect undergraduates to take computer programming, but requirements vary widely. My institution, the University of California, Davis, requires electrical engineering students to take four programming courses, but mechanical engineering students take just one course. Which language should students learn? As more mechanical devices add electronic controls, the choice of language becomes more critical.

In 1998, Matlab replaced the more traditional FORTRAN in the required programming courses for UC Davis electrical engineers, and in the mechanical engineering curriculum as well. But after four years, our students’ programming skills had declined compared with those of students who had taken FORTRAN. In one project, the design of a robot for gathering samples on Mars, only students proficient in the C programming language could program the specified Atmel 8-bit microcontroller. Having concluded that Matlab alone was insufficient for serious programming, we redesigned our curriculum in 2003 to combine C with an introduction to Matlab.

Why C?

An introductory programming course should use a non-proprietary programming language that adheres to an international standard. A standardized language is stable, and its evolution is supported and maintained by industry and overseen by technical standards committees and other stakeholders. As a language, C continues to evolve but remains backward compatible: a compiler that conforms to the C99 standard will still accept programs written in C89. Matlab, by contrast, is a proprietary mathematical programming language, which makes collaboration with individuals not running Matlab difficult. C has arguably become the most common programming language, both in engineering and elsewhere. More than 90 percent of desktop computer programs, from operating systems to word processors, are written in C or its relative, C++. C runs on all platforms, and most other languages may be translated into C. On the Programming Language Popularity website, C tops the list, while C++ is fourth; FORTRAN is No. 21, and Matlab is nowhere to be seen.

C is especially useful for mechanical engineers because it is the language of choice for hardware interfaces, and commonly used for data acquisition and real-time robotic control. C is also the most widely used language for programming embedded processors: of the 9 billion microprocessors manufactured in 2005, 8.8 billion were embedded. Despite a somewhat steep learning curve, students of C gain valuable knowledge of data types, compiling, linking, and optimization, and receive a solid foundation for acquiring advanced programming skills. Once students know C, they can learn other languages more easily, particularly those that borrow heavily from C. Users can either compile or interpret a C program. C interpreters let students execute a single line without compilation, thus providing immediate feedback. Some C interpreters also contain graphical plotting and advanced numerical computing capabilities typically found in mathematical programming languages.
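
As a flavour of that hardware-facing work, the sketch below shows the register-style bit manipulation for which C is the natural choice. On real hardware, GPIO_ODR would be a fixed memory-mapped address; here it is an ordinary variable so the program runs anywhere, and the register name is only illustrative.

    /* Switch individual output pins by setting or clearing register bits. */
    #include <stdint.h>
    #include <stdio.h>

    static volatile uint32_t gpio_odr = 0;      /* stand-in output register */
    #define GPIO_ODR gpio_odr

    static void pin_high(unsigned pin) { GPIO_ODR |=  (1u << pin); }
    static void pin_low(unsigned pin)  { GPIO_ODR &= ~(1u << pin); }

    int main(void) {
        pin_high(3);                            /* e.g., enable a motor driver */
        printf("register = 0x%08X\n", (unsigned)GPIO_ODR);   /* 0x00000008 */
        pin_low(3);
        printf("register = 0x%08X\n", (unsigned)GPIO_ODR);   /* 0x00000000 */
        return 0;
    }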

Teaching C in Context

Just as learning foreign languages helps students understand their native tongue, learning C alongside other languages sheds light on the fundamentals of computer programming. For example, FORTRAN, which dates back to the 1950s, remains one of the primary professional programming languages, especially for such computationally intensive applications as computational fluid dynamics. FORTRAN is therefore one of the best candidates for mechanical engineering students to compare with C. C99, ratified in 1999, includes features that enable C programs to be optimized as efficiently as equivalent FORTRAN programs. C99 also supports complex numbers and variable-length arrays, which are useful in engineering and science.
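
As a small illustration of those two C99 features, the following program fills a variable-length array with the n-th roots of unity using complex arithmetic (compile with a C99 compiler and link with -lm).

    /* C99: complex numbers and a variable-length array sized at run time. */
    #include <stdio.h>
    #include <math.h>
    #include <complex.h>

    int main(void) {
        int n = 4;                        /* in general, known only at run time */
        double complex z[n];              /* C99 variable-length array          */
        for (int k = 0; k < n; k++)       /* the n-th roots of unity            */
            z[k] = cexp(2.0 * acos(-1.0) * I * k / n);
        for (int k = 0; k < n; k++)
            printf("z[%d] = %+.3f %+.3fi\n", k, creal(z[k]), cimag(z[k]));
        return 0;
    }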
 
An introductory programming course should focus on problem-solving. Our course, which runs for one academic quarter (10 weeks), must cover a lot of ground: we teach students both C and Matlab and, given the time constraints, cover FORTRAN only through handouts, as a second language for comparison. Their solid foundation in C helps our students learn Matlab quickly. We demonstrate the strengths and some unique features of Matlab by having students use it to re-solve many of the same problems they tackled earlier while learning C.

With a solid foundation in C, mechanical engineering students are well prepared for today’s projects, which increasingly integrate mechanical hardware with control software. Students acquire the foundation to learn more advanced mathematical programming languages and to take advantage of new and emerging computing paradigms.

Adapted from “C for the Course” by Harry H. Cheng, University of California, Davis, for Mechanical Engineering, September 2009

Best Programming Language for Robotics

In this post, we’ll look at the top 10 most popular programming languages used in robotics. We’ll discuss their strengths and weaknesses, as well as reasons for and against using them.

Which language should you learn? It is actually a very reasonable question. After all, what’s the point of investing a lot of time and effort in learning a new programming language if it turns out you’re never going to use it? If you are a new roboticist, you want to learn the programming languages that are actually going to be useful for your career.

Why “It Depends” is a Useless Answer

Unfortunately, you will never get a simple answer if you ask “What’s the best programming language for robotics?” to a whole roomful of robotics professionals (or on forums like Stack Overflow, Quora, Trossen, Reddit, or ResearchGate). Electronics engineers will give a different answer from industrial robotic technicians. Computer vision programmers will give a different answer from cognitive roboticists. And everyone will disagree about what is “the best programming language”. In the end, the answer most people agree on is “it depends.” This is a pretty useless answer for the new roboticist who is trying to decide which language to learn first, even if it is the most realistic one, because it does depend on what type of application you want to develop and what system you are using.

Which Programming Language Should I Learn First?

It’s probably better to ask: which programming language should you start learning first? You will still get differing opinions, but a lot of roboticists can agree on the key languages. The most important thing for roboticists is to develop “The Programming Mindset” rather than to be proficient in one specific language. In many ways, it doesn’t really matter which programming language you learn first. Each language you learn develops your proficiency with the programming mindset and makes it easier to learn any new language whenever it’s required.

Top 10 Popular Programming Languages in Robotics

There are over 1,500 programming languages in the world, far too many to learn them all. Here are the ten most popular programming languages in robotics at the moment. If your favorite language isn’t on the list, please tell everyone about it in the comments! Each language has different advantages for robotics, and I have listed them in rough order of importance, from least to most valuable.

10. BASIC / Pascal

BASIC and Pascal were two of the first programming languages that I ever learned, but that’s not why I’ve included them here. They are the basis for several of the industrial robot languages described below. BASIC was designed for beginners (the name stands for Beginner’s All-purpose Symbolic Instruction Code), which makes it a pretty simple language to start with. Pascal was designed to encourage good programming practices and also introduces constructs like pointers, which make it a good “stepping stone” from BASIC to a more involved language. These days, both languages are a bit too outdated for “everyday use”. However, it can be useful to learn them if you’re going to be doing a lot of low-level coding or want to become familiar with other industrial robot languages.

9. Industrial Robot Languages

Almost every robot manufacturer has developed their own proprietary robot programming language, which has long been one of the problems in industrial robotics. You can become familiar with several of them by learning Pascal. However, you are still going to have to learn a new language every time you start using a new robot. ABB has its RAPID programming language. Kuka has KRL (Kuka Robot Language). Comau uses PDL2, Yaskawa uses INFORM, and Kawasaki uses AS. Fanuc robots use Karel, Stäubli robots use VAL3, and Universal Robots use UR Script. In recent years, programming options like ROS Industrial have started to provide more standardized options for programmers. However, if you are a technician, you are still more likely to have to use the manufacturer’s language.

8. LISP

LISP is the world’s second oldest programming language (FORTRAN is older, but only by one year). It is not as widely used as many of the other programming languages on this list; however, it is still quite important within Artificial Intelligence programming. Parts of ROS are written in LISP, although you don’t need to know it to use ROS.

7. Hardware Description Languages (HDLs)

Hardware Description Languages are basically a programming way of describing electronics. These languages are quite familiar to some roboticists, because they are used to program Field Programmable Gate Arrays (FPGAs). FPGAs allow you to develop electronic hardware without having to actually produce a silicon chip, which makes them a quicker and easier option for some development. If you don’t prototype electronics, you may never use HDLs. Even so, it is important to know that they exist, as they are quite different from other programming languages. For one thing, all operations are carried out in parallel, rather than sequentially as with processor-based languages.

6. Assembly

Assembly allows you to program at “the level of ones and zeros”. This is programming at the lowest level (more or less). In the recent past, most low level electronics required programming in Assembly. With the rise of Arduino and other such microcontrollers, you can now program easily at this level using C/C++, which means that Assembly is probably going to become less necessary for most roboticists.

5. MATLAB

MATLAB and its open-source relatives, such as Octave, are very popular with some robotic engineers for analyzing data and developing control systems. There is also a very popular Robotics Toolbox for MATLAB. I know people who have developed entire robotics systems using MATLAB alone. If you want to analyze data, produce advanced graphs, or implement control systems, you will probably want to learn MATLAB.

4. C#/.NET

C# is a proprietary programming language provided by Microsoft. I include C#/.NET here largely because of the Microsoft Robotics Developer Studio, which uses it as its primary language. If you are going to use this system, you’re probably going to have to use C#. However, learning C/C++ first might be a good option for long-term development of your coding skills.

3. Java

As an electronics engineer, I am always surprised that some computer science degrees teach Java to students as their first programming language. Java “hides” the underlying memory functionality from the programmer, which makes it easier to program than, say, C, but it also means that you have less understanding of what your code is actually doing. If you come to robotics from a computer science background (and many people do, especially in research), you will probably already have learned Java. Like C# and MATLAB, Java is an interpretive language: rather than being compiled directly into machine code, the instructions are interpreted (or just-in-time compiled) by the Java Virtual Machine at runtime. The theory behind Java is that the same code can run on many different machines, thanks to the Java Virtual Machine. In practice, this doesn’t always work out and can sometimes cause code to run slowly. However, Java is quite popular in some parts of robotics, so you might need it.

2. Python

There has been a huge resurgence of Python in recent years, especially in robotics. One of the reasons for this is probably that Python and C++ are the two main programming languages found in ROS. Like Java, Python is an interpretive language. Unlike Java, the prime focus of the language is ease of use, and many people agree that it achieves this very well. Python dispenses with a lot of the things that usually take up time in programming, such as defining and casting variable types. There are also a huge number of free libraries for it, which means you don’t have to “reinvent the wheel” when you need to implement basic functionality. And since Python allows simple bindings with C/C++ code, performance-heavy parts of a program can be implemented in those languages to avoid performance loss. As more electronics start to support Python “out of the box” (as with the Raspberry Pi), we are likely to see a lot more Python in robotics.
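
To illustrate the C/C++ binding route mentioned above, here is a minimal CPython extension sketch. The module name fastmath and its single function are made-up examples, not part of any existing library; built as a shared library, the module can be loaded from Python with import fastmath and called as fastmath.fast_sum([1.0, 2.0, 3.0]).

    /* A tiny CPython extension: sum a sequence of numbers in C for speed. */
    #include <Python.h>

    static PyObject *fast_sum(PyObject *self, PyObject *args) {
        PyObject *seq;
        if (!PyArg_ParseTuple(args, "O", &seq))
            return NULL;
        Py_ssize_t n = PySequence_Length(seq);
        if (n < 0)
            return NULL;                      /* not a sequence */
        double total = 0.0;
        for (Py_ssize_t i = 0; i < n; i++) {
            PyObject *item = PySequence_GetItem(seq, i);
            if (item == NULL)
                return NULL;
            total += PyFloat_AsDouble(item);  /* assumes numeric items */
            Py_DECREF(item);
        }
        return PyFloat_FromDouble(total);
    }

    static PyMethodDef methods[] = {
        { "fast_sum", fast_sum, METH_VARARGS, "Sum a sequence of numbers." },
        { NULL, NULL, 0, NULL }
    };

    static struct PyModuleDef fastmath_module = {
        PyModuleDef_HEAD_INIT, "fastmath", NULL, -1, methods
    };

    PyMODINIT_FUNC PyInit_fastmath(void) {
        return PyModule_Create(&fastmath_module);
    }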

1. C/C++

Finally, we reach the number 1 programming language in robotics! Many people agree that C and C++ are a good starting point for new roboticists. Why? Because a lot of hardware libraries use these languages. They allow interaction with low-level hardware, they allow for real-time performance, and they are very mature programming languages. These days, you’ll probably use C++ more than C, because C++ has much more functionality; it is basically an extension of C. It can be useful to learn at least a little bit of C first, so that you can recognize it when you find a hardware library written in C. C/C++ are not as simple to use as, say, Python or MATLAB, and implementing the same functionality in C can take many more lines of code. However, as robotics is very dependent on real-time performance, C and C++ are probably the closest things that we roboticists have to “a standard language”.
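
Since the case for C/C++ rests on real-time control, a small example helps. Below is a minimal PID controller in C driving a toy one-dimensional plant; the gains, time step, and set-point are arbitrary example values, not tuned for any real robot.

    /* Minimal PID controller of the sort used in robot control loops. */
    #include <stdio.h>

    typedef struct {
        double kp, ki, kd;      /* controller gains           */
        double integral;        /* accumulated error          */
        double prev_error;      /* error at the previous step */
    } Pid;

    static double pid_step(Pid *c, double setpoint, double measured, double dt) {
        double error = setpoint - measured;
        c->integral += error * dt;
        double derivative = (error - c->prev_error) / dt;
        c->prev_error = error;
        return c->kp * error + c->ki * c->integral + c->kd * derivative;
    }

    int main(void) {
        Pid ctl = { 2.0, 0.5, 0.1, 0.0, 0.0 };
        double position = 0.0, velocity = 0.0;   /* toy unit-mass plant */

        for (int i = 0; i < 50; i++) {           /* 50 steps of 10 ms   */
            double force = pid_step(&ctl, 1.0, position, 0.01);
            velocity += force * 0.01;
            position += velocity * 0.01;
        }
        printf("position after 0.5 s: %.3f (moving toward target 1.0)\n",
               position);
        return 0;
    }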

Source: Alex Owen-Hill, blog.robotiq.com

 

Google’s RankBrain Algorithm

RankBrain is a machine-learning-based artificial intelligence system, the use of which was confirmed by Google on 26 October 2015.[1] It helps Google to process search results and provide more relevant results for users.[2] In a 2015 interview, Google commented that RankBrain was the third most important factor in the ranking algorithm, along with links and content.[2] As of 2015, “RankBrain was used for less than 15% of queries.”[3] Tests showed that RankBrain produced results well within 10% of those of Google’s search engineer team.[4]

If RankBrain sees a word or phrase it isn’t familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries or keywords.[5] Search queries are sorted into word vectors, also known as “distributed representations,” which are close to each other in terms of linguistic similarity. RankBrain attempts to map a query onto the words (entities) or clusters of words that have the best chance of matching it. It therefore attempts to guess what people mean and records the results, adapting future results to provide better user satisfaction.[6]
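
The “closeness” of word vectors is usually measured with cosine similarity: two words whose vectors point in nearly the same direction are treated as close in meaning. The sketch below shows the computation on made-up three-dimensional vectors; real distributed representations have hundreds of dimensions, and the values here are illustrative only.

    /* Cosine similarity between two word vectors. */
    #include <stdio.h>
    #include <math.h>

    static double cosine_similarity(const double *a, const double *b, int n) {
        double dot = 0.0, na = 0.0, nb = 0.0;
        for (int i = 0; i < n; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (sqrt(na) * sqrt(nb));
    }

    int main(void) {
        double car[]        = { 0.9, 0.1, 0.3 };  /* toy embedding for "car" */
        double automobile[] = { 0.8, 0.2, 0.3 };  /* nearly parallel vector  */
        double banana[]     = { 0.1, 0.9, 0.2 };  /* points elsewhere        */

        printf("car ~ automobile: %.3f\n", cosine_similarity(car, automobile, 3));
        printf("car ~ banana:     %.3f\n", cosine_similarity(car, banana, 3));
        return 0;
    }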

There are over 200 different ranking factors[7] which make up the ranking algorithm, whose exact functions in the Google algorithm are not fully disclosed. Behind content and links,[8] RankBrain is considered the third most important signal in determining ranking on Google search.[9][3] Google has not admitted to any order of importance, only that RankBrain is one of the three most important of its search ranking signals.[10] When offline, RankBrain is given batches of past searches and learns by matching search results. Studies have shown that RankBrain interprets the relationships between words better. This can include the use of stop words in a search query (“the,” “and,” “without,” etc.), words that were historically ignored by Google but are sometimes of major importance to fully understanding the meaning or intent behind a person’s search query. RankBrain is also able to parse patterns between searches that are seemingly unconnected, to understand how those searches are similar to each other.[11] Once RankBrain’s results are verified by Google’s team, the system is updated and goes live again.[12] In August 2013, Google published a post about how it uses AI to learn searcher intention.[13]

Google has stated that it uses tensor processing unit (TPU) ASICs for processing RankBrain requests.[14]

Impact on digital marketing

RankBrain has allowed Google to speed up the algorithmic testing it does for keyword categories in an attempt to choose the best content for any particular keyword search. This means that old methods of gaming the rankings with false signals are becoming less and less effective, and the highest-quality content, from a human perspective, is being ranked higher in Google.[15]

References
  1. “Google RankBrain”. Search Engine Land. https://searchengineland.com/library/google/google-rankbrain.
  2. Clark, Jack. “Google Turning Its Lucrative Web Search Over to AI Machines”. Bloomberg Business. Bloomberg. Retrieved 28 October 2015.
  3. “Google uses RankBrain for every search, impacts rankings of “lots” of them”. Search Engine Land. 2016-06-23. Retrieved 2017-04-14.
  4. “Google RankBrain 權威指南 | Whoops SEO”. seo.whoops.com.tw (in Chinese). Retrieved 2018-01-15.
  5. “Google Turning Its Lucrative Web Search Over to AI Machines”. Surgo Group News. Retrieved 5 November 2015.
  6. Capala, Matthew (2016-09-02). “Machine learning just got more human with Google’s RankBrain”. The Next Web. Retrieved 2017-01-19.
  7. “Google’s 200 Ranking Factors: The Complete List”. Backlinko (Brian Dean). 2013-04-18. Retrieved 2016-04-12.
  8. “Rankbrain 2017”. Pay-Website (Edith). 2017-05-12. Retrieved 2017-08-21.
  9. “Now we know: Here are Google’s top 3 search ranking factors”. Search Engine Land. 2016-03-24. Retrieved 2017-04-14.
  10. “Google Releases the Top 3 Ranking Factors | SEJ”. Search Engine Journal. 2016-03-25. Retrieved 2017-04-14.
  11. “The real impact of Google’s RankBrain on search traffic”. The Next Web. Retrieved 2017-05-22.
  12. Sullivan, Danny. “FAQ: All About The New Google RankBrain Algorithm”. Search Engine Land. Retrieved 28 October 2015.
  13. “Google RankBrain 權威指南 | Whoops SEO”. seo.whoops.com.tw (in Chinese). Retrieved 2018-04-26.
  14. “Google’s Tensor Processing Unit could advance Moore’s Law 7 years into the future”. PCWorld. Retrieved 2017-01-19.
  15. “NonTechie RankBrain Guide [Infographic]”. logicbasedmarketing.com. Retrieved 2018-02-16.