How to Fly a Drone With Your Body

For real and simulated drones, piloting with torso movements outperforms a joystick every time—and it’s easier to learn – Megan Scudellari

A model demonstrates a body-machine interface for controlling a simulated drone.

Using only the movements of one’s torso to pilot a drone is more intuitive—and more precise—than using a joystick, according to new research from engineers at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.

The technique, tested in virtual reality and with real drones, requires less mental focus from the pilot and frees up their head and limbs. So, for instance, a drone operator at a natural disaster site or on a search and rescue mission could concentrate on looking around and analyzing visual information rather than controlling the flight path of the drone.

The team also found that torso control is easier to learn and more intuitive than a traditional joystick for most people. “It’s not that a joystick does not work—pilots for drone racing do amazing things with their joysticks—but we’ve noticed that for some people, it can be difficult to learn and you have to be really focused while you’re doing it,” says study author Jenifer Miehlbradt, a graduate student at EPFL.

In a series of experiments described this week in the journal PNAS, a team led by Miehlbradt and EPFL neuro-engineer Silvestro Micera set out to come up with an alternative, easier way to pilot a drone.

Infrared markers on a volunteer

First, they stuck over a dozen infrared markers on the upper bodies of 17 volunteers and asked them to follow a virtual drone through a simulated landscape in virtual reality. “We asked them to follow the movements of the drone with their body in a way that felt natural to them,” says Miehlbradt. One participant opted to fly the drone like Superman—with one arm extended above his head—and another chose to “swim” through the air, but everyone else used either their torso alone or their torso and arms to glide like a bird.

Next, in a first-person virtual reality simulation, 39 volunteers were asked to follow a path of clouds as closely as possible. Across the board, torso control was easier to learn and more precise than torso-and-arms or joystick control. Plus, it actually feels like flying, says Miehlbradt. Finally, it was time to try out the torso technique with real drones. After nine minutes of training in virtual reality, participants were given control of a quadcopter with first-person-view (FPV) video feedback and allowed to fly freely for two minutes to get used to its dynamics. “At first, it’s a bit scary,” says Miehlbradt. “It takes a minute to get used to this feeling of ‘I’m over there, with this object that is moving.’ It’s extremely immersive.”
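
To make the idea concrete, here is a minimal sketch of how torso lean angles could be turned into drone velocity commands. The function, gains, and limits are assumptions for illustration, not the EPFL team’s actual mapping, which was derived from the volunteers’ motion-capture data.

    import numpy as np

    def torso_to_drone_command(torso_pitch_deg, torso_roll_deg,
                               gain_pitch=0.05, gain_roll=0.02,
                               max_speed=2.0, max_yaw_rate=1.0):
        # Leaning forward/back (pitch) sets forward speed; leaning
        # sideways (roll) sets the turning rate. All values here are
        # illustrative, not parameters from the study.
        forward_speed = float(np.clip(gain_pitch * torso_pitch_deg, 0.0, max_speed))
        yaw_rate = float(np.clip(gain_roll * torso_roll_deg, -max_yaw_rate, max_yaw_rate))
        return forward_speed, yaw_rate

    # Example: a 20-degree forward lean with a slight lean to the right.
    print(torso_to_drone_command(20.0, 5.0))  # -> (1.0, 0.1)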

In their final test, volunteers were asked to steer the drone through six gates arranged along a figure-eight trajectory. With the aforementioned minimal training, they did well, steering the quadcopter through the gates without collisions 88 percent of the time. These initial experiments were done with reflective markers on the body and a motion-capture system involving cameras set up around the subject. While a tried-and-true method for motion analysis, such a system is too bulky and expensive for widespread, commercial use.

A second team at EPFL has since built the “FlyJacket”—a soft jacket with a motion-sensing device on the back, an arm-support system to prevent fatigue, and VR goggles for simulation. This portable system could eventually be applicable to consumer drones or other types of robots. In the future, the team’s screening method could also be used to identify common, intuitive control patterns for robots of various shapes, says Miehlbradt. Maybe even a flying robot that can transform its shape in mid-air?

And, yes, we know you’re thinking it: This type of body control could—and very likely will—be applied to virtual reality and other types of gaming. During development, the team often set up demonstrations on campus to let people try out flying the drones. The response was unequivocal: “They love it,” says Miehlbradt with a laugh. “It’s something new. It really gives you a feeling of flying…I think it could become more popular than a joystick.”

 

A team of AI algorithms just crushed humans in a complex computer game – Will Knight

Algorithms capable of collaboration and teamwork can outmanoeuvre human teams.

Five different AI algorithms have teamed up to kick human butt in Dota 2, a popular strategy computer game.

Researchers at OpenAI, a nonprofit based in California, developed the algorithmic A-team, which they call the OpenAI Five. Each algorithm uses a neural network to learn not only how to play the game, but also how to cooperate with its AI teammates. The team has started defeating amateur Dota 2 players in testing, OpenAI says.

This is an important and novel direction for AI since algorithms typically operate independently. Approaches that help algorithms cooperate with each other could prove important for commercial uses of the technology. AI algorithms could, for instance, team up to outmanoeuvre opponents in online trading or ad bidding. Collaborative algorithms might also cooperate with humans.

OpenAI previously demonstrated an algorithm capable of competing against top humans at single-player Dota 2. The latest work builds on this using similar algorithms modified to value both individual and team success. The algorithms do not communicate directly except through gameplay.
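
OpenAI has described blending individual and team success with a hyperparameter it calls “team spirit.” Here is a minimal sketch of how such a blended reward could look; the exact formula below is an assumption for illustration, not OpenAI’s published code:

    def blended_reward(own_reward, team_rewards, team_spirit=0.5):
        # team_spirit = 0.0 -> each agent is purely selfish;
        # team_spirit = 1.0 -> each agent optimizes only the team's
        # average reward.
        team_average = sum(team_rewards) / len(team_rewards)
        return (1.0 - team_spirit) * own_reward + team_spirit * team_average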

“What we’ve seen implies that coordination and collaboration can emerge very naturally out of the incentives,” says Greg Brockman, one of the founders of OpenAI, which aims to develop artificial intelligence openly and in a way that benefits humanity. He adds that the team has tried substituting a human player for one of the algorithms and found this to work very well. “He described himself as feeling very well supported,” Brockman says.

Dota 2 is a complex strategy game in which teams of five players compete to control a structure within a sprawling landscape. Players have different strengths, weaknesses, and roles, and the game involves collecting items and planning attacks, as well as engaging in real-time combat.

Pitting AI programs against computer games has become a familiar means of measuring progress. DeepMind, a subsidiary of Alphabet, famously developed a program capable of learning to play the notoriously complex and subtle board game Go with superhuman skill. A related program then taught itself from scratch to master Go and then chess simply by playing against itself.

The strategies required for Dota 2 are more defined than in chess or Go, but the game is still difficult to master. It is also challenging for a machine because it isn’t always possible to see what your opponents are up to and because teamwork is required.

The OpenAI Five learn by playing against various versions of themselves. Over time, the programs have developed strategies much like the ones humans use: figuring out ways to acquire gold by “farming” it, for instance, and adopting a particular strategic role, or “lane,” within the game.
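
A toy sketch of that self-play loop, with a stand-in “agent” instead of a real neural network (everything here is illustrative, not OpenAI’s code):

    import random

    class Agent:
        # Toy stand-in for a learning agent; the real system trains a
        # large neural network with reinforcement learning.
        def __init__(self, skill=0.0):
            self.skill = skill

        def copy(self):
            return Agent(self.skill)

        def update(self, won):
            self.skill += 0.01 if won else 0.001  # placeholder "learning"

    def play_game(a, b):
        # Hypothetical match: the higher-skilled agent wins more often.
        return random.random() < 0.5 + 0.1 * (a.skill - b.skill)

    agent, pool = Agent(), [Agent()]
    for step in range(10000):
        opponent = random.choice(pool)      # sample a past version of itself
        agent.update(play_game(agent, opponent))
        if step % 1000 == 0:
            pool.append(agent.copy())       # snapshot into the opponent pool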

AI experts say the achievement is significant. “Dota 2 is an extremely complicated game, so even beating strong amateurs is truly impressive,” says Noam Brown, a researcher at Carnegie Mellon University in Pittsburgh. “In particular, dealing with hidden information in a game as large as Dota 2 is a major challenge.”

Brown previously worked on an algorithm capable of playing poker, another imperfect-information game, with superhuman skill (see “Why poker is a big deal in AI”). If the OpenAI Five team can consistently beat humans, Brown says, that would be a major achievement in AI. However, he notes that given enough time, humans might be able to figure out weaknesses in the AI team’s playing style.

Other games could also push AI further, Brown says. “The next major challenge would be games involving communication, like Diplomacy or Settlers of Catan, where balancing between cooperation and competition is vital to success.”

Best Programming Language for Robotics

In this post, we’ll look at the top 10 most popular programming languages used in robotics. We’ll discuss their strengths and weaknesses, as well as reasons for and against using them.

“Which programming language should I learn?” is actually a very reasonable question. After all, what’s the point of investing a lot of time and effort in learning a new programming language if it turns out you’re never going to use it? If you are a new roboticist, you want to learn the programming languages that are actually going to be useful for your career.

Why “It Depends” is a Useless Answer

Unfortunately, you will never get a simple answer if you ask “What’s the best programming language for robotics?” to a whole roomful of robotics professionals (or on forums like Stack Overflow, Quora, Trossen, Reddit, or ResearchGate). Electronic engineers will give a different answer from industrial robotic technicians. Computer vision programmers will give a different answer than cognitive roboticists. And everyone would disagree as to what is “the best programming language.” In the end, the answer most people would agree with is “it depends.” This is a pretty useless answer for the new roboticist trying to decide which language to learn first, even if it is the most realistic one, because the choice does depend on what type of application you want to develop and what system you are using.

Which Programming Language Should I Learn First?

It’s probably better to ask: which programming language should you start learning first? You will still get differing opinions, but a lot of roboticists can agree on the key languages. The most important thing for a roboticist is to develop “the programming mindset” rather than proficiency in one specific language. In many ways, it doesn’t really matter which programming language you learn first. Each language you learn develops your proficiency with the programming mindset and makes it easier to learn any new language whenever it’s required.

Top 10 Popular Programming Languages in Robotics

There are over 1,500 programming languages in the world, which is far too many to learn. Here are the ten most popular programming languages in robotics at the moment. If your favorite language isn’t on the list, please tell everyone about it in the comments! Each language has different advantages for robotics. I have ordered them only roughly by importance, from least to most valuable.

10. BASIC / Pascal

BASIC and Pascal were two of the first programming languages that I ever learned. However, that’s not why I’ve included them here. They are the basis for several of the industrial robot languages described below. BASIC was designed for beginners (it stands for Beginner’s All-Purpose Symbolic Instruction Code), which makes it a pretty simple language to start with. Pascal was designed to encourage good programming practices and also introduces constructs like pointers, which makes it a good “stepping stone” from BASIC to a more involved language. These days, both languages are a bit outdated for “everyday use.” However, it can be useful to learn them if you’re going to be doing a lot of low-level coding or you want to become familiar with other industrial robot languages.

9. Industrial Robot Languages

Almost every robot manufacturer has developed their own proprietary robot programming language, which has been one of the problems in industrial robotics. You can become familiar with several of them by learning Pascal. However, you are still going to have to learn a new language every time you start using a new robot. ABB has its RAPID programming language. Kuka has KRL (Kuka Robot Language). Comau uses PDL2, Yaskawa uses INFORM and Kawasaki uses AS. Then, Fanuc robots use Karel, Stäubli robots use VAL3 and Universal Robots use UR Script. In recent years, programming options like ROS Industrial have started to provide more standardized options for programmers. However, if you are a technician, you are still more likely to have to use the manufacturer’s language.

8. LISP

LISP is the world’s second oldest programming language (FORTRAN is older, but only by one year). It is not as widely used as many of the other programming languages on this list; however, it is still quite important within Artificial Intelligence programming. Parts of ROS are written in LISP, although you don’t need to know it to use ROS.

7. Hardware Description Languages (HDLs)

Hardware Description Languages are basically a programming way of describing electronics. These languages are quite familiar to some roboticists, because they are used to program Field Programmable Gate Arrays (FPGAs). FPGAs allow you to develop electronic hardware without having to actually produce a silicon chip, which makes them a quicker and easier option for some development. If you don’t prototype electronics, you may never use HDLs. Even so, it is important to know that they exist, as they are quite different from other programming languages. For one thing, all operations are carried out in parallel, rather than sequentially as with processor-based languages.

6. Assembly

Assembly allows you to program at “the level of ones and zeros,” which is programming at (more or less) the lowest level. In the recent past, most low-level electronics required programming in Assembly. With the rise of the Arduino and other microcontroller platforms, you can now program easily at this level using C/C++, which means that Assembly is probably going to become less necessary for most roboticists.

5. MATLAB

MATLAB, and its open-source relatives such as Octave, are very popular with some robotics engineers for analyzing data and developing control systems. There is also a very popular Robotics Toolbox for MATLAB. I know people who have developed entire robotics systems using MATLAB alone. If you want to analyze data, produce advanced graphs, or implement control systems, you will probably want to learn MATLAB.

4. C#/.NET

C# is a proprietary programming language provided by Microsoft. I include C#/.NET here largely because of the Microsoft Robotics Developer Studio, which uses it as its primary language. If you are going to use this system, you’re probably going to have to use C#. However, learning C/C++ first might be a good option for long-term development of your coding skills.

3. Java

As an electronics engineer, I am always surprised that some computer science degrees teach Java to students as their first programming language. Java “hides” the underlying memory functionality from the programmer, which makes it easier to program than, say, C, but it also means you have less understanding of what your code is actually doing. If you come to robotics from a computer science background (and many people do, especially in research), you will probably already have learned Java. Like C#, Java is not compiled directly to native machine code; it is compiled to bytecode, which the Java Virtual Machine executes at runtime (typically with just-in-time compilation). In theory, this means you can run the same code on many different machines. In practice, it doesn’t always work out and can sometimes cause code to run slowly. However, Java is quite popular in some parts of robotics, so you might need it.

2. Python

There has been a huge resurgence of Python in recent years, especially in robotics. One reason for this is probably that Python and C++ are the two main programming languages found in ROS. Like Java, Python is not compiled to native machine code ahead of time; unlike Java, the language’s prime focus is ease of use, and many people agree that it achieves this very well. Python dispenses with a lot of the things that usually take up time in programming, such as defining and casting variable types. There are also a huge number of free libraries for it, which means you don’t have to “reinvent the wheel” when you need to implement some basic functionality. And since Python allows simple bindings with C/C++ code, performance-heavy parts of a program can be implemented in those languages to avoid performance loss. As more electronics start to support Python “out of the box” (as with the Raspberry Pi), we are likely to see a lot more Python in robotics.
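
As a quick illustration of that ease of use, here is a minimal ROS 1 node written in Python with rospy; it assumes a working ROS installation and a running roscore, and the topic name is arbitrary:

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import String

    # A complete ROS publisher node in a handful of lines; the C++
    # equivalent needs considerably more boilerplate.
    rospy.init_node('talker')
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rate = rospy.Rate(1)  # publish once per second

    while not rospy.is_shutdown():
        pub.publish(String(data='hello robot'))
        rate.sleep()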

1. C/C++

Finally, we reach the number 1 programming language in robotics! Many people agree that C and C++ are a good starting point for new roboticists. Why? Because a lot of hardware libraries use these languages. They allow interaction with low-level hardware, allow for real-time performance, and are very mature programming languages. These days, you’ll probably use C++ more than C, because C++ is basically an extension of C with much more functionality. It can be useful to learn at least a little bit of C first, so that you can recognize it when you find a hardware library written in C. C and C++ are not as simple to use as, say, Python or MATLAB; it can take quite a lot longer to implement the same functionality, and it will require many more lines of code. However, because robotics is so dependent on real-time performance, C and C++ are probably the closest things that we roboticists have to “a standard language.”

Source: Alex Owen-Hill / blog.robotiq.com

 

Google’s RankBrain Algorithm

RankBrain is a machine learning-based artificial intelligence system, the use of which was confirmed by Google on 26 October 2015.[1] It helps Google process search results and provide more relevant results for users.[2] In a 2015 interview, Google commented that RankBrain was the third most important factor in the ranking algorithm, along with links and content.[2] As of 2015, “RankBrain was used for less than 15% of queries.”[3] Tests showed that RankBrain’s performance is well within 10% of that of Google’s team of search engineers.[4]

If RankBrain sees a word or phrase it isn’t familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries or keywords.[5] Search queries are sorted into word vectors, also known as “distributed representations,” which are close to each other in terms of linguistic similarity. RankBrain attempts to map a query onto words (entities) or clusters of words that have the best chance of matching it. In this way, RankBrain attempts to guess what people mean and records the results, adapting future results to provide better user satisfaction.[6]
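
To see the word-vector idea in miniature: words with similar meanings map to vectors that point in similar directions, which can be measured with cosine similarity. The three-dimensional vectors below are invented for illustration; RankBrain’s actual representations are not public and would use far more dimensions.

    import numpy as np

    def cosine_similarity(a, b):
        # 1.0 means the vectors point the same way; near 0 means unrelated.
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Toy "distributed representations" (hand-picked for illustration).
    vec = {
        "car":        np.array([0.90, 0.10, 0.00]),
        "automobile": np.array([0.85, 0.15, 0.05]),
        "banana":     np.array([0.00, 0.20, 0.90]),
    }

    print(cosine_similarity(vec["car"], vec["automobile"]))  # high, ~0.996
    print(cosine_similarity(vec["car"], vec["banana"]))      # low, ~0.024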

There are over 200 different ranking factors[7] that make up the ranking algorithm, whose exact functions in the Google algorithm are not fully disclosed. Behind content and links,[8] RankBrain is considered the third most important signal in determining ranking on Google search.[9][3] Google has not admitted to any order of importance, only that RankBrain is one of the three most important of its search ranking signals.[10] When offline, RankBrain is given batches of past searches and learns by matching search results. Studies have shown how RankBrain better interprets the relationships between words. This can include the use of stop words in a search query (“the,” “and,” “without,” etc.), words that Google historically ignored but that are sometimes of major importance to fully understanding the meaning or intent behind a person’s search query. RankBrain is also able to parse patterns between searches that are seemingly unconnected, to understand how those searches are similar to each other.[11] Once RankBrain’s results are verified by Google’s team, the system is updated and goes live again.[12] In August 2013, Google published a post about how it uses AI to learn searcher intention.[13]

Google has stated that it uses tensor processing unit (TPU) ASICs for processing RankBrain requests.[14]

Impact on digital marketing

RankBrain has allowed Google to speed up the algorithmic testing it does for keyword categories in an attempt to choose the best content for any particular keyword search. This means that old methods of gaming the rankings with false signals are becoming less and less effective, and the highest-quality content from a human perspective is being ranked higher in Google.[15]

References
  1. https://searchengineland.com/library/google/google-rankbrain
  2. Clark, Jack. “Google Turning Its Lucrative Web Search Over to AI Machines”. Bloomberg Business. Retrieved 28 October 2015.
  3. “Google uses RankBrain for every search, impacts rankings of ‘lots’ of them”. Search Engine Land. 2016-06-23. Retrieved 2017-04-14.
  4. “Google RankBrain 權威指南 [The Authoritative Guide to Google RankBrain]”. seo.whoops.com.tw (in Chinese). Retrieved 2018-01-15.
  5. “Google Turning Its Lucrative Web Search Over to AI Machines”. Surgo Group News. Retrieved 5 November 2015.
  6. Capala, Matthew (2016-09-02). “Machine learning just got more human with Google’s RankBrain”. The Next Web. Retrieved 2017-01-19.
  7. “Google’s 200 Ranking Factors: The Complete List”. Backlinko (Brian Dean). 2013-04-18. Retrieved 2016-04-12.
  8. “Rankbrain 2017”. Pay-Website (Edith). 2017-05-12. Retrieved 2017-08-21.
  9. “Now we know: Here are Google’s top 3 search ranking factors”. Search Engine Land. 2016-03-24. Retrieved 2017-04-14.
  10. “Google Releases the Top 3 Ranking Factors”. Search Engine Journal. 2016-03-25. Retrieved 2017-04-14.
  11. “The real impact of Google’s RankBrain on search traffic”. The Next Web. Retrieved 2017-05-22.
  12. Sullivan, Danny. “FAQ: All About The New Google RankBrain Algorithm”. Search Engine Land. Retrieved 28 October 2015.
  13. “Google RankBrain 權威指南 [The Authoritative Guide to Google RankBrain]”. seo.whoops.com.tw (in Chinese). Retrieved 2018-04-26.
  14. “Google’s Tensor Processing Unit could advance Moore’s Law 7 years into the future”. PCWorld. Retrieved 2017-01-19.
  15. “NonTechie RankBrain Guide [Infographic]”. logicbasedmarketing.com. Retrieved 2018-02-16.

AI in CNC machining

Artificial intelligence has found its way into almost all industries. When combined with computer numerical control (CNC) machining, AI can help remove the manual labor in redundant tasks. Generally, the software’s algorithm can be designed so that, after receiving feedback in a specific situation, a decision can be actualized with or without human consultation. In the case of repetitive tasks requiring no consultation, the software can execute the required steps, eliminating manual labor.

For example, with precision CNC machining, you can design a program to shut off a car if it is left unattended for a few minutes. If you leave your car running in the parking lot or your garage, an embedded AI routine might send you a message to alert you that the car is still on. If there is no response on the owner’s part, the algorithm will shut the engine off after eight minutes.
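
A minimal sketch of the decision loop just described; the car object and all of its methods are hypothetical stand-ins for a real vehicle interface:

    import time

    ALERT_AFTER_S = 120    # alert the owner after 2 idle minutes (illustrative)
    SHUTOFF_AFTER_S = 480  # shut the engine off after 8 minutes, as above

    def monitor_idle_car(car):
        # `car` is a hypothetical object; a real system would talk to
        # the vehicle's controller and the owner's phone.
        idle_since = time.time()
        alerted = False
        while car.engine_on() and car.unattended():
            if car.owner_responded():
                return  # the owner took over; stop monitoring
            idle = time.time() - idle_since
            if idle >= SHUTOFF_AFTER_S:
                car.shut_off_engine()
                return
            if idle >= ALERT_AFTER_S and not alerted:
                car.send_alert("Your car is still running.")
                alerted = True
            time.sleep(5)  # re-check every few seconds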

Similar algorithms can be used in other industries as well: in smartphones, for example, they can provide situational alerts, or in devices that help you understand the dangers surrounding you.

So, now that you know about the different industries that can use artificial intelligence, let’s look at how AI can help you speed up your work. Here are some things artificial intelligence can do better than human beings.

Search the Internet

Well, most of us have heard about Google’s RankBrain algorithm. Did you know that it was developed using artificial intelligence? It is, in fact, a machine learning-based artificial intelligence that handles search queries for Google. Because RankBrain understands words and phrases, it can predict the top-ranking pages more effectively than its human counterparts. Although it is still being tweaked, the base algorithm for Google’s RankBrain remains unchanged.

Work in Inhospitable Conditions

Well, robots don’t have feelings. This is why they can operate in places without oxygen, where no human being could survive. That is also why artificial intelligence is essential for surveillance in deep oceanic trenches, in radioactive locations, and even in outer space. The main problem with AI in CNC machining is that it waits for human intervention in certain crucial decisions. This makes the process not only time-consuming but also, to some extent, useless when the AI needs to function independently.

If you own an iPhone, or even a Windows 10 phone or PC, you’ll have come across Siri or Cortana. Both are forms of artificial intelligence that help you achieve what you need. In fact, if you use Google’s voice assistant on any of your Android devices, you’ll see how it responds to your commands, even opening your task list and checking off tasks as you direct. While this might seem freaky and surreal, with a proper set of coded instructions, an artificial intelligence can achieve the result you’re looking for. All it needs is a decision-making loop.

Artificial Intelligence in Medical Science

One of the biggest achievements of artificial intelligence is perhaps its progress in the medical field. Even if a physician has plenty of exposure to patients, proper and accurate diagnosis can be a problem. With artificial intelligence in place, however, the process can become not only smoother but also more accurate.

On average, a physician spends 160 hours per month keeping track of the latest medical breakthroughs. Remembering those breakthroughs along with a patient’s latest symptoms, and applying both in everyday diagnosis, can become problematic for the human brain. By comparison, IBM Watson can make a proper diagnosis in a fraction of a second. Additionally, the AI’s reported accuracy rate for diagnosing lung cancer is 90%, which is quite high compared to the reported accuracy rate of veteran human physicians (50%).

At the end of the day, even though AI has found its way into different industries, it cannot totally replace human intelligence because it lacks general reasoning. While a robot or an AI may be very good at each of the tasks outlined above individually, the same might not be true when it comes to completing a different set of tasks that it has not been programmed for. This makes it tough for AI to replace human intelligence, unless more research is done.