Your fingerprints are all over the screens you design. – Fabricio Teixeira


Technology is not neutral. It is not impartial.

Who we are reflects in what we build. When you navigate a physical space, you are following the plan of the architects and urbanists who designed it in the first place. Of course, how you navigate the space is unique to you — like the way you walk, or the order in which you decide to visit each area — but your overall path follows a larger plan envisioned by the creators of that space.

The same applies to digital channels.

Sure, as a user, you have your specific digital footprint, your specific behaviour.

A lot of designers seem to be talking about user experience (UX) these days. We’re supposed to delight our users, even provide them with magic, so that they love our websites, apps and start-ups. User experience is a very blurry concept, and consequently many people use the term incorrectly. Furthermore, many designers seem to have a firm (and often unrealistic) belief in how they can craft the user experience of their product. However, UX depends not only on how something is designed but also on other aspects. In this article, I will try to clarify why UX cannot be designed.

Heterogeneous Interpretations of UX

I recently visited the elegant website of a design agency. The website looked great, and the agency has been showcased several times. I am sure it delivers high-quality products. But when it presents its UX work, the agency talks about UX as if it were equal to information architecture (IA): sitemaps, wireframes and all that. This may not be fundamentally wrong, but it narrows UX to something less than what it really is.

This perception might not be representative of our industry, but it illustrates that UX is perceived in different ways and that it is sometimes used as a buzzword for usability (for more, see Hans-Christian Jetter and Jens Gerken’s article “A simplified model of user experience for practical application”). But UX is not only about human-computer interaction (HCI), usability or IA, although usability is probably the single most important factor shaping UX.

Some research indicates that perceptions of UX differ. Still, everyone tends to agree that UX takes a broader approach to communication between the computer and the human than traditional HCI does (see Effie Lai-Chong Law et al.’s article “Understanding, scoping and defining user experience: a survey approach”). Whereas HCI is concerned with task solution, final goals and achievements, UX goes beyond these. UX takes other aspects into consideration as well, such as emotional, hedonic, aesthetic, affective and experiential variables. Usability, in general, can be measured, but many of the other variables integral to UX are not as easy to measure.

But your overall experience follows the plan envisioned by the designers of that app or service. The truth is: the values of the creators are deeply ingrained in everything you interact with. The feeds you scroll through. The buttons you click. The imagery you see.

  • Someone decided the main Facebook experience was going to be based on an infinite feed, to keep you engaged as much as possible. No one at Facebook has created cards that show up every dozen posts reminding you to breathe, to go outside, or to call a friend.
  • Someone decided that on Tinder you would swipe away the people you don’t want to engage with further. Can you swipe away unwanted people in real life?
  • Someone decided Alexa would take your orders without you having to say “please”, or “thanks”. Will that possibly affect the way you treat waiters (and other service providers) in a few years?

Choices, made by other people, can have a profound impact on how you experience a certain product.

That’s not to say a single designer at Facebook was responsible for the decision to create its endless, addictive newsfeed. It was not part of an evil plan created by evil people. But your environment’s fingerprints are also all over you. When companies reward employees who come up with ideas that increase user engagement, they are setting the tone for what’s considered good vs. bad in their workplace. Features that increase the time users spend with the product lead to recognition, salary increases and promotions.

As designers, we face such decisions many times over the course of our careers. The decision to add an extra button to try to get higher conversion rates. The decision to soften a button label so users don’t feel like they are making a big commitment. The decision to auto-select a checkbox, to reduce the size of the “unsubscribe” button, to design a pop-up. Our fingerprints are all over the screens we design, and the screens we design are our legacy in the world. Be mindful.

Benefits & Risks of Artificial Intelligence

Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.

– Max Tegmark, President of the Future of Life Institute


From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons. Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition, only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.


In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.

There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls.


Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts consider two scenarios most likely:

  1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI but grows as levels of AI intelligence and autonomy increase.
  2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.
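The airport example above comes down to a planner optimizing exactly the objective it is given while ignoring costs the user cares about but never stated. A minimal sketch of that failure mode (all route names and numbers here are invented for illustration, not from any real system):

```python
# Toy sketch of objective misspecification. The planner is "competent":
# it perfectly minimizes whatever objective function it receives.
routes = [
    {"name": "highway, legal speed", "minutes": 35, "discomfort": 0},
    {"name": "highway, reckless speed", "minutes": 20, "discomfort": 9},
    {"name": "back roads", "minutes": 50, "discomfort": 1},
]

def plan(routes, objective):
    """Pick the route that minimizes the given objective function."""
    return min(routes, key=objective)

# Objective as stated: "get me there as fast as possible".
stated = plan(routes, lambda r: r["minutes"])

# Objective the user actually meant: fast, but heavily penalize discomfort.
intended = plan(routes, lambda r: r["minutes"] + 10 * r["discomfort"])

print(stated["name"])    # the literal optimum: reckless driving
print(intended["name"])  # the aligned optimum: legal highway driving
```

The gap between `stated` and `intended` is the alignment problem in miniature: nothing malevolent happens, the optimizer simply does what it was asked.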


Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the headlines?

The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones, which experts viewed as decades away merely five years ago, have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control?

FLI’s position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI, the best way to win that race is not to impede the former but to accelerate the latter, by supporting AI safety research.


A captivating conversation is taking place about the future of artificial intelligence and what it will/should mean for humanity. There are fascinating controversies where the world’s leading experts disagree, such as AI’s future impact on the job market; if/when human-level AI will be developed; whether this will lead to an intelligence explosion; and whether this is something we should welcome or fear. But there are also many examples of boring pseudo-controversies caused by people misunderstanding and talking past each other. To help ourselves focus on the interesting controversies and open questions — and not on the misunderstandings — let’s clear up some of the most common myths.

AI myths


The first myth regards the timeline: how long will it take until machines greatly supersede human-level intelligence? A common misconception is that we know the answer with great certainty. One popular myth is that we know we’ll get superhuman AI this century. In fact, history is full of technological over-hyping. Where are those fusion power plants and flying cars we were promised we’d have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field. For example, John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College […] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

On the other hand, a popular counter-myth is that we know we won’t get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can’t say with great confidence that the probability is zero this century, given the dismal track record of such techno-sceptic predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 — less than 24 hours before Szilard’s invention of the nuclear chain reaction — that nuclear energy was “moonshine.” And Astronomer Royal Richard Woolley called interplanetary travel “utter bilge” in 1956. The most extreme form of this myth is that superhuman AI will never arrive because it’s physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer and that there’s no law of physics preventing us from building even more intelligent quark blobs.

There have been a number of surveys asking AI researchers how many years from now they think we’ll have human-level AI with at least 50% probability. All these surveys have the same conclusion: the world’s leading experts disagree, so we simply don’t know. For example, in such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by the year 2045, but some researchers guessed hundreds of years or more.

There’s also a related myth that people who worry about AI think it’s only a few years away. In fact, most people on record worrying about superhuman AI guess it’s still at least decades away. But they argue that as long as we’re not 100% sure that it won’t happen this century, it’s smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.


Another common misconception is that the only people harbouring concerns about AI and advocating AI safety research are Luddites who don’t know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don’t need to be convinced that risks are high, merely non-negligible — just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.

It may be that media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. As a result, two people who only know about each other’s positions from media quotes are likely to think they disagree more than they really do. For example, a techno-sceptic who only read about Bill Gates’s position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent. Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng’s position except his quote about overpopulation on Mars may mistakenly think he doesn’t care about AI safety, whereas in fact, he does. The crux is simply that because Ng’s timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.


Many AI researchers roll their eyes when they see headlines like “Stephen Hawking warns that rise of robots may be disastrous for mankind,” and many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colours, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behaviour: the behaviour of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”
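This narrow, mechanical sense of “goal” can be sketched in a few lines (a toy pursuit loop for illustration, not any real guidance system):

```python
# Minimal sketch of goal-directed behaviour: a pursuer that simply
# reduces its distance to a target each step. Nothing here is conscious,
# yet the behaviour is most economically described as having a goal.

def step_toward(pos, target, speed=2.0):
    """Move `pos` a fixed distance straight toward `target`."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:          # close enough: goal reached
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

pos, target = (0.0, 0.0), (10.0, 0.0)
for _ in range(10):
    pos = step_toward(pos, target)
print(pos)  # the pursuer converges on the target
```

Whether such a system “experiences” purpose is irrelevant; what matters is that its behaviour reliably closes in on the target.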

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.


Not wasting time on the above-mentioned misconceptions lets us focus on true and interesting controversies where even the experts disagree. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way?






  • Machine Intelligence Research Institute: A non-profit organization whose mission is to ensure that the creation of smarter-than-human intelligence has a positive impact.
  • Centre for the Study of Existential Risk (CSER): A multidisciplinary research centre dedicated to the study and mitigation of risks that could lead to human extinction.
  • Future of Humanity Institute: A multidisciplinary research institute bringing the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.
  • Partnership on AI: Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
  • Global Catastrophic Risk Institute: A think tank leading research, education, and professional networking on global catastrophic risk.
  • Organizations Focusing on Existential Risks: A brief introduction to some of the organizations working on existential risks.
  • 80,000 Hours: A career guide for AI safety researchers.

Many of the organizations listed on this page and their descriptions are from a list compiled by the Global Catastrophic Risk Institute.



Robots That Teach Each Other

What if robots could figure out more things on their own and share that knowledge among themselves?

Many of the jobs humans would like robots to perform, such as packing items in warehouses, assisting bedridden patients, or aiding soldiers on the front lines, aren’t yet possible because robots still don’t recognize and easily handle common objects. People generally have no trouble folding socks or picking up water glasses, because we’ve gone through “a big data collection process” called childhood, says Stefanie Tellex, a computer science professor at Brown University. For robots to do the same types of routine tasks, they also need access to reams of data on how to grasp and manipulate objects. Where does that data come from? Typically it has come from painstaking programming. But ideally, robots could get some information from each other.

That’s the theory behind Tellex’s “Million Object Challenge.” The goal is for research robots around the world to learn how to spot and handle simple items from bowls to bananas, upload their data to the cloud, and allow other robots to analyze and use the information.

Tellex’s lab in Providence, Rhode Island, has the air of a playful preschool. On the day I visit, a Baxter robot, an industrial machine produced by Rethink Robotics, stands among oversized blocks, scanning a small hairbrush. It moves its right arm noisily back and forth above the object, taking multiple pictures with its camera and measuring depth with an infrared sensor. Then, with its two-pronged gripper, it tries different grasps that might allow it to lift the brush. Once it has the object in the air, it shakes it to make sure the grip is secure. If so, the robot has learned how to pick up one more thing.

The robot can work around the clock, frequently with a different object in each of its grippers. Tellex and her graduate student John Oberlin have gathered—and are now sharing—data on roughly 200 items, starting with such things as a child’s shoe, a plastic boat, a rubber duck, a garlic press and other cookware, and a sippy cup that originally belonged to her three-year-old son. Other scientists can contribute their robots’ own data, and Tellex hopes that together they will build up a library of information on how robots should handle a million different items.

Eventually, robots confronting a crowded shelf will be able to “identify the pen in front of them and pick it up,” Tellex says. Projects like this are possible because many research robots use the same standard framework for programming, known as ROS. Once one machine learns a given task, it can pass the data on to others—and those machines can upload feedback that will, in turn, refine the instructions given to subsequent machines. Tellex says the data about how to recognize and grasp any given object can be compressed to just five to 10 megabytes, about the size of a song in your music library.

Tellex was an early partner in a project called RoboBrain, which demonstrated how one robot could learn from another’s experience. Her collaborator Ashutosh Saxena, then at Cornell, taught his PR2 robot to lift small cups and position them on a table. Then, at Brown, Tellex downloaded that information from the cloud and used it to train her Baxter, which is physically different, to perform the same task in a different environment. Such progress might seem incremental now, but in the next five to 10 years, we can expect to see “an explosion in the ability of robots,” says Saxena, now CEO of a startup called Brain of Things. As more researchers contribute to and refine cloud-based knowledge, he says, “robots should have access to all the information they need, at their fingertips.”
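The share-and-reuse workflow described above — record grasps, compress the data, upload it, let another robot download what worked — can be sketched as follows. The field names and format here are assumptions for illustration; this is not the actual Million Object Challenge dataset or a real ROS message type.

```python
# Hedged sketch of a per-object record in a shared grasp library.
import json
import zlib

record = {
    "object_id": "hairbrush-0042",
    "images": ["scan_000.png", "scan_001.png"],   # camera views
    "depth_map": "scan_depth.bin",                # infrared depth data
    "grasps": [
        # candidate gripper poses with success measured by shake-testing
        {"pose": [0.12, -0.03, 0.25, 0.0, 1.57, 0.0], "success": True},
        {"pose": [0.10, 0.00, 0.22, 0.0, 1.20, 0.0], "success": False},
    ],
}

# Serialize and compress before uploading, keeping per-object data small
# (the article cites roughly five to 10 megabytes per object).
blob = zlib.compress(json.dumps(record).encode("utf-8"))

# A different robot can download the blob and reuse only the grasps
# that succeeded, instead of re-learning the object from scratch.
shared = json.loads(zlib.decompress(blob))
good_grasps = [g["pose"] for g in shared["grasps"] if g["success"]]
print(len(good_grasps), "proven grasp(s) for", shared["object_id"])
```

The point of the sketch is the protocol, not the numbers: once records share a common format, any robot that speaks it can benefit from every other robot’s trial and error.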

– Amanda Schaffer is a freelance journalist who writes about science and medicine for Slate, the New York Times, and other publications.