We should build a baby-brained artificial intelligence

Toddler smarts will drive AI innovation. If we want our machines to possess anything approximating human intelligence, maybe we should think about giving them a childhood, too.

Alison Gopnik, psychologist (photograph by Christie Hemm Klok)

Alison Gopnik’s career began with a psychology experiment she now considers ridiculous. Aiming to understand how 15-month-olds connect words with abstract concepts (daddy = caregiver), she decided to visit nine kids once a week for a year. The then Oxford graduate student would record everything they said as part of her dissertation. “It was absurd for a million reasons,” says Gopnik, holed up on a winter Friday in her office at the University of California at Berkeley, where she is a professor of developmental psychology. “If a child had moved away, if there weren’t any takeaways after the year, or any number of things, all that work would have been gone,” she says, before adding, “I would never allow a student of mine to do anything like that today.”

Though her experiment didn’t solve any language-acquisition mysteries, it did overturn her assumptions about childhood learning and intelligence—and it altered her career path. Now her research has drawn the interest of artificial-intelligence scientists who want to adapt her insights to their machine-learning algorithms. What she learned about kids’ smarts while a grad student still holds sway—for her field and possibly theirs. “Instead of thinking about children as these kinds of starter adults, I realized they were profoundly different,” says Gopnik, now 62, with grown children and grandchildren of her own. “The way they use words, the meanings they express, the way they express them—none of it matched how adults think or speak.”

Today, Gopnik oversees her own cognitive development lab at UC Berkeley and is the author of several books on early childhood learning and development. She’s a TED alum, a Wall Street Journal columnist, and has attained that singular intellectual height—crossing over into pop culture by appearing on shows like Good Morning America and The Colbert Report. Gopnik’s message: Adult cognitive primacy is an illusion. Kids, her research shows, are not proto-adults with fruit-fly-like attention spans, but in fact our occasional superiors. “Children, even very young children,” she says, “are in many ways smarter, more inventive, and better at learning than adults.”

The reason: size and shape matter. Research shows that the bulk and structure of a child’s brain confer particular cognitive strengths and weaknesses. The same goes for adults. For example, a developed prefrontal cortex allows grown-ups to focus, plan, and control their impulses: valuable skills that let us write magazine articles and avoid jail time. But evidence suggests a developed cortex can also make it hard to learn new or surprising concepts and can impede creative thinking. Toddler brains, constantly abuzz with fresh neural connections, are more plastic and adaptive. This makes them bad at remembering to put on pants but surprisingly good at solving abstract puzzles and extracting unlikely principles from extremely small amounts of information.

These are handy skills. It turns out a lot of smart people want to think this way—or want to build machines that do. Artificial-intelligence researchers at places like Google and Uber hope to use this unique understanding of the world’s most powerful neural-learning apparatus—the one between a toddler’s ears—to create smarter self-driving cars. Coders can create software that beats us at board games, but it’s harder to apply those skills to a different task—say, traffic-pattern analysis. Kids, on the other hand, are geniuses at this kind of generalized learning. “It’s not just that they figure out how one game or machine works,” says Gopnik. Once they’ve figured out how your iPhone works, she says, they’re able to take that information and use it to figure out the childproof sliding lock on the front door.

BABY TALK: Kids, Gopnik tells people, are the R&D unit of our species. (Courtesy TED Talks/YouTube)

Cracking the codes of these little code breakers wasn’t Gopnik’s original career plan. As an undergrad, she began studying life’s big problems, toiling in the field of analytic philosophy. Back then, none of her peers pondered the thinking of kids. But Gopnik became convinced kids were key to unlocking one of the oldest epistemological queries: How do we know stuff about the world around us? Borrowing the brain-as-computer model, Gopnik sought to ask questions about the software running this little human machine and allowing it to perform complicated functions. “Kids are the ones doing more generalized learning than anybody else,” she says, “so why wouldn’t you want to understand why they’re so good at it?”

The advantages of installing a preschool perspective into machines, she says, can be understood by considering two popular, but opposing, AI strategies: bottom-up and top-down learning. The former works the way you expect: Say you want a computer to learn to recognize a cat. With a bottom-up or “deep-learning” strategy, you’d feed it 50,000 photos of furry felines and let it extract statistics from those examples. A top-down strategy, on the other hand, requires just one example of a cat. A system using this strategy takes that single picture, builds a model of “catness” (whiskers, fur, vertical pupils, etc.), and then uses it to try to identify other cats, revising its cat hypothesis as it goes, much like a scientist would.
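
Purely as an illustration of the contrast described above (none of this comes from Gopnik’s lab or from any particular AI system; the feature names, probabilities, and thresholds are invented), here is a minimal Python sketch. The bottom-up learner extracts statistics from tens of thousands of labeled examples, while the top-down learner builds a model of “catness” from a single example and revises that hypothesis as new evidence comes in.

```python
import random

# Toy feature set for telling cats from other animals (illustrative only).
FEATURES = ["whiskers", "fur", "vertical_pupils", "retractable_claws"]

def sample_animal(is_cat):
    """Fake data generator: cats usually show these features, non-cats rarely do."""
    p = 0.9 if is_cat else 0.2
    return {f: int(random.random() < p) for f in FEATURES}

# --- Bottom-up: extract statistics from many labeled examples ---
def bottom_up_train(n=50_000):
    """Count how often each feature appears in cats vs. non-cats (+1 smoothing)."""
    counts = {f: {"cat": 1, "other": 1} for f in FEATURES}
    for _ in range(n):
        is_cat = random.random() < 0.5
        animal = sample_animal(is_cat)
        for f in FEATURES:
            if animal[f]:
                counts[f]["cat" if is_cat else "other"] += 1
    return counts

def bottom_up_is_cat(animal, counts):
    """Score by how strongly each observed feature favors 'cat' over 'other'."""
    score = 0.0
    for f in FEATURES:
        if animal[f]:
            score += counts[f]["cat"] / (counts[f]["cat"] + counts[f]["other"]) - 0.5
    return score > 0

# --- Top-down: build a model of "catness" from one example, then revise it ---
class CatHypothesis:
    def __init__(self, first_cat):
        # The initial model is just the single example's feature profile.
        self.prototype = {f: float(first_cat[f]) for f in FEATURES}
        self.seen = 1

    def is_cat(self, animal, threshold=0.6):
        """Classify by similarity to the current prototype."""
        matches = sum(1 for f in FEATURES if animal[f] == round(self.prototype[f]))
        return matches / len(FEATURES) >= threshold

    def revise(self, confirmed_cat):
        """Fold a new confirmed cat into the prototype, like revising a hypothesis."""
        self.seen += 1
        for f in FEATURES:
            self.prototype[f] += (confirmed_cat[f] - self.prototype[f]) / self.seen

if __name__ == "__main__":
    stats = bottom_up_train()                        # many examples, statistics first
    hypothesis = CatHypothesis(sample_animal(True))  # one example, model first
    test = sample_animal(True)
    print("bottom-up says cat:", bottom_up_is_cat(test, stats))
    print("top-down says cat:", hypothesis.is_cat(test))
    hypothesis.revise(test)  # update the "catness" hypothesis with new evidence
```

Real deep-learning and probabilistic-model systems are vastly more sophisticated, but the division of labor is the same: many examples and extracted statistics on one side, a single example and a revisable hypothesis on the other.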

Children employ both methods at once. They’re good at figuring out things and extracting statistics, says Gopnik. And they use that data to come up with new theories and structured pictures of the world. Successfully distilling both knowledge-building approaches into algorithms might produce artificial intelligence that can finally do more than just beat us at Go and recognize animals. It might also, Gopnik hopes, change outmoded ideas that we all seem to share about intelligence. “We still tend to think that a 35-year-old male professor is the ultimate goal of human cognition,” she says, “that everything else is just leading up to or deteriorating from that cognitive peak.”

That model doesn’t make sense for a variety of reasons. Studies from fields like evolutionary biology, neuroscience, and developmental psychology suggest we simply have different cognitive strengths and strategies at different stages of our lives. “Children will have one set of ideas about how people and the world work when they’re 2, and then another set when they’re 3, and another set when they’re 5,” says Gopnik. “It’s like they’re actively trying to think up a coherent picture of the world around them, and then constantly changing that picture based on the observations they make.”

That frenetic hypothesis formation—and ongoing reformation—isn’t a bug; it’s a highly desired feature. And if we want our machines to possess anything approximating human intelligence, maybe we should think about giving them a childhood too.
