Researchers trained this AI to ‘think’ like a baby – here’s what happened

In a world filled with opposing views, let’s draw attention to something we can all agree on: if I show you my pen and then hide it behind my back, my pen still exists – even though you can no longer see it. We can all agree that it still exists, and that it probably has the same shape and color it had before it went behind my back. This is just common sense.

These common-sense laws of the physical world are universally understood by humans. Even two-month-old infants share this understanding. But scientists are still puzzled by some aspects of how we gain this basic understanding. And we have yet to build a computer that can rival the common sense of a typically developing infant.

New research by Luis Piloto and colleagues at Princeton University – which I reviewed for an article in Nature Human Behaviour – takes a step towards filling this gap. The researchers created a deep-learning artificial intelligence (AI) system that acquired an understanding of some common-sense laws of the physical world.

The results will help build better computer models that simulate the human mind, by approaching a task with the same assumptions an infant has.

Childish behavior

Typically, AI models start with a blank slate and are trained on data with many different examples, from which the model constructs knowledge. But research on infants suggests that this is not what babies do. Instead of building knowledge from scratch, infants start with some principled expectations about objects.

For example, they expect that if they watch an object become hidden behind another object, the first object will continue to exist. This is a core assumption that starts them off in the right direction. Their knowledge then becomes more refined with time and experience.

The exciting finding from Piloto and colleagues is that a deep-learning AI system modeled on what babies do outperforms a system that begins with a blank slate and tries to learn from experience alone.

Sliding cubes and balls bouncing into walls

The researchers compared the two approaches. In the blank-slate version, the AI model was given several visual animations of objects. In some examples, a cube slid down a ramp; in others, a ball bounced into a wall.

The model detected patterns across the various animations and was then tested on its ability to predict outcomes in new visual animations of objects. Its performance was compared with that of a model that had “principled expectations” built in before experiencing any visual animations.
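
To make this concrete, here is a minimal sketch of how a “surprise” signal can be scored as the gap between what a model predicts and what it then observes. This is illustrative only – the toy frames and the surprise_score function below are my own invention, and the actual study measures prediction error over learned object representations rather than raw pixels:

```python
import numpy as np

def surprise_score(predicted_frames, observed_frames):
    """Mean squared error between predicted and observed frames.

    A higher score means the animation departed more from the model's
    expectations -- the computational analogue of an infant looking
    longer at a "magical" outcome. (Toy sketch, not the study's model.)
    """
    predicted = np.asarray(predicted_frames, dtype=float)
    observed = np.asarray(observed_frames, dtype=float)
    return float(np.mean((predicted - observed) ** 2))

# Toy 1-D world, 8 positions wide, 2 frames long.
expected = np.zeros((2, 8))
expected[0, 3] = 1.0  # frame 1: ball seen at position 3
expected[1, 4] = 1.0  # frame 2: model predicts it slides to position 4

actual = np.zeros((2, 8))
actual[0, 3] = 1.0    # frame 1: ball at position 3
# frame 2: the ball has vanished -- an object blinking out of existence

print(surprise_score(expected, actual))  # nonzero: expectation violated
```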

These principles were based on the expectations infants have about how objects behave and interact. For example, infants expect two objects not to pass through each other.

If you show an infant a magic trick in which you violate this expectation, they can detect it. They reveal this knowledge by looking significantly longer at events with unexpected, or “magical”, outcomes than at events where the outcome is expected.

Infants also expect that an object should not be able to simply blink in and out of existence, and they can likewise detect when this expectation is violated.

Piloto and colleagues found that the deep-learning model that started with a blank slate did a good job, but the model based on object-centered coding inspired by infant cognition performed significantly better.

The latter model predicted how an object would move more accurately, was more successful at applying its expectations to new animations, and learned from a smaller set of examples (managing this from the equivalent of just 28 hours of video).
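
To give a rough sense of what object-centered coding buys, here is a sketch – with a hypothetical ObjectSlot schema I have made up for illustration, not the study’s actual architecture – in which the model receives a structured description of each object rather than a raw grid of pixels, so that assumptions like persistence and continuous motion can be built directly into the prediction step:

```python
from dataclasses import dataclass, replace

@dataclass
class ObjectSlot:
    """One object, described by attributes rather than pixels.

    Hypothetical schema for illustration; the study's model learns its
    own object representations rather than being handed these fields.
    """
    x: float
    y: float
    vx: float
    vy: float
    shape: str

def predict_next(obj: ObjectSlot, dt: float = 1.0) -> ObjectSlot:
    # The "principled expectation" is baked in: the object keeps
    # existing and moves smoothly along its current trajectory.
    return replace(obj, x=obj.x + obj.vx * dt, y=obj.y + obj.vy * dt)

ball = ObjectSlot(x=0.0, y=1.0, vx=0.5, vy=0.0, shape="ball")
print(predict_next(ball))  # the ball persists and moves continuously
```

A blank-slate model has to discover even these regularities from pixels alone, which helps explain why it needs far more examples.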

An innate understanding?

Clearly, learning through time and experience is important, but that’s not the whole story. This research conducted by Piloto and colleagues contributes insight into the age-old question of what can be innate in humans and what can be learned.

Beyond that, it defines new boundaries for the role perceptual data can play when artificial systems acquire knowledge. And it shows how studies of babies can help build better AI systems that simulate the human mind.

Article by Susan Hespos, Department of Psychology, Northwestern University, Evanston, Illinois, USA, and Professor of Infant Studies at the MARCS Institute, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
