What is reinforcement learning? An AI researcher explains a key method of teaching machines – and how it relates to training your dog

Understanding intelligence and creating intelligent machines are grand scientific challenges of our times. The ability to learn from experience is a cornerstone of intelligence for machines and living beings alike.

In a remarkably prescient 1948 report, Alan Turing – the father of modern computer science – proposed the construction of machines that display intelligent behavior. He also discussed the “education” of such machines “by means of rewards and punishments.”

Turing’s ideas ultimately led to the development of reinforcement learning, a branch of artificial intelligence. Reinforcement learning designs intelligent agents by training them to maximize rewards as they interact with their environment.

As a machine learning researcher, I find it fitting that reinforcement learning pioneers Andrew Barto and Richard Sutton were awarded the 2024 ACM Turing Award.

What is reinforcement learning?

Animal trainers know that animal behavior can be influenced by rewarding desirable behaviors. A dog trainer gives the dog a treat when it does a trick correctly. This reinforces the behavior, and the dog is more likely to do the trick correctly the next time. Reinforcement learning borrowed this insight from animal psychology.

But reinforcement learning is about training computational agents, not animals. An agent can be a piece of software, like a chess-playing program, or an embodied entity, like a robot learning to do household chores. Similarly, an agent's environment can be virtual, like the chessboard or the designed world of a video game, or physical, like a house where a robot works.

Just like animals, an agent can perceive aspects of its environment and take actions. A chess-playing agent can access the chessboard configuration and make moves. A robot can sense its surroundings with cameras and microphones. It can use its motors to move about in the physical world.

Agents also have goals that their human designers program into them. A chess-playing agent’s goal is to win the game. A robot’s goal might be to assist its human owner with household chores.

The reinforcement learning problem in AI is how to design agents that achieve their goals by perceiving and acting in their environments. Reinforcement learning makes a bold claim: Any goal can be achieved by designing a numerical signal, called the reward, and having the agent maximize the total reward it receives.
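To make this concrete, here is a minimal sketch of reward-driven learning in Python. Everything in it is invented for illustration: a hypothetical agent repeatedly picks one of three "tricks," the environment pays a treat (a reward of 1) with a hidden probability for each trick, and the agent uses nothing but the rewards it collects to learn which trick earns the most treats on average.

```python
import random

# Hidden from the agent: how often each trick earns a treat.
TREAT_CHANCE = [0.2, 0.5, 0.8]

value = [0.0, 0.0, 0.0]   # the agent's running estimate of each trick's reward
counts = [0, 0, 0]        # how many times each trick has been tried

total_reward = 0.0
for t in range(1000):
    if random.random() < 0.1:
        action = random.randrange(3)          # explore: try a random trick
    else:
        action = value.index(max(value))      # exploit: pick the best estimate

    # The environment responds with a reward of 1 (treat) or 0 (no treat).
    reward = 1.0 if random.random() < TREAT_CHANCE[action] else 0.0

    # Update the running average reward for the chosen trick.
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]
    total_reward += reward

print("learned reward estimates:", [round(v, 2) for v in value])
print("total reward collected:", total_reward)
```

After enough trials, the agent's estimates approach the hidden treat probabilities and it favors the best trick, much as a dog learns which behavior reliably produces a treat.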

Reinforcement learning from human feedback is key to keeping AIs aligned with human goals and values.

Because of the wide variety of possible goals, researchers do not know whether this claim is actually true. That is why it is often referred to as the reward hypothesis.

Sometimes it is easy to pick a reward signal corresponding to a goal. For a chess-playing agent, the reward can be +1 for a win, 0 for a draw, and -1 for a loss. It is less clear how…
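For the chess case just described, the reward signal is simple enough to write as a tiny function. This is only a sketch; the outcome labels are hypothetical placeholders, not the API of any real chess library.

```python
# The chess reward described above: +1 for a win, 0 for a draw, -1 for a loss.
# The outcome strings are hypothetical labels, not a real chess library's API.
def chess_reward(outcome: str) -> int:
    return {"win": 1, "draw": 0, "loss": -1}[outcome]
```

Note that this signal arrives only at the end of a game; every move before that gets no immediate feedback, which is part of what makes learning from such a reward hard.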
