Meta’s AI chief publishes paper on creating ‘autonomous’ artificial intelligence | Tech Rasta


Yann LeCun, VP and AI chief at Meta, has published a new paper outlining his vision for “autonomous” AIs that can learn and experience the world in a more human-like way than today’s machine learning models.

In the nearly 70 years since AI was first introduced to the public, machine learning has grown from academic obscurity into everyday use. Yet, as quickly as we’ve come to rely on such computing power, one question has haunted the field since its inception: Will these systems one day gain enough intelligence to match or surpass humanity?

Despite some dubious recent claims (for example, a Google engineer who insisted a chatbot had become sentient, and who was later fired), we’re a long way from that reality. Instead, one of the biggest obstacles to any robot-overlord scenario is that, compared to animals and humans, current AI and machine learning systems lack reasoning ability, a capacity essential to developing “autonomous” machine intelligence: systems that learn directly from observations of the real world rather than through lengthy training sessions for a specific task. In other words, AI that can learn on the fly.

Now, new research by LeCun, published earlier this month on OpenReview, proposes a way to address this problem by training learning algorithms to learn more efficiently, since AI has proven not very good at predicting and planning for changes in the real world. Humans and our animal companions, by contrast, are able to gain an enormous amount of knowledge about how the world works through observation alone, with remarkably little physical interaction.

LeCun, who not only leads AI efforts at Meta but is also a professor at New York University, has spent his storied career developing the learning systems that many modern AI applications rely on today. Now trying to give these machines better insight into how the world works, he could arguably be hailed as a father of the next generation of AI. In 2013, he founded the Facebook AI Research (FAIR) group, the company’s first foray into AI research, before stepping down as its director a few years later to become the company’s chief AI scientist.

Since then, Meta has achieved varying degrees of success in its quest to dominate the ever-evolving field. In 2018, its researchers trained an AI to open closed eyes in photographs, in hopes of making it easier for users to edit their digital photos. Earlier this year, Meta’s chatbot BlenderBot 3 (which proved surprisingly hostile toward its own creator) sparked debate over AI ethics and biased data. More recently, Meta’s Make-A-Video tool, which can animate text prompts and single or paired images into video, added fuel to the fast-rising field of AI-generated art.

Consider how teenagers can learn to drive with just a few dozen hours of practice, without ever needing to crash. Machine learning systems, on the other hand, need to be trained on enormous amounts of data before they can complete the same task.

“A car would have to run off cliffs a number of times before it realizes it’s a bad idea,” LeCun said while presenting his work at UC Berkeley on Tuesday. “And then a few thousand more times before it learns how not to run off the cliff.” The difference, LeCun notes, is that humans and animals have common sense.

The concept of common sense can be boiled down to practical judgment, which LeCun describes in the paper as a collection of models that help an organism judge the difference between what is likely, what is possible, and what is impossible. Such a skill allows a person to explore their environment, fill in missing information, and imagine novel solutions to unfamiliar problems.

However, since scientists have yet to imbue AI and machine learning algorithms with any of these capabilities, we tend to take the gift of common sense for granted. In the same talk, LeCun pointed out that many modern training methods, such as reinforcement learning (a technique based on rewarding desirable behaviors and punishing undesirable ones), still fall short of human reliability in real-world tasks.

“It’s a practical problem because we really want machines with common sense. We want self-driving cars, we want domestic robots, we want intelligent virtual assistants,” LeCun said.

So, with the goal of advancing AI research over the next decade, LeCun’s paper proposes a framework designed to reduce the number of actions a system needs in order to successfully learn and perform a given task.

Similar to how different areas of the brain are responsible for different functions of the body, LeCun proposes a model for autonomous intelligence composed of five separate yet configurable modules. One of the most complex parts of the proposed architecture, the world model module, works to predict the state of the world and to simulate the outcomes of possible actions, much like a simulator. Because a single world model engine is shared in this way, knowledge about how the world works can easily be reused across tasks. In some ways, it may resemble memory.
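To make the division of labor concrete, here is a deliberately tiny sketch of that idea in Python. None of this code comes from LeCun’s paper, which describes the modules conceptually rather than as an implementation; the class names, the one-dimensional toy “world,” and the candidate-action planner are all invented here purely to illustrate how a world model lets an agent plan in imagination instead of by trial and error.

```python
# Illustrative sketch only: a hypothetical agent loop loosely inspired by
# LeCun's modular proposal. All names and dynamics are invented here.

class WorldModel:
    """Predicts the next state of a toy one-dimensional world."""
    def predict(self, state: float, action: float) -> float:
        # Toy dynamics: an action simply nudges the state.
        return state + action

class Cost:
    """Scores how undesirable a predicted state is (lower is better)."""
    def __init__(self, goal: float):
        self.goal = goal
    def __call__(self, state: float) -> float:
        return abs(state - self.goal)

class Actor:
    """Plans by imagining outcomes with the world model before acting."""
    def __init__(self, world_model: WorldModel, cost: Cost):
        self.world_model = world_model
        self.cost = cost
    def act(self, state: float, candidates: list[float]) -> float:
        # Choose the action whose *predicted* outcome has the lowest cost,
        # i.e. plan in imagination rather than by real-world trial and error.
        return min(candidates,
                   key=lambda a: self.cost(self.world_model.predict(state, a)))

world_model = WorldModel()
actor = Actor(world_model, Cost(goal=5.0))

state = 0.0
for _ in range(5):
    action = actor.act(state, candidates=[-1.0, 0.0, 1.0, 2.0])
    state = world_model.predict(state, action)
print(state)  # prints 5.0
```

In this toy loop the actor never “crashes”: every candidate action is evaluated against the world model’s prediction first, which is the planning-in-imagination behavior the article contrasts with data-hungry trial-and-error training.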

Autonomous systems still have a lot of hard work ahead before they can learn to deal with uncertainty, but in a chaotic and unpredictable world like ours, it’s a problem we’ll undoubtedly have to tackle sooner rather than later. For now, though, dealing with that mess is part of what makes us human.

Meta did not respond to a request for comment about the work.
