Meta AI chief says large language models won’t reach human intelligence

Meta’s artificial intelligence chief said the large language models that power generative AI products such as ChatGPT will never achieve the ability to reason and plan like humans, as he focuses instead on a radical alternative approach to creating ‘superintelligence’ in machines.

Yann LeCun, chief AI scientist at the social media giant that owns Facebook and Instagram, said LLMs have “a very limited understanding of logic . . . do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan . . . hierarchically”.

In an interview with the Financial Times, he argued against relying on advancing LLMs in the quest for human-level intelligence, as these models can only answer prompts accurately if they have been fed the right training data and are therefore “intrinsically unsafe”.

Instead, he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence, although he said it could take a decade to realize this vision.

Meta has poured billions of dollars into developing its own LLMs as generative AI has exploded, aiming to overtake rival tech groups including Microsoft-backed OpenAI and Alphabet’s Google.

LeCun leads a team of approximately 500 employees in Meta’s Fundamental AI Research (Fair) lab. They are working to create AI that can develop common sense and learn how the world works in similar ways to humans, in an approach known as “world modeling.”

The Meta AI chief’s experimental vision is a potentially risky and costly gamble for the social media group at a time when investors are craving quick returns on AI investments.

Last month, Meta lost nearly $200 billion in value as CEO Mark Zuckerberg vowed to increase spending and transform the social media group into “the world’s leading AI company,” leaving Wall Street investors worried about rising costs with little immediate sales potential.

“We’re at the point where we think we’re on the cusp of perhaps the next generation of AI systems,” LeCun said.

LeCun’s comments come as Meta and its rivals push forward with ever-improving LLMs. Figures like OpenAI chief Sam Altman believe they represent a crucial step toward creating artificial general intelligence (AGI) – the point at which machines have greater cognitive capabilities than humans.

OpenAI released its new, faster GPT-4o model last week, and Google unveiled a new “multimodal” artificial intelligence agent that can answer real-time questions about video, audio and text, called Project Astra, powered by an improved version of its Gemini model.

Meta also launched its new Llama 3 model last month. The company’s head of global affairs, Sir Nick Clegg, said the latest LLM had “vastly improved capabilities such as reasoning” – the ability to apply logic to questions. For example, the system can infer that someone suffering from a headache, sore throat and runny nose probably has a cold, but can also recognize that allergies could be causing the symptoms.

However, LeCun said this evolution of LLMs was superficial and limited, with the models learning only when human engineers step in to train them on the relevant information, rather than the AI reaching conclusions organically the way humans do.

“It certainly appears to most people as reasoning – but it’s mostly exploiting knowledge accumulated from a lot of training data,” LeCun said, though he added: “[LLMs] are very useful despite their limitations.”

Google DeepMind has also spent several years exploring alternative methods for building AGI, including reinforcement learning, in which AI agents learn from their surroundings in a game-like virtual environment.

At an event in London on Tuesday, DeepMind’s chief Sir Demis Hassabis said what was missing from language models was that “they didn’t understand the spatial context you’re in . . . so that ultimately limits their usefulness.”

Meta founded its Fair lab in 2013 to pioneer AI research, hiring leading academics in the field.

However, in early 2023, Meta created a new GenAI team, led by chief product officer Chris Cox. The team poached many AI researchers and engineers from Fair, led the work on Llama 3 and integrated it into products such as Meta’s new AI assistants and image-generation tools.

The creation of the GenAI team came as some insiders claimed that an academic culture within the Fair Lab was partly responsible for Meta’s late arrival in the generative AI boom. Zuckerberg, under pressure from investors, has pushed for more commercial applications of AI.

However, according to people close to the company, LeCun remains one of Zuckerberg’s top advisers, thanks to his track record and reputation as one of the founding fathers of AI, having won a Turing Award for his work on neural networks.

“We refocused Fair on the long-term goal of human-level AI, mainly because GenAI is now focused on the things we have a clear path to,” LeCun said.

“[Achieving AGI] is not a product-design problem, it is not even a technology-development problem, it is primarily a scientific problem,” he added.

LeCun first published a paper on his views on world modeling in 2022, and Meta has since released two research models based on the approach.

Today he said Fair was testing a range of ideas for achieving human-level intelligence because “there’s a lot of uncertainty and research in this, [so] we cannot say which one will succeed or end up being adopted.”

In one approach, LeCun’s team feeds systems hours of video with frames deliberately omitted, then has the AI predict what will happen next. The aim is to mimic how children learn by passively observing the world around them.

He also said Fair was researching building “a universal text encoding system” that would allow a system to process abstract representations of knowledge in text, which could then be applied to video and audio.

Some experts doubt whether LeCun’s vision is viable.

Aron Culotta, an associate professor of computer science at Tulane University, said that common sense had long been “a thorn in the side of AI”, and that teaching models causality was a challenge, leaving them “sensitive to these unexpected failures”.

A former Meta AI employee described world modeling as “vague fluff”, adding: “It feels like a lot of flag planting.”

Another current employee said Fair has yet to prove itself as a true rival to research groups like DeepMind.

In the longer term, LeCun believes the technology will power AI agents that allow users to interact via wearable technology, including augmented reality or “smart” glasses and electromyography (EMG) “bracelets.”

“[For AI agents] to be truly useful, they must have something resembling human-level intelligence,” he said.

Additional reporting by Madhumita Murgia in London

