Turing Award winner Yann LeCun: Starship is an engineering achievement, not a scientific one; today's AI still falls short of a cat in intelligence, memory, curiosity, and more.
Léon Bottou, who has known LeCun since 1986, says LeCun is "appropriately stubborn": he is willing to listen to other people's opinions, but is firmly determined to pursue what he believes is the right approach to building AI.
Translation | Eric Harrington
Recently, The Wall Street Journal interviewed Yann LeCun, known as the "father of CNNs" and one of the three "godfathers of artificial intelligence." The conversation centered on LeCun's criticism that today's Large Language Models (LLMs) are not truly intelligent, a view he voices frequently on his X (Twitter) account. The interview sparked heated discussion on Hacker News.
LeCun is not only one of the key driving forces behind the current AI boom, but also a rare dissenting voice at a time when many experts are seen as exaggerating both the power and the potential risks of artificial intelligence. While other renowned scholars claim we are approaching computers that can surpass or even replace human intelligence, LeCun has become one of the most credible skeptics.
If you follow LeCun on X, you will find that he is rather blunt, often getting into intense debates with those who tout the superhuman potential of generative AI, including Elon Musk and the two peers with whom he shares the "godfather of AI" title, Geoffrey Hinton and Yoshua Bengio. That is especially true of Hinton, a friend of LeCun's for nearly forty years, who recently won the Nobel Prize in Physics and has repeatedly warned of the existential threats posed by AI. One of the most memorable moments came when Hinton appeared on screen at the Beijing Zhiyuan (BAAI) Conference one year to remind the Chinese attendees of the risks of AI.
Yet LeCun believes that, useful as today's AI models are, they fall far short of our pets, let alone humans.
In this interview, LeCun was asked if we should be worried that AI will become so powerful that it poses a threat to us, and he humorously responded, "You'll have to forgive my French, but that's complete nonsense."
LeCun was born in the northern suburbs of Paris, and his interest in AI partly stems from the rebellious AI character HAL 9000 in Stanley Kubrick's 1968 science fiction classic "2001: A Space Odyssey." After obtaining his doctoral degree from the University of Paris, he worked at the renowned Bell Labs, where a series of significant inventions were born, ranging from transistors to lasers. In 2003, he joined New York University as a computer science professor, and a decade later became the director of AI research at what was then Facebook.
LeCun's Chinese name is "杨立昆" (Yang Likun). In the early days the Chinese tech community transliterated "LeCun" as "严勒村" and the like, but LeCun later displayed a calligraphy image of the name "杨立昆" during a presentation, which not only settled the pronunciation of his French surname for Chinese readers but also gave him a proper Chinese name.
In 2019, LeCun, Hinton, and Bengio were jointly awarded the Turing Award, the highest honor in computer science, earning them the moniker "AI pioneers." The award recognized their pioneering work on neural networks, which underpins many of today's most advanced AI systems, from OpenAI's ChatGPT to Tesla's self-driving systems.
In person, LeCun has a charm that is hard to resist: witty and humorous, he expresses his deep insights into the field with disarming directness. At 64, he still comes across as fashionable and slightly casual, which fits both his Parisian upbringing and his current role as a professor at New York University.
He often wears classic black Ray-Ban glasses, nearly identical to the AI-powered glasses Meta is currently promoting. LeCun himself once owned a pair of those AI-powered Ray-Bans, but dropped them into the sea on a sailing trip, ruining them. Sailing is one of his major hobbies, a reflection of his love of nature and adventure.
A Wall Street Journal reporter recalled that, sitting in a conference room at a Meta satellite office in New York City, LeCun radiated warmth and confidence, delivering sharp opinions with a smile that made you feel like a close friend being let in on an inside joke.
Given his track record and his leadership of Meta's AI research lab, LeCun's criticisms carry considerable weight.
Today, LeCun still co-authors papers with his PhD students at New York University, while at Meta he serves as Chief AI Scientist, overseeing one of the world's best-funded AI research organizations. He communicates regularly with CEO Mark Zuckerberg over WhatsApp, as Zuckerberg positions Meta as a major disruptive force in the AI boom against rivals such as Apple and OpenAI, winning widespread praise from developers by releasing the open-source Llama family of large models.
LeCun sees AI as a powerful tool. During the interview he gave many examples of how AI has become a crucial component of Meta, a company now valued at roughly $1.5 trillion, underpinning everything from real-time translation to content moderation across its products.
LeCun is adamant that current AI possesses no real intelligence, and he finds the way many people, especially at AI startups, are hyping its recent progress absurd. Many hope that AI based on large language models, such as OpenAI's, can in the short term yield so-called Artificial General Intelligence (AGI) that broadly exceeds human-level intelligence. OpenAI's Sam Altman said in September that AGI could arrive within "a few thousand days," and Elon Musk has suggested it might be achieved by 2026.
This spring he got into a fierce argument on X over Musk. It started when Musk announced that xAI had raised $6 billion in funding and posted a job advertisement declaring, "To join xAI, one must pursue the path of truth with the utmost rigor, unswayed by popular trends or political correctness."
Musk's talk of "pursuing truth" only provoked LeCun, who shot back that someone who constantly spreads conspiracy theories on X (including the claim that AI will one day destroy humanity) can hardly claim to be pursuing truth.
A war of words then broke out, with the two first trading jabs over academic credentials: Musk, cast as the "entrepreneur" of the pair, dismissed LeCun's earlier papers as "outdated."
"What have you done related to science since you call yourself a scientist?"
"I have published 80 technical papers this year."
"That you'd better give it all you've got! (Try Harder)"
"You're not my boss, so why do you care?"
From there the argument spiraled out of control, shifting from science to personal attacks and politics, and for months afterwards LeCun kept taking shots at Musk's political views.
It is worth noting, however, that when news broke of Starship's successful fifth test flight on October 13, LeCun made a surprising turn and posted congratulations under Musk's tweet.
The comments section promptly exploded. One popular comment asked him: "Didn't you say Musk has no scientific achievements?"
LeCun answered: "Indeed, this is not science."
"Then what is it?"
"This is engineering."
Evidently LeCun does, at least in part, recognize Musk's engineering achievements, though not everyone agrees with that assessment.
Beyond Musk, LeCun has also publicly disagreed with his close friends Hinton and Bengio over their repeated warnings that AI endangers humanity. Bengio has said that although he agrees with many of LeCun's views, they part ways on whether companies can be trusted to ensure that future superhuman AI is neither misused nor develops malicious intentions of its own. "I hope he is right, but I think we should not rely solely on competition and the profit motives of companies to protect the public and democracy," Bengio said. "That's why I believe we need government intervention."
LeCun believes such warnings are premature. When the resignation of OpenAI researchers in May sparked debate about the importance of controlling superintelligent AI, LeCun shot back: "In my view, before we 'urgently try to figure out how to control AI systems that are much smarter than we are,' we should first have some initial clue about how to design a system smarter than a house cat."
LeCun is fond of cat metaphors; he made exactly the same comparison earlier this year at the World Governments Summit. After all, a cat has a mental model of the physical world, persistent memory, some capacity for reasoning, and the ability to plan, none of which exist in today's "frontier" AI systems, including Meta's own.
Léon Bottou, who has known LeCun since 1986, says that LeCun is "appropriately stubborn": willing to listen to other people's viewpoints, but firmly determined to pursue the approach to AI he believes is correct.
Alexander Rives, a former doctoral student of LeCun's who has since founded his own AI startup, says his mentor's contrarian streak is deliberate: "He has the ability to see where there are gaps in the thinking within the field, and to point them out."
LeCun believes that true artificial general intelligence is a worthwhile goal to pursue, which is also the direction that Meta is striving for. "In the future, when people interact with their AI systems, whether it's smart glasses or other devices, we need these AI systems to possess human-level characteristics, truly have common sense, and behave like a human assistant," he said.
However, creating such capable AI could take decades, and today's dominant approach may never get there. The generative AI boom is driven by large language models and similar systems trained on vast amounts of data to mimic human expression. As each generation of models has grown more powerful, some experts have concluded that simply pouring more chips and more data into future AI will keep making it more capable, until it matches or even surpasses human intelligence. This is the rationale behind the enormous investments in ever-larger clusters of specialized chips for training AI.
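For readers who want that scaling rationale in concrete terms, here is a small illustrative sketch in Python (not from the interview, and with invented constants): empirical scaling laws such as the "Chinchilla" fit of Hoffmann et al. model an LLM's loss as a power law in parameter count N and training-token count D, which is why adding chips and data has so far kept improving these models.

```python
# Illustrative sketch of the scaling hypothesis described above.
# Empirical scaling laws (e.g., Hoffmann et al., 2022) model an LLM's loss as
#     L(N, D) = E + A / N**alpha + B / D**beta
# where N is the parameter count and D is the number of training tokens.
# All constants here are invented for illustration, not fitted values.
def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.7, A: float = 400.0, B: float = 4000.0,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Lower loss means better next-token prediction."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling parameters and data keeps pushing the predicted loss down, but the curve
# flattens toward the irreducible term E. LeCun's argument is that driving this loss
# lower is not the same as acquiring a world model, memory, reasoning, or planning.
for n in (1e9, 1e10, 1e11, 1e12):
    d = 20 * n  # a commonly cited rule of thumb: roughly 20 training tokens per parameter
    print(f"N={n:.0e}, D={d:.0e}: predicted loss ~ {predicted_loss(n, d):.3f}")
```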
LeCun believes that the current issue with AI systems lies in their design approach rather than their scale.
"No matter how many GPUs are packed into the data centers of technology giants worldwide, today's AI cannot achieve human-level general intelligence." He is betting on AI with fundamentally different ways of working that will pave the way for us to reach human-level intelligence. The future AI based on these assumptions may take various forms, but one of the projects that LeCun is excited about is the work that Meta AI is doing to extract videos from the real world. The idea is to create models that build a world model through visual information, similar to how young animals learn. Large language models like ChatGPT and other robots may one day only play a small role in systems that have common sense and human-like abilities, which will use a range of other technologies and algorithms to build.
Today's models really only predict the next word in a text. But they are so good at it that they fool us. Thanks to their enormous memory capacity, they can appear to be reasoning when in fact they are merely regurgitating what they were trained on. "We tend to think that entities or people who can express themselves, who can manipulate language, are smart, but that's not true," LeCun said. "Being able to manipulate language does not make you smart, and that is basically what large models are demonstrating."
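As a deliberately tiny, hypothetical illustration of what "predicting the next word" means (a toy count-based model, not how any production LLM is built), the snippet below generates text one token at a time by sampling whichever word tended to follow the current one in its miniature training corpus. Real LLMs replace the count table with a neural network over a vocabulary of tens of thousands of tokens, but the generation loop is the same next-token prediction LeCun is describing.

```python
# Toy next-token predictor: a bigram count model over a made-up corpus.
# Real LLMs run the same autoregressive loop with a neural network instead of counts.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the cat saw the dog .".split()

# Count, for each word, how often each other word follows it in the training data.
next_counts: dict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    next_counts[word][nxt] += 1

def sample_next(word: str) -> str:
    """Sample the next token in proportion to how often it followed `word` in training."""
    candidates, weights = zip(*next_counts[word].items())
    return random.choices(candidates, weights=weights)[0]

# Generate text one token at a time: at each step the model only predicts the next token.
token, output = "the", ["the"]
for _ in range(8):
    token = sample_next(token)
    output.append(token)
print(" ".join(output))
```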
In taking these positions, LeCun is not only pushing at the limits of current AI but also nudging the whole industry toward a deeper understanding, so that future AI technologies can genuinely serve the long-term interests of human society.
Original text: https://www.wsj.com/tech/ai/yann-lecun-ai-meta-aa59e2f5