Apple Research questions AI reasoning

A paper by Apple researcher Mehrdad Farajtabar and colleagues questions the reasoning ability of Large Language Models (LLMs), arguing that their so-called “reasoning” is really sophisticated pattern matching rather than true logical reasoning.
The team built GSM-Symbolic, a tool that generates problem variants from symbolic templates derived from the GSM8K test set, and found that current open-source models such as Llama, Phi, Gemma, and Mistral, as well as closed-source models such as GPT-4o and the o1 series, are highly sensitive to changes in proper nouns and numbers, suggesting a shallow grasp of the underlying mathematical concepts.
The experiments indicate that scaling up parameters and training data has not substantially improved LLM reasoning; the models have merely become “better pattern matchers.”

Six Apple AI researchers (one of them an intern) published the paper “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models” on the preprint platform arXiv, concluding that large models cannot reason formally. They note that the GSM8K benchmark, built from grade-school math questions, is widely used to assess the mathematical reasoning of large models, and that scores on it have improved significantly over the past few years; the question is whether mathematical reasoning has actually improved along with them.
Using symbolic templates, the researchers created an improved benchmark, GSM-Symbolic, which allows a more controlled assessment of reasoning. The results show that large models lack genuine logical reasoning: simply changing a numerical value in a problem, or adding a clause, significantly degrades performance. A minimal sketch of the templating idea follows below.
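The idea can be illustrated with a small, hypothetical Python sketch: fix the arithmetic structure of a grade-school word problem, then sample many surface variants by swapping proper nouns and numbers. The template, names, and number ranges below are invented for illustration and are not taken from the paper or its tooling.

```python
# Hypothetical GSM-Symbolic-style template (illustration only).
# The arithmetic structure is fixed; only names and numbers vary.
import random

TEMPLATE = (
    "{name} picked {x} apples on Monday and {y} apples on Tuesday. "
    "{name} then gave {z} apples to a friend. "
    "How many apples does {name} have left?"
)

NAMES = ["Sophie", "Liam", "Mei", "Omar"]


def sample_instance(rng: random.Random) -> tuple[str, int]:
    """Sample one concrete problem variant and its ground-truth answer."""
    x, y = rng.randint(5, 50), rng.randint(5, 50)
    z = rng.randint(1, x + y)  # keep the final answer non-negative
    name = rng.choice(NAMES)
    question = TEMPLATE.format(name=name, x=x, y=y, z=z)
    answer = x + y - z  # the answer depends only on the template, not the surface wording
    return question, answer


if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        question, answer = sample_instance(rng)
        print(question, "->", answer)
```

Because every sampled variant shares the same underlying arithmetic, a model that truly reasons should perform identically on all of them; the accuracy drops the paper reports therefore come purely from the changed names and numbers.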

Original text: https://arxiv.org/pdf/2410.05229

Meta researcher says large models are dumber than cats

Yann LeCun, a senior researcher at Meta and a professor at New York University, believes that concerns about AI threatening humanity are nonsense. He likes the analogy of cats: a cat has a mental model of the physical world, persistent memory, and some capacity for reasoning and planning, none of which today’s most advanced large models possess. LeCun won the Turing Award in 2018 for his contributions to deep learning, together with Yoshua Bengio and Geoffrey Hinton, the latter of whom also won a Nobel Prize this year. LeCun regards AI as a powerful tool, but argues that today’s AI cannot be called intelligent in any meaningful sense. Nevertheless, many in the technology industry, especially AI startups, are credulously extrapolating its recent progress in absurd ways. He believes that building general-purpose AI may take decades and that today’s mainstream methods will not get us there. Large models simply predict the next word in a text; they benefit from enormous memory capacity and appear to reason, but are really just regurgitating information they were trained on.

Original text: https://tech.slashdot.org/story/24/10/13/2220258/ai-threats-complete-bs-says-meta-senior-research-who-thinks-ai-is-dumber-than-a-cat

Man learns of his breakup through an Apple AI text summary

New York programmer Nick Spreen learned of his breakup on Wednesday through the text-message summary feature of the Apple Intelligence beta running on his iPhone 15 Pro.
He shared the incident on social media: the AI had condensed multiple text messages from his girlfriend into a single summary announcing the breakup and a request to collect belongings from the apartment.
Apple announced Apple Intelligence in June this year and is currently conducting public beta testing.
Spreen was running the beta on his own iPhone. The feature works like a streamlined version of ChatGPT, reading the text messages a user receives and producing a condensed summary.

Original text: https://entertainment.slashdot.org/story/24/10/10/228207/man-learns-hes-being-dumped-via-dystopian-ai-summary-of-texts

Adobe begins rolling out generative AI video tools

On the 14th, local time, Adobe said it had begun publicly releasing an artificial intelligence model that generates video from text prompts, joining a growing number of companies trying to use generative AI to upend film and television production. The technology, called the Firefly Video Model, will compete with Sora, which OpenAI unveiled earlier this year.
Adobe will begin making the tool available to users on its waiting list, but did not disclose a specific release date. The company said it has also integrated a feature into its video-editing software Premiere that lets users extend video clips with generative AI.
Other tools available online allow users to create videos based on text prompts and existing images.

Links to the original articles are all in the description below the video.
Thank you for watching. If you enjoyed this video, please subscribe and give it a like.

YouTube:
