
AGI could emerge by 2030… and cause our extinction, according to Google DeepMind

Google DeepMind's latest 145-page report reads like a science fiction scenario about the future of humanity. Yet with weekly announcements from the biggest players in artificial intelligence, and the emergence of newcomers like Manus.ai, leading AI experts now claim that AGI (Artificial General Intelligence) could arrive within the next five years.

Although the concept of "general" artificial intelligence remains vague, researchers at Google DeepMind, the lab behind Google's Gemini AI, recently claimed that the technology could reach maturity in the very near future. Unfortunately, this prospect has stirred not enthusiasm among these researchers but deep concern, one that could reshuffle the deck...

AGI could "destroy humanity" according to Google DeepMind

AI tools today already let us write code and generate images, and Amazon promises they will soon make purchases on our behalf; tomorrow, they could also decide our future on Earth.

That, at least, is what Google DeepMind researchers argue, including Shane Legg, one of the company's co-founders. Although the report does not specify exactly how AGI could cause "serious damage" to humanity, its authors want to be far-sighted.

To that end, they propose that other AI research laboratories implement strict measures to avoid the extinction of humanity.

AI regulation in the coming years?

Last March, Demis Hassabis, CEO of DeepMind, told NBC News that artificial general intelligence should arrive within five to ten years, adding that 2030 was currently the most likely date.

This timeline, taken together with Google DeepMind's latest report, is particularly worrying according to the American company's laboratory. According to the researchers, although artificial intelligence currently helps advance research, it can also have harmful effects.

The Google DeepMind study classifies these into four categories: "misuse," where individuals deliberately turn AI to harmful ends; "misalignment," where a system causes the same kind of harm unintentionally; "mistakes," stemming from design errors; and structural risks that could create conflicts between organizations, countries and, by extension, AIs.

To avoid a catastrophic scenario, Google DeepMind researchers hope to convince the leading AI experts to adopt a risk mitigation strategy. Unfortunately, the idea is far from unanimously supported: Anthony Aguirre, executive director of the Future of Life Institute, told Fortune that this "admirable effort" is not sufficient to address the risks.
