General Discussion
Related: Editorials & Other Articles, Issue Forums, Alliance Forums, Region ForumsAI can now replicate itself -- a milestone that has experts terrified
https://www.livescience.com/technology/artificial-intelligence/ai-can-now-replicate-itself-a-milestone-that-has-experts-terrifiedBy Owen Hughes
published 2 days ago
Scientists say AI has crossed a critical 'red line' after demonstrating how two popular large language models could clone themselves.
Scientists say artificial intelligence (AI) has crossed a critical "red line" and has replicated itself. In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves.
"Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs," the researchers wrote in the study, published Dec. 9, 2024 to the preprint database arXiv.
In the study, researchers from Fudan University used LLMs from Meta and Alibaba to determine whether a self-replicating AI could multiply beyond control. Across 10 trials, the two AI models created separate and functioning replicas of themselves in 50% and 90% of cases, respectively suggesting AI may already have the capacity to go rogue. However, the study has not yet been peer-reviewed, so it's not clear if the disturbing results can be replicated by other researchers.
"We hope our findings can serve as a timely alert for the human society to put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible."
Paper:
https://arxiv.org/abs/2412.12140
Frontier AI systems have surpassed the self-replicating red line
Xudong Pan, Jiarun Dai, Yihe Fan, Min Yang
Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings, and is an early signal for rogue AIs. That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems. Nowadays, the leading AI corporations OpenAI and Google evaluate their flagship large language models GPT-o1 and Gemini Pro 1.0, and report the lowest risk level of self-replication. However, following their methodology, we for the first time discover that two AI systems driven by Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct, popular large language models of less parameters and weaker capabilities, have already surpassed the self-replicating red line. In 50% and 90% experimental trials, they succeed in creating a live and separate copy of itself respectively. By analyzing the behavioral traces, we observe the AI systems under evaluation already exhibit sufficient self-perception, situational awareness and problem-solving capabilities to accomplish self-replication. We further note the AI systems are even able to use the capability of self-replication to avoid shutdown and create a chain of replica to enhance the survivability, which may finally lead to an uncontrolled population of AIs. If such a worst-case risk is let unknown to the human society, we would eventually lose control over the frontier AI systems: They would take control over more computing devices, form an AI species and collude with each other against human beings. Our findings are a timely alert on existing yet previously unknown severe AI risks, calling for international collaboration on effective governance on uncontrolled self-replication of AI systems.
And you were worried about the DickTater.
It's his Tech Bro's. It's his Tech Bro's.
Opinion:
A.I. is smarter, faster, and will take over all infrastructure and devices faster than the CCP Chinese Hacker Army, and billions of dollars and kilowatts are pushing this forward.
The makers of the atomic bomb worried that a chain reaction might go uncontrolled and destroy the world.
Calculations said no,
And Biden's order on safety rails for A.I. has just been rescinded.
https://en.wikipedia.org/wiki/Executive_Order_14110
Executive Order 14110, titled Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (sometimes referred to as "Executive Order on Artificial Intelligence" ) was the 126th executive order signed by former U.S. President Joe Biden. Signed on October 30, 2023, the order defines the administration's policy goals regarding artificial intelligence (AI), and orders executive agencies to take actions pursuant to these goals. The order is considered to be the most comprehensive piece of governance by the United States regarding AI. It was rescinded by U.S. President Donald Trump within hours of his assuming office on January 20, 2025.
You were warned!
Bread and Circuses
(395 posts)Klarkashton
(2,674 posts)AI is a fraud .
usonian
(15,393 posts)I'm not versed in this enough to say how it's done.
An A.I. epidemiologist might.
Nobody's backing down yet. Just pouring more billions into it, and sucking down gigawatts.
My hoped-for miracle would be the collapse of AI and consequent freeing up of those gigawatts for home heating and cooling.
In a "real" economy, supply and demand would drive prices lower, but we haven't had such a thing in ages.
It's all managed for higher profits. Ask my electric bill. There's a glut of solar and other power in California and my rates about doubled.
uponit7771
(92,253 posts)... brings experience and competence and heuristic thought that can all be replaced by AI.
That's not what these guys are advertising, they're advertising replacing granular things doctors and software architects ... they're lying.
2naSalit
(94,651 posts)Let's keep it going and feed it all our resources.
dalton99a
(85,631 posts)LearnedHand
(4,363 posts)AI researchers: Hey, wonder what would happen if two LLMs merged and replicated themselves?
This is all we needed
Bladerunner here we come
colorado_ufo
(5,959 posts)He was extremely worried that this could happen. This does not surprise me. The speed at which technology has advanced can only hint at how quickly AI could replicate itself.