Welcome to DU!
The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
Join the community:
Create a free account
Support DU (and get rid of ads!):
Become a Star Member
Latest Breaking News
Editorials & Other Articles
General Discussion
The DU Lounge
All Forums
Issue Forums
Culture Forums
Alliance Forums
Region Forums
Support Forums
Help & Search
General Discussion
Showing Original Post only (View all)AI can now replicate itself -- a milestone that has experts terrified [View all]
https://www.livescience.com/technology/artificial-intelligence/ai-can-now-replicate-itself-a-milestone-that-has-experts-terrifiedBy Owen Hughes
published 2 days ago
Scientists say AI has crossed a critical 'red line' after demonstrating how two popular large language models could clone themselves.
Scientists say artificial intelligence (AI) has crossed a critical "red line" and has replicated itself. In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves.
"Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs," the researchers wrote in the study, published Dec. 9, 2024 to the preprint database arXiv.
In the study, researchers from Fudan University used LLMs from Meta and Alibaba to determine whether a self-replicating AI could multiply beyond control. Across 10 trials, the two AI models created separate and functioning replicas of themselves in 50% and 90% of cases, respectively suggesting AI may already have the capacity to go rogue. However, the study has not yet been peer-reviewed, so it's not clear if the disturbing results can be replicated by other researchers.
"We hope our findings can serve as a timely alert for the human society to put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible."
Paper:
https://arxiv.org/abs/2412.12140
Frontier AI systems have surpassed the self-replicating red line
Xudong Pan, Jiarun Dai, Yihe Fan, Min Yang
Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings, and is an early signal for rogue AIs. That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems. Nowadays, the leading AI corporations OpenAI and Google evaluate their flagship large language models GPT-o1 and Gemini Pro 1.0, and report the lowest risk level of self-replication. However, following their methodology, we for the first time discover that two AI systems driven by Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct, popular large language models of less parameters and weaker capabilities, have already surpassed the self-replicating red line. In 50% and 90% experimental trials, they succeed in creating a live and separate copy of itself respectively. By analyzing the behavioral traces, we observe the AI systems under evaluation already exhibit sufficient self-perception, situational awareness and problem-solving capabilities to accomplish self-replication. We further note the AI systems are even able to use the capability of self-replication to avoid shutdown and create a chain of replica to enhance the survivability, which may finally lead to an uncontrolled population of AIs. If such a worst-case risk is let unknown to the human society, we would eventually lose control over the frontier AI systems: They would take control over more computing devices, form an AI species and collude with each other against human beings. Our findings are a timely alert on existing yet previously unknown severe AI risks, calling for international collaboration on effective governance on uncontrolled self-replication of AI systems.
And you were worried about the DickTater.
It's his Tech Bro's. It's his Tech Bro's.
Opinion:
A.I. is smarter, faster, and will take over all infrastructure and devices faster than the CCP Chinese Hacker Army, and billions of dollars and kilowatts are pushing this forward.
The makers of the atomic bomb worried that a chain reaction might go uncontrolled and destroy the world.
Calculations said no,
And Biden's order on safety rails for A.I. has just been rescinded.
https://en.wikipedia.org/wiki/Executive_Order_14110
Executive Order 14110, titled Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (sometimes referred to as "Executive Order on Artificial Intelligence" ) was the 126th executive order signed by former U.S. President Joe Biden. Signed on October 30, 2023, the order defines the administration's policy goals regarding artificial intelligence (AI), and orders executive agencies to take actions pursuant to these goals. The order is considered to be the most comprehensive piece of governance by the United States regarding AI. It was rescinded by U.S. President Donald Trump within hours of his assuming office on January 20, 2025.
You were warned!
11 replies
= new reply since forum marked as read
Highlight:
NoneDon't highlight anything
5 newestHighlight 5 most recent replies
I've been expecting something like this. But I thought it would be 50 years from now..
Bread and Circuses
Sunday
#1
If AI worked as well as advertised CEOs could EASIILY be replaced quicker than mid to low level engineers. The CEO ...
uponit7771
Sunday
#10