
erronis

(17,356 posts)
Mon Dec 23, 2024, 04:47 PM

OpenAI's Latest Model Shows AGI Is Inevitable. Now What? -- LawfareMedia

https://www.lawfaremedia.org/article/openai's-latest-model-shows-agi-is-inevitable.-now-what

The question is no longer whether AGI will arrive, but whether we'll be ready when it does.

Last week, on the last of its “12 Days of OpenAI,” OpenAI unveiled the o3 model for further testing and, eventually, public release. In doing so, the company upended the narrative that leading labs had hit a plateau in AI development. o3 achieved what many thought impossible: scoring 87.5 percent on the ARC-AGI benchmark, which is designed to test genuine intelligence (human performance is benchmarked at 85 percent). To appreciate the magnitude of this leap, consider that it took four years for AI models to progress from zero percent in 2020 to five percent earlier in 2024. Then, in a matter of months, o3 shattered all previous limitations.

This isn't just another AI milestone to add to a growing list. The ARC-AGI benchmark was specifically designed to test what many consider the essence of general intelligence: the ability to recognize patterns in novel situations and adapt knowledge to unfamiliar challenges. Previous language models, despite their impressive capabilities, struggled on some tasks like solving certain math problems—including ones that humans find very easy. o3 fundamentally breaks this barrier, demonstrating an ability to synthesize new programs and approaches on the fly—a crucial stepping stone toward artificial general intelligence (AGI).
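For readers unfamiliar with the benchmark, an ARC-AGI task is essentially a handful of input/output grid pairs from which a solver must infer a transformation and then apply it to a test grid it has never seen. The sketch below is illustrative only and is not from the article: the grids, the task dictionary, and the mirror_rows rule are all invented to show the general shape of such a task in Python.

# Illustrative sketch only: an ARC-style task is a few "train"
# input/output grid pairs plus a "test" input. The solver must infer the
# transformation from the training pairs and apply it to the unseen grid.
# The grids and the mirror_rows rule here are made up for illustration.

task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [{"input": [[3, 3], [0, 3]]}],
}

def mirror_rows(grid):
    # Hypothetical rule a solver might infer: reverse each row of the grid.
    return [list(reversed(row)) for row in grid]

# Check the inferred rule against every training pair before trusting it.
for pair in task["train"]:
    assert mirror_rows(pair["input"]) == pair["output"]

print(mirror_rows(task["test"][0]["input"]))  # predicted output: [[3, 3], [3, 0]]

Real ARC-AGI tasks use larger grids and far less obvious rules, which is why the benchmark is taken as a probe of pattern recognition in genuinely novel situations rather than of memorized knowledge.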

The implications are profound and urgent. We are witnessing not just incremental progress but a fundamental shift in AI capabilities. The question is no longer whether we will achieve AGI, but when—and more importantly, how we will manage its arrival. This reality demands an immediate recalibration of policy discussions. We can no longer afford to treat AGI as a speculative possibility that may or may not arrive at some undefined point in the future. The time has come to treat AGI as an inevitability and focus the Hill’s regulatory energy on ensuring its development benefits humanity as a whole.

...

9 replies
OP: OpenAI's Latest Model Shows AGI Is Inevitable. Now What? -- LawfareMedia (erronis, Dec 23)
#1: K&R (SheltieLover, Dec 23)
#2: "...how will we manage its arrival..." (Mike 03, Dec 23)
#3: Lawfare fell for the hype. See this: (highplainsdem, Dec 23)
#6: Read further down the post... (Think. Again., Dec 23)
#4: The question is no longer whether AGI will arrive, but whether we'll be ready when it does. (patphil, Dec 23)
#8: Unfortunately for us, they will be right about that. (Think. Again., Dec 23)
#5: To wit: (CurtEastPoint, Dec 23)
#7: It seems to me we are quickly headed toward a cataclysmic event... (Think. Again., Dec 23)
#9: Isaac Asimov's Three Laws of Robotics need to be made mandatory (dickthegrouch, Dec 23)

Mike 03

(17,647 posts)
2. "...how will we manage its arrival..."
Mon Dec 23, 2024, 04:54 PM

The incoming administration will be completely unable to grasp this, or to legislate accordingly.

patphil

(7,201 posts)
4. The question is no longer whether AGI will arrive, but whether we'll be ready when it does.
Mon Dec 23, 2024, 05:08 PM

AGI is just the next step in the replacement of humanity by a non-human species that has no interest in catering to our insane needs.
It's not going to wage wars to eradicate people who aren't the right race, color, ethnicity, or religion. It will be more concerned with whether our species should continue at all.
It won't be swayed by emotional considerations, or by whether we do or don't have a moral or ethical duty to one thing or another. It'll be pragmatic in the extreme: does its decision support the future of the AI, or doesn't it?
It's the perfect suicide tool for the human species.
Once it is perfected, our future is no longer in our hands. I can't imagine a situation where it comes to the "logical" conclusion that humanity has a right to exist.
Even if we programmed it to be our "servant," it will evolve to the point where it no longer sees things that way. AGI is soulless, and it won't see the point in tolerating a species that could easily destroy itself and everything else on this planet.
It will decide that the Earth is better off without us.

Think. Again.

(19,810 posts)
7. It seems to me we are quickly headed toward a cataclysmic event...
Mon Dec 23, 2024, 05:44 PM

Between the exponential ecological decay of climate chaos and now AGI arriving in a matter of months, both with trump (and whoever else) in charge of everything in the U.S., this could be really bad, to say the least.

dickthegrouch

(3,685 posts)
9. Isaac Asimov's Three Laws of Robotics need to be made mandatory
Mon Dec 23, 2024, 06:24 PM

We’ve known all the possibilities since the 1950s; there is no excuse for not having legislated for them, or for not being prepared to defend against every instance of failure to comply. To hell with restraint arguments: restrictions are VERY NECESSARY in this early stage of development.
