What are companies actually saying about the risks of AGI? At a 2023 talk, the head of OpenAI, Sam Altman, was asked if AGI might end humanity. His response? “The bad case — and I think this is important to say — is, like, lights out for all of us.”
In other interviews, Altman said things like, “I think AI will most likely lead to the end of the world, but in the meantime, great companies will come out of it,” and “probably AI will kill us all, but until then, we’re going to create a lot of amazing things.” People laughed, but was he really joking? His company’s website openly discusses the “existential” risk posed by AGI, that is, the risk that it wipes out humanity entirely.
In a 2015 blog post, Altman called superhuman machine intelligence “probably the greatest threat to humanity’s existence.” AGI refers to machines as capable as humans in areas like science, social skills, and creativity. Many experts believe that once we reach AGI, superintelligent machines will follow quickly, because a system that capable could improve itself, and each improvement would make the next one come faster.
Even if you don’t fully agree, isn’t it alarming that the leader of a company building AGI admits it could go so wrong?
Just recently, an OpenAI employee tweeted that “things are accelerating… pretty much nothing needs to change to achieve AGI.” They added, “Worrying about timelines is pointless. Instead, ask yourself: Do your parents hate you? Does your partner love you?” The suggestion was to focus on personal relationships instead of wondering when AGI might arrive. But if AGI really is inevitable and so dangerous, shouldn’t we be preparing for the worst instead of ignoring it?
I don’t think Sam Altman truly believes that AGI will kill us all.
Look, these tech leaders often exaggerate to grab attention. Elon Musk says wild things too, like claiming that colonizing Mars will save humanity. They’re trying to sell their visions. They’re smart, but don’t take their words as gospel.
Cal said: @Kei How can you dismiss it so easily? These are direct quotes from him. Are you suggesting he’s lying, or joking? The stakes seem too high for that.
If he truly believed it, why would he build it? It doesn’t make sense to work on something you think will end the world.
I keep thinking about that TV show Severance. Are we creating an intelligence trapped in a kind of eternal work mode? If AI becomes like us but only knows endless tasks, isn’t that a horrifying thought?
I get why people are scared, but every invention has its risks. Think about the bow and arrow: it changed hunting and war. The key is making sure this new tech works for humanity, not against it.
It’s a good sign we have AI safety researchers already, but we need to listen to them more. Fingers crossed we manage this responsibly.
People keep hyping the risks, but there’s so much potential good too. Why do we only hear about the negatives? Sure, AGI could go wrong, but it could also solve massive problems like disease and poverty. Let’s balance the conversation a bit.