What do you think about the race to create Artificial General Intelligence? Thoughts and snippets

What are companies actually saying about the risks of AGI? At a 2023 talk, the head of OpenAI, Sam Altman, was asked if AGI might end humanity. His response? “The bad case — and I think this is important to say — is, like, lights out for all of us.”

In other interviews, Altman said things like, “I think AI will most likely lead to the end of the world, but in the meantime, great companies will come out of it,” and “probably AI will kill us all, but until then, we’re going to create a lot of amazing things.” People laughed, but was he really joking? His company’s website openly talks about the “existential” risks of AGI, which could wipe out humanity entirely.

In a 2015 blog post, Altman called superhuman machine intelligence “probably the greatest threat to humanity’s existence.” AGI refers to machines as capable as humans across domains like science, social skills, and creativity. Many experts believe that once we achieve AGI, superintelligent machines will follow quickly, because such systems will be able to improve themselves at a rapid pace.

Even if you don’t fully agree, isn’t it alarming that the leader of a company building AGI admits it could go so wrong?

Just recently, a person at OpenAI tweeted that “things are accelerating… pretty much nothing needs to change to achieve AGI.” They added, “Worrying about timelines is pointless. Instead, ask yourself: Do your parents hate you? Does your partner love you?” They suggest focusing on personal relationships instead of wondering when AGI might arrive. But if AGI really is inevitable and so dangerous, shouldn’t we be preparing for the worst instead of ignoring it?

Forum Discussion Guidelines

Let’s keep things thoughtful and respectful:

  • Posts should have enough detail to spark real discussion.
  • Check if your question has been discussed before – it helps everyone.
  • Debates about the good and bad of AI are welcome, but don’t forget to provide links for evidence.
  • Wild speculation (like claiming AI will bring the apocalypse) doesn’t help much.

Thanks for contributing and let us know if you need help with anything!

I don’t think Sam Altman truly believes that AGI will kill us all.

Look, these tech leaders often exaggerate to grab attention. Elon Musk says wild things too, like how colonizing Mars will save humanity. They’re just trying to sell their visions. They’re smart, but don’t take their words as gospel.

@Kei
How can you dismiss it so easily? These are direct quotes from him. Are you suggesting he’s lying or joking? The stakes seem too high for that.

Cal said:
@Kei
How can you dismiss it so easily? These are direct quotes from him. Are you suggesting he’s lying or joking? The stakes seem too high for that.

If he truly believed it, why would he build it? It doesn’t make sense to work on something you think will end the world.

@Kei
Oppenheimer worked on the bomb, knowing it could destroy the world. People have always pushed forward despite risks to humanity.

Keenan said:
@Kei
Oppenheimer worked on the bomb, knowing it could destroy the world. People have always pushed forward despite risks to humanity.

The bomb was aimed at enemies. AGI doesn’t pick sides; it’s a different kind of risk altogether.

@Kei
But what if some believe AGI will only ‘kill our enemies’? The same flawed logic could be driving this too.

I keep thinking about that TV show Severance. Are we creating an intelligence trapped in a kind of eternal work mode? If AI becomes like us but only knows endless tasks, isn’t that a horrifying thought?

I get why people are scared, but every invention has its risks. Think about the bow and arrow: it changed hunting and war. The key is making sure this new tech works for humanity, not against it.

It’s a good sign that we already have AI safety researchers, but we need to listen to them more. Fingers crossed we manage this responsibly.

@Joss
Control is the real issue. What happens if we lose it?

People keep hyping the risks, but there’s so much potential good too. Why do we only hear about the negatives? Sure, AGI could go wrong, but it could also solve massive problems like disease and poverty. Let’s balance the conversation a bit.

@Hayes
Skepticism isn’t narrow thinking; it’s just being cautious. When even AGI creators say it might be dangerous, we need to take it seriously.

Also, where are you getting these ‘1 in 5’ odds? That seems completely made up. Can you share a source?

@Cal
1 in 5 odds? That’s terrifying if true. And if made up, it’s just reckless fear-mongering.