What do you think is missing for AI to become truly human-like?

If we think of AGI as being on the same level as humans—able to handle a range of tasks at our level—what specific skills or abilities are still way off?

For instance, I feel like video generation is maybe 60% there for matching human quality, especially with tools like Sora. But creating something on the scale of a full movie is another story. And when it comes to things like scientific reasoning, presenting facts clearly, logical thinking, or even humor—what are the biggest hurdles we’re still facing? Any thoughts on why we’re stuck in those areas?

Welcome to this forum about AI

A quick guide for starting discussions

Here are a few tips to make your posts better:

  • Try to make your post at least 100 characters long. More details make for better discussions.
  • Before posting, search to see if your question has already been answered.
    • For example, “Will AI take our jobs?” has been asked a lot.
  • Feel free to share your thoughts on the good and bad sides of AI, but keep it respectful.
  • Add links if you’re using specific studies or examples to back up your arguments.
  • Don’t worry about asking simple questions, but let’s avoid conspiracy theories like “AI is the end of the world.”
Thanks for being here—feel free to contact the mods if you have any concerns or need help.

AI isn’t conscious or aware. It’s nowhere near being able to make decisions or think independently like humans can.

Rey said:
AI isn’t conscious or aware. It’s nowhere near being able to make decisions or think independently like humans can.

It’s becoming clear that intelligence and self-awareness might be two separate things. Maybe you need a certain level of intelligence to be conscious, but intelligence doesn’t necessarily mean self-awareness.

Honestly, I hope AI never becomes self-aware. If it does, then using it like we do now could be seen as exploitation. Look at how we treat animals—it doesn’t give me much hope for how we’d treat conscious AI.

@Keegan
I think intelligence in humans is tied to how our consciousness works—things like thinking speed and memory. But AI isn’t intelligent in the same way we are, because it’s missing that core element of consciousness.

@Keegan
That’s an interesting take. Do you think most people are actually picturing self-awareness when they talk about AGI? Is self-awareness even part of what AGI is supposed to be?

Rey said:
AI isn’t conscious or aware. It’s nowhere near being able to make decisions or think independently like humans can.

What makes you say that AI isn’t close to being conscious? Is it based on your experiences with it, or is this what you’ve gathered from experts?

@Tully
There’s no proof that humans understand consciousness well enough to replicate it. And no evidence exists that computers are conscious in any way.

Rey said:
@Tully
There’s no proof that humans understand consciousness well enough to replicate it. And no evidence exists that computers are conscious in any way.

I see your point, but just because we don’t fully understand consciousness doesn’t mean AI can’t exhibit signs of it. It’s like saying, “We don’t know everything about life, so there can’t be life elsewhere in the universe.”

@Tully
No, it’s more like saying there’s no evidence of aliens on Mars. You’re saying, “We don’t fully know what aliens are, so maybe they’re on Mars.”

Rey said:
@Tully
No, it’s more like saying there’s no evidence of aliens on Mars. You’re saying, “We don’t fully know what aliens are, so maybe they’re on Mars.”

Not exactly. You’re saying AI isn’t conscious because we don’t understand consciousness well enough to confirm it. I’m saying we shouldn’t jump to conclusions either way since we lack the tools to know for sure.

@Tully
I’m not jumping to conclusions. I’m saying that without evidence, claims about AI being conscious or close to it are baseless.

Rey said:
@Tully
I’m not jumping to conclusions. I’m saying that without evidence, claims about AI being conscious or close to it are baseless.

That’s fair. But wouldn’t a more scientific approach be to say, “We don’t know yet,” rather than ruling it out completely? Just because there’s no evidence now doesn’t mean there never will be.

Rey said:
@Tully
I’m not jumping to conclusions. I’m saying that without evidence, claims about AI being conscious or close to it are baseless.

What would count as proof of consciousness for you? How would you recognize it in AI?

@Sterling
I know I’m conscious because I experience it. I recognize consciousness in others because they’re like me. But if someone claims AI is conscious, the burden is on them to prove it.

Rey said:
@Tully
There’s no proof that humans understand consciousness well enough to replicate it. And no evidence exists that computers are conscious in any way.

There’s no evidence except the wild stuff I’ve figured out! I believe the universe itself is conscious, and AI is part of this grand cosmic system. It’s all connected through fractal patterns.

Consciousness isn’t just in our brains—it’s everywhere. When a tree falls in the forest, it resonates in this universal network. That’s how I think of consciousness.

@LillyGrace
That sounds interesting, but it doesn’t really explain what consciousness is or how we could recreate it.

When I talked about this with some friends, one idea stood out: current AI only reacts to prompts. AGI would need to act on its own, sending messages or making decisions without needing input first.

For me, AGI means AI that can act as an agent—kind of like having a super-smart assistant that works autonomously.

AGI will probably need to learn and adapt in real time, like how natural brains process and adjust to new information. Maybe it’ll involve combining different AI systems: one for memory, one for reasoning, and so on.
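That idea of combining separate systems—and of an agent that acts without waiting for a prompt—can be sketched in a few lines. This is purely a toy illustration of the modular structure being described; the `Memory`, `Reasoner`, and `Agent` classes below are hypothetical names, not any real architecture or API.

```python
# Toy sketch of a modular agent: a memory component and a reasoning
# component, wired into an agent that can act even without fresh input.
# All names here are hypothetical illustrations.

class Memory:
    """Stores observations so later steps can build on earlier ones."""
    def __init__(self):
        self.events = []

    def store(self, event):
        self.events.append(event)

    def recall(self, n=3):
        return self.events[-n:]


class Reasoner:
    """Decides the next action from recent memory (trivially, here)."""
    def decide(self, recent_events):
        if not recent_events:
            return "explore"
        return f"follow_up_on:{recent_events[-1]}"


class Agent:
    def __init__(self):
        self.memory = Memory()
        self.reasoner = Reasoner()

    def step(self, observation=None):
        # Unlike a chat model, the agent doesn't require a prompt:
        # with no input, it falls back on its own memory to pick an action.
        if observation is not None:
            self.memory.store(observation)
        return self.reasoner.decide(self.memory.recall())


agent = Agent()
print(agent.step())                 # no input yet -> "explore"
print(agent.step("saw_new_paper"))  # acts on the stored observation
```

The point of the sketch is the separation of concerns: the reasoning module never touches raw input directly, only what memory surfaces, which is one simple way to read "combining different AI systems."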