I don’t think LLMs can ever lead to AGI. They’re amazing tools, but they’re not the right foundation for true general intelligence.
You bring up great points about the gaps in humor and scientific reasoning; those areas really highlight how hard it is to replicate human thinking. What do you think needs to change for AI to close those gaps?
I think GPT-4 is already an AGI in some respects, given that it can reason across a remarkably wide range of topics.
That said, to make AI more human-like, it would need to keep learning continuously, not just during a fixed training phase. It would also need the ability to think without being prompted: a kind of “always on” state (see the sketch below).
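To make the “always on” idea concrete, here’s a toy Python sketch. Everything in it (ToyModel, AlwaysOnAgent, and their methods) is invented for illustration; it’s not a real library or anyone’s actual architecture, just the shape of the idea:

```python
class ToyModel:
    """Stand-in for a model whose weights can still change after training."""
    def __init__(self):
        self.knowledge = {}  # stand-in for learned weights

    def generate(self, context):
        # Produce a "thought" from recent context (toy behavior only).
        return f"thought about {context[-1] if context else 'nothing'}"

    def update(self, context, thought):
        # Continual learning: fold new experience back into the model,
        # instead of freezing it after pretraining.
        self.knowledge[thought] = len(context)


class AlwaysOnAgent:
    """Keeps thinking between prompts and learns from every step."""
    def __init__(self, model):
        self.model = model
        self.memory = []  # rolling record of observations and past thoughts

    def step(self, observation=None):
        if observation is not None:
            self.memory.append(observation)
        thought = self.model.generate(self.memory)
        self.memory.append(thought)
        self.model.update(self.memory, thought)  # learning never stops
        return thought


agent = AlwaysOnAgent(ToyModel())
print(agent.step("the kettle is boiling"))  # prompted step
print(agent.step())                         # unprompted "idle" thought
```

The point of the second call is the “always on” part: the agent produces a thought with no prompt at all, and both steps update the model.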
Finally, spatial reasoning is a big gap: AI should be able to model the physical world in its “mind,” the way we do (a second sketch follows).
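And here’s a toy sketch of what “modeling the physical world in its mind” might mean mechanically. MentalMap and its methods are hypothetical, purely for illustration:

```python
class MentalMap:
    """A toy internal world model: track objects, then imagine them forward."""
    def __init__(self):
        self.positions = {}   # object -> (x, y)
        self.velocities = {}  # object -> (dx, dy)

    def observe(self, obj, pos, vel=(0, 0)):
        self.positions[obj] = pos
        self.velocities[obj] = vel

    def predict(self, obj, steps=1):
        # "Thinking spatially": extrapolate with no new sensory input.
        x, y = self.positions[obj]
        dx, dy = self.velocities[obj]
        return (x + dx * steps, y + dy * steps)


world = MentalMap()
world.observe("ball", pos=(0, 0), vel=(1, 2))
print(world.predict("ball", steps=3))  # (3, 6): the ball, imagined forward
```

The prediction happens entirely “in the head,” which is the capability current models mostly lack.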
It’s hard to gauge how close we are to AGI or how much work remains. We could be one small tweak away, or we might need entirely new breakthroughs to get there.
Bridging these gaps will likely require new training approaches that let AI learn continuously and generalize more robustly.