Yes, machines learn. No, it doesn’t matter much
The trajectory looks inevitable.
Elon Musk, Nick Bostrom, and even OpenAI’s own founders have made careers out of warning about “superintelligent” systems far beyond human capability. The real anxiety isn’t raw power but alignment: whether such systems would continue to reflect human goals, or pursue their own. For now, the AI we have is still narrow. GPTs, autonomous agents, and other tools are bounded and dependent on human involvement. They don’t rewrite themselves at scale or self-replicate. They’re powerful, but they’re not the thing people like to fear.

The fear I’m pointing to, of AI exceeding its limits and escaping control, belongs to a hypothetical AGI or ASI. That doesn’t exist yet. And yet the concern is real, because history suggests that when capability arrives, control tends to lag. The trajectory looks inevitable, the warnings predictable, and the outcome more a question of timing than of principle.