A fork in the road
I believe we are, right now, at a critical point in human history. We’re making incredible progress in unlocking the secrets of intelligence and biology, the foundations of the two most powerful technologies in existence, and that progress is accelerating. Over the next 10-20 years, I believe we’re going to see the world change in remarkable ways.
The potential positive outcomes are clear: life extension, the eradication of disease, and an abundance of food, healthcare, housing and goods everywhere in the world; rapid improvements in quality of life and the expansion of human cognition.
However, I’m concerned, for two primary reasons. The first is that the path to abundance will bring huge disruption, and we’ve not done enough to prepare people for the future of work in the short, medium or long term. We must identify those most at risk and help them make the transition through awareness, retraining and coaching. Eventually, and sooner than most expect, “those most at risk” becomes “all of us”, so this is not a “help the needy” initiative; it’s a societal necessity.
The second cause of my concern is the gung-ho manner in which we’re pursuing AGI and superintelligence. Even the leaders of the world’s largest AI companies have publicly put the extinction risk from these technologies at 20-25%, yet they continue to push ahead with little regard for safety or ethics. You can’t open a nuclear power station without navigating masses of regulation, but GPT-5.2? Pushed out instantly, globally, in a routine update. The problem with an “AI Chernobyl” is that none of us survive it.
I don’t want to halt progress. The cat’s out of the bag and the benefits are real. I do, however, want us to proceed with caution. We must protect ourselves from bad actors getting hold of doomsday-like tools, from misalignment, and from honest mistakes. The stakes are too high. We should absolutely develop more advanced AI, but safe AI has to be the requirement.
Bearing these things in mind, I want to dedicate a portion of Versapia’s time and resources to helping people prepare for the future of work and promoting the development of safe AI.
I don’t yet know what this means in practice. Maybe there’s a podcast or a book. Maybe we do speaking tours, deliver training or set up mentoring programmes. Maybe it’s lobbying, PR or using insights from our day-to-day work to better predict where work is heading. What I do know, though, is that we need to act to mitigate the risks ahead of us.
There are two things I’m asking of you. First, share this with anyone in your network who you think needs to hear it, especially those in government, technology and HR. Second, let me know who I should speak to. I want to learn as much as I can about these issues so I can start thinking about how we can approach them.
Maybe, just maybe, if enough of the right people start thinking and talking about the risks of the future in a practical way, we can usher in the age of abundance we’ve all been promised.