What Tech Leaders Get Wrong About AI Replacing Jobs in the Near Term

For more thoughts, clips, and updates, follow Avetis Antaplyan on Instagram: https://www.instagram.com/avetisantaplyan
In this episode of The Tech Leader's Playbook, Avetis Antaplyan sits down with Dr. Craig Kaplan, an AI pioneer, founder of iQ Company, and a four-decade veteran of artificial intelligence and collective intelligence systems, for a wide-ranging conversation on where AI is actually headed and why most people are still underestimating what is coming. Dr. Kaplan traces the history of AI from its roots in symbolic reasoning and machine learning to today's agentic systems, explaining why the shift from AI as a tool to AI as a worker is such a major turning point.

He shares lessons from building PredictWallStreet, a collective intelligence platform that used signals from millions of retail investors to power a top-ranked hedge fund, and uses that story to argue that communities of agents may become more powerful than any single model. The discussion also dives into jobs, entrepreneurship, AI-driven productivity, superintelligence, and the growing risk of building powerful black-box systems without enough transparency or alignment. Perhaps most compellingly, Dr. Kaplan makes the case that the future of AI safety is not only in the hands of researchers, but in the behavior, values, and data humans feed these systems every day.
Takeaways
- AI did not appear overnight. Its formal roots go back to the 1956 Dartmouth conference, with major eras including symbolic AI, machine learning, and now agentic AI.
- The biggest shift now is from AI as a tool to AI as a worker that can use tools and take action on a user’s behalf.
- In collective systems, even “bad” or inaccurate inputs can become valuable if they are consistent and can be weighted, filtered, or inverted intelligently.
- Entry-level cognitive work is already under pressure, while top performers and people with rare, non-commoditized knowledge still hold an edge, at least for now.
- AI safety becomes urgent because today’s systems are often black boxes, making them powerful but hard to predict, govern, or reliably align with human values.
- Dr. Kaplan believes safer AI will come from more transparent, democratic, collective-intelligence-style architectures rather than monolithic black-box systems.
Chapters
00:00 Intro and why AI should be thought of as a worker, not just a tool
01:29 Dr. Craig Kaplan’s early path into AI and the field’s history
03:26 What people miss about the decades of groundwork behind today’s AI boom
05:05 The signals that show AI is entering a new phase
07:02 Why agentic AI is so powerful for entrepreneurs and small teams
08:24 The origin story behind PredictWallStreet and collective intelligence
12:42 How crowd wisdom works, and how noise can still produce signal
16:09 The long-term trajectory from narrow AI to AGI to superintelligence
19:13 Why communities of agents may outperform any single model
22:09 Jobs, competition, and what happens to human work as AI improves
29:21 Why only exceptional human expertise may remain defensible
34:00 Did humanity create the conditions for AI to replace so much labor?
40:49 Why trade jobs may be safer in the short term than white-collar roles
42:30 AI safety, existential risk, and why black-box systems are dangerous
47:13 A safer alternative: transparent, democratic, collective AI systems
50:33 What ordinary people and business leaders can do right now
54:29 The book and core idea that shaped Dr. Kaplan’s thinking on reason and values
57:12 Final message: if AI is our child, we need to teach it well
58:50 Closing thoughts and outro
Craig Kaplan’s Social Media Link:
https://www.linkedin.com/in/craigakaplan/
Craig Kaplan’s Website Link:
Resources and Links:
https://www.hireclout.com