Over the long weekend, I read Sam Altman’s recent article titled “The Intelligence Age.” To set the scene, Mr. Altman is the co-founder and CEO of OpenAI, the company behind ChatGPT, arguably the most well-known Artificial Intelligence service in the world and the fastest-growing consumer application in history, reaching 100 million users in just two months.
Initially, I had no intention of writing about the piece, but two quotes kept disturbing me: his references to AI driving “the next leap in prosperity” and “shared prosperity.”
For context, I am a GenAI early adopter, and I’ve started training business people on how to use, adopt, and benefit from AI in a simple, cost-effective manner, while also planting the seeds of longer-term strategic thinking in small to medium-sized businesses. I am not an “AI is hype” proponent. I see great value in it and regularly use ChatGPT, among other AI tools.
The motivation behind setting up AdvisorAi was a genuine concern that most business people—be they owners, directors, managers, or staff—are ill-prepared for the seismic changes rapidly approaching. My goal is to help ensure Australia’s thriving small to medium-sized business community survives and thrives. I don’t want to see mega-corporations with huge AI budgets pushing out local businesses, leaving oligopolies that choose AI over labour to deliver most of our products and services.
In essence, while I see huge value in AI now (Google's AlphaFold and AI-based weather prediction systems are great examples) and in the future, and I agree with Sam Altman’s point that “deep learning worked” and will continue to improve at an incredible rate, I still question the prevailing narratives—especially around “shared prosperity.”
Conflicts of interest abound. Big tech companies must walk a tightrope: highlighting the upside to raise capital without upsetting the “horses”, namely us, the public and the regulators. It seems many journalists and politicians are not asking the hard questions with the urgency the situation demands.
We have numerous elephants in the room, from the combination of increasingly intelligent AI with mobile phones and social media affecting younger generations (Jonathan Haidt probably hasn’t slept in days), to AI becoming the source of truth and the educator of our children and communities. Then there are two of the biggest issues: AI’s potential role in growing underemployment and unemployment.
It’s likely Artificial General Intelligence (AGI) will be achieved in the coming years, and AI is already being deployed within corporations. Even if AGI isn’t achieved as quickly as some predict, it doesn’t matter. A Sydney businessperson who loses their company to an AI-enabled competitor won’t care whether the AI is at AGI level or not. The shortening timelines and the personal and societal impacts are what matter, not the technical definitions.
The concept of “shared prosperity” seems highly unlikely, especially in the early phases of AI transition. Many people’s working hours will be reduced, or they will lose their jobs. Hypothetically, if a team consists of 15 call centre operators, and 10 operators with AI skills can do the work of 15, will the team grow, stay the same, or shrink?
OpenAI is reportedly on track to lose over $4 billion this year, so it won’t be in a position to compensate the unemployed or underemployed. It is also moving away from its not-for-profit origins toward a for-profit structure.
OpenAI recently raised over $6 billion from investors. Do you think those investors want a high return on their investment, or will they be happy to share profits with those adversely affected by AI? And do you think U.S. lawmakers—especially proponents of small government—will quickly adopt Universal Basic Income (UBI) measures to assist those impacted? (UBI is a topic for another day, but it highlights the broader societal challenges that need to be addressed).
Additionally, will shareholders want to see the price of OpenAI’s services driven down to near-zero? They’ll likely push for profitability, meaning AI will be sold to businesses with a simple pitch: “buy this technology and save on labour costs.”
These supply-side questions are rarely asked in the media (and when they are, usually only superficially, in passing), and it’s even rarer that demand-side considerations are discussed. What will happen to consumer sentiment and spending in the face of rising unemployment? Who will consume the extra productivity?
Microsoft and Nvidia are highly profitable, earning billions every quarter. Do you think they will soon share their profits with the unemployed in Sydney, the homeless in California, or even those in the developing world?
The benefits of AI will not be shared evenly—if at all. The rate of AI evolution is astonishing and will transform the foundations of business and employment within the next two election cycles in the US, or three in Australia, if not earlier.
Will AI benefit or hinder humanity? It will do both, depending on who you are, where you are, and how prepared you are.
I hope I’m wrong and that our governments have societal continuity plans in place, with defined triggers (e.g. if unemployment hits X or underemployment hits Y) to minimise human harm during the transition. Unfortunately, they don’t, but it should be a top priority (it is, however, good to see Australia directing some resources in this direction). We need think tanks of economists, intellectuals, philosophers, psychologists, teachers and citizen representatives working on this issue as a matter of urgency.
This is not science fiction. If big tech does what it says it will do, and invests the circa trillion dollars received to date to achieve those goals, significant business transformation is only a few years away.
I am an AI proponent and see its vast potential now and in the future, from curing diseases to alleviating suffering, to helping us address global warming. However, assuming the benefits will be shared and that the disruption is a long way off doesn’t help.
We need to collectively address AI’s challenges and its opportunities with a real sense of urgency, aiming to optimise and share its upside as widely as possible, as ever-increasing inequality benefits no one.
---
If you wish to discuss AI’s upside, educate your teams, or explore the opportunities and challenges ahead, feel free to reach out: michael@advisorai.com.au
PS: You can read Sam Altman’s original article on his personal website, ia.samaltman.com