What impact will AI have?
I hope you read this post; it was really fun to write. I spent way too much time on the cons, so I'll try to come up with some pros in the coming week.
AI is a term that gets tossed about constantly, so much so that there is usually a scoff from the audience as soon as someone asks the "what do you think about the impact AI is going to have?" question at events.
It's a huge topic that I must admit I know fairly little about. From what I do currently know, there are two main arguments. The first argument for AI is its benefits in terms of productivity. Since the 2008 GFC, UK productivity growth has been very weak, averaging about 0.5% per year (well below the roughly 2% annual growth seen before the crisis); most of you, I'm sure, will have heard this called "the productivity puzzle". Could AI be the saviour of our stagnant productivity, by reducing the time required to retrieve information and replacing monotonous tasks? Or is it the other side of the argument that many fear will prevail? Will AI lead to the "creative destruction" of workers, as discussed by the 2025 Nobel Prize in Economics winners (Mokyr, Aghion, and Howitt)? There is another really interesting argument I would also like to explore in this blog, stemming from someone on TikTok who worried he might have been too "woke" in talking about the disruption AI has caused to the environment.
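To get a feel for why the productivity puzzle matters, here's a quick back-of-the-envelope sketch in Python. It just compounds the two growth rates mentioned above (0.5% post-GFC vs roughly 2% pre-crisis); the 16-year window is my own assumption, not a figure from any official source:

```python
# Rough sketch: how much output per hour "goes missing" when
# productivity grows at ~0.5% a year instead of the pre-2008 ~2%.
years = 2024 - 2008  # 16 years since the GFC (my own cut-off)

actual = 1.005 ** years      # ~0.5% growth per year, compounded
pre_crisis = 1.02 ** years   # ~2% growth per year, compounded

gap = (pre_crisis / actual - 1) * 100
print(f"Actual path: {actual:.2f}x, pre-crisis path: {pre_crisis:.2f}x")
print(f"Productivity is roughly {gap:.0f}% below the old trend")
```

Small annual differences compound into a big gap, which is why even a modest AI boost to yearly productivity growth would matter so much over a decade or two.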
Okay, it's 6:30 AM, let's see how much I can get done before the mailing list deadline (bear in mind I also need to proofread the mailing list and add a couple of points).
Exploration 1: What actually is AI?
This short two-minute video explains that AI generally refers to machines performing tasks that would normally require human intelligence, learning from datasets provided to them. There are three main types:
- Narrow AI: designed to perform a single task, a bit like the embedded systems from AS CS last year, such as fraud detection
- General-purpose AI: trained on large datasets; a foundation model that other systems can be built on
- AGI (artificial general intelligence): able to perform the full range of general tasks that humans can
So in simple terms: AI = machines that can learn, reason, and make decisions.
What stood out for me was that most modern AI relies on three things:
- Machine learning: systems that learn patterns from data instead of being explicitly programmed. Already, I'm thinking structural unemployment in the programming sector? The data is studied and then used to make judgements.
- Neural networks: models inspired by the brain, especially used in deep learning
- Large datasets + computing power: essential for training modern AI
Exploration 2: What are the benefits of AI?
- The AI market cap is worth around £400bn (almost double what it was in 2023)
- 77% of devices use some form of embedded AI
- And 90% of top-performing companies worldwide have some sort of "AI strategy", so there must be something to it
- In industries most exposed to AI, productivity growth has nearly quadrupled, from around 7% over 2018-2022 to about 27% over 2018-2024 (according to PwC, one of the Big 4)
- These sectors also saw three times higher revenue per employee compared to less AI-exposed industries. Workers with AI skills are commanding a 56% wage premium on average, reflecting rising demand for tech-augmented skills. So does the unemployment argument really hold? I mean, could this not just be a similar period to when, let's say, Berners-Lee invented the World Wide Web?
Exploration 3: What are the costs of AI?
First of all, I was extremely shocked to see a screenshot of the bias in training data. The political spectrum of AI systems like GPT was really scary to look at.
My vendetta against AI systems is that they can be very convincing, even when they're wrong. AI models are well known to hallucinate facts, and errors can scale instantly across organisations. With homework it's not that deep, but in high-stakes settings (healthcare, finance, law), a single flawed model can cause widespread harm. The risk isn't just technical failure, it's over-trust. We humans may defer judgement to AI because it appears authoritative, reducing oversight exactly when it's most needed.
I feel like chatbots like GPT don't just change what we do; they definitely change how we think too. Constant AI assistance weakens memory, problem-solving, and critical thinking. Many may rely on AI rather than learning foundational skills, so workers may lose confidence in their own judgement. Short-term efficiency gains may come at the cost of long-term cognitive resilience. It is a trade-off that is rarely measured, partly because it is so hard to measure.
This video highlights the problems of AI on intelligence: https://www.youtube.com/watch?v=snp4O75G0z8
Every email written, photo uploaded, review posted, or document created feeds data pipelines. Yet individuals receive no direct compensation for the data used to train commercial models, and a small number of firms capture the vast majority of the value created. This raises the idea of "digital labour": are people unknowingly working for AI companies? If data is an economic input like labour or capital, should it be taxed, regulated, or compensated?
Power. AI is no longer just an economic tool - it's a geopolitical asset. The US and China together account for over 70% of global AI investment. It's like an arms race, but over who has the better AI system. China aims to become the global AI leader by 2030, embedding AI into defence, surveillance, and industry. The UK accounts for less than 3% of global AI compute capacity, raising concerns about long-term dependence on foreign technology. This matters because countries that control AI infrastructure, data, and chips will shape global standards, supply chains, and even security. The UK risks becoming a consumer rather than a creator of AI, limiting its economic sovereignty and bargaining power.
High levels of structural unemployment are something I'm curious to explore:
AI could shift unemployment from a short-term friction to structural, long-term joblessness. The fear isn't just about temporary layoffs while people reskill. What the 2025 Nobel Prize winners were trying to say is that AI isn't like past automation, which mainly cut repetitive physical work. I mentioned how generative AI threatens non-routine cognitive jobs too, roles once thought safe. In fact, research from the IMD predicts that up to 30% of jobs in advanced economies could be at risk of being replaced by AI in future downturns if adoption accelerates, and that this is a larger exposed labour pool than in past cycles.
Meanwhile, Goldman Sachs Research, one of the most closely watched economics teams on Wall Street, estimates that AI adoption could displace between roughly 3% and 14% of jobs depending on how quickly firms deploy it, with a mid-range scenario of around 6-7% displacement across the US workforce. Even if most of this is temporary, that's a shifted labour market, not just a blip.
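To make those percentages feel less abstract, here's a rough headcount sketch. The 168 million labour-force figure is my own ballpark assumption for the US, not a number from the Goldman report:

```python
# Back-of-the-envelope: what a 3%-14% displacement range means in
# headcount. labour_force is my own rough assumption for the size
# of the US labour force, not a figure from Goldman Sachs Research.
labour_force = 168_000_000

low, mid, high = 0.03, 0.065, 0.14  # displacement scenarios
for name, share in [("low", low), ("mid", mid), ("high", high)]:
    print(f"{name}: ~{share * labour_force / 1e6:.1f} million jobs")
```

Even the low end is millions of workers, which is why economists treat this as a structural shift rather than normal labour-market churn.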
And some of the impacts are already visible, as I've mentioned most weeks on the mailing list. Entry-level job adverts in the UK, the gateway to careers, fell by about one-third between late 2022 and mid-2025, leaving fewer opportunities for young workers to get a foot in the door. In tech and AI-exposed sectors, payroll growth slowed sharply after 2022 and has not picked back up even as the economy recovered in other areas.
What’s worrying about structural unemployment is that it’s not cyclical like unemployment during a typical recession. If people lose jobs because machines permanently replace a task they used to do, you get a mismatch between available roles and available skills. As the IMF warned, this could deepen unemployment during downturns rather than letting labour markets bounce back quickly. That matters because sustained unemployment reduces consumer spending, and consumer spending is roughly two-thirds of GDP in many advanced economies. If workers without jobs stop spending, companies won’t expand, tax revenues fall, welfare costs rise, and you get a vicious cycle that feeds economic traps, not growth.
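The vicious cycle above can be sketched with the textbook Keynesian spending multiplier. The only input taken from the text is the "consumption is roughly two-thirds of income" idea; treating that share as the marginal propensity to consume, and the £10bn shock, are my own simplifying assumptions:

```python
# Toy Keynesian multiplier sketch of the "vicious cycle": each £1 of
# lost spending shrinks GDP by more than £1, because one person's
# spending is another's income. Using ~two-thirds as the marginal
# propensity to consume is a simplification; the shock is hypothetical.
mpc = 0.66                      # marginal propensity to consume (assumed)
multiplier = 1 / (1 - mpc)      # simple closed-economy multiplier
initial_shock = 10.0            # £bn of spending lost, hypothetical

total_hit = initial_shock * multiplier
print(f"Multiplier: {multiplier:.1f}")
print(f"£{initial_shock:.0f}bn shock -> ~£{total_hit:.0f}bn off GDP")
```

This is obviously a toy model (no taxes, imports, or policy response), but it shows why economists worry that job losses feed on themselves rather than staying contained.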
AI bubbles potentially creating the next financial crisis?
The narrative that AI will solve everything (productivity, wages, innovation) has driven record-high valuations in tech stocks. The recent warnings from global institutions are striking.
Many economists point to classic bubble symptoms:
- Rapid, hype-driven price rises
- Tech stocks forming a large share of major equity indices
- Soaring valuations even for companies with minimal profits
- A silent investor consensus that "AI will fix futures" even if short-term returns aren't matched
For example, OpenAI itself, arguably the poster child of the current AI boom, has never reported a profit, yet at one point its valuation was cited as around $500bn, dwarfing its earnings and reminding many of the dot-com highs. The OECD has also flagged that the current optimism around AI investment could be a downside risk to economic growth if markets correct sharply. But here's the subtle nuance economists keep stressing: a bubble doesn't need to implode to hurt the wider economy. If AI investment cools sharply and tech stock prices correct, investor confidence could slip, credit conditions tighten, firms postpone hiring or capex, and growth slows.