The AI bubble: Is Nvidia artificially engineering chip demand?
- hamishmonk1
- Nov 11, 2025
- 6 min read

Some of the UK’s biggest financial institutions (FIs) are investing in artificial intelligence (AI). In November this year, Lloyds unveiled the UK’s first multi-feature AI-powered financial assistant; in September, HSBC launched its generative AI-powered platform; and in March, NatWest became the first UK bank to partner with OpenAI.
Appetite among institutions for deploying AI has been just as fierce across the pond, with the likes of JPMorgan, Bank of America, and Goldman Sachs – among others – jumping on the proverbial bandwagon. Their hope is that AI will modernise the customer experience through personalisation, while rooting out inefficiencies in the back office.
However, it is not yet clear how effective AI will prove in the long term, on either count. An undercurrent of concern is swirling among investors as to whether the technology can deliver on its promises of heightened productivity, product innovation, and new revenue streams. The AI promise-delivery gap, in other words, is widening.
The promise-delivery gap
Despite being the breeding ground of some of the world’s most prolific technology firms, the US has seen almost zero economic productivity growth from AI, The Economist has claimed. And, notwithstanding a global $7 trillion race to scale data centres, Gartner has predicted that over 40% of agentic AI projects (an increasingly popular use case) will be abandoned by end-2027.
With valuations soaring and “AI juggernauts like Nvidia and Palantir…driving a tech-bloated S&P 500,” Forbes characterises this under-performance as a bubble. So outweighed are profits by hype, in fact, that The Guardian’s senior economics writer, Philip Inman, questions whether the AI bubble is “about to burst and send the stock market into freefall.” In October, the Bank of England weighed in to similar effect, advising UK investors to prepare for the possibility of a sharp correction in the value of UK stocks.
A web of dependency
In order to understand why the world of AI has become so top heavy, we must unpick how its supply chain operates. At the centre of the web is Nvidia, the American technology firm that designs the graphics processing units (GPUs) needed to power AI models. The most valuable company in the world, Nvidia is worth over $4.5 trillion. Other, smaller chipmakers include Intel and AMD.
Buyers of Nvidia’s chips are either the AI providers themselves, or the data centres – such as those built by Oracle, Nscale, or CoreWeave – which lease out IT infrastructure and computing power. OpenAI, the American AI research and deployment company, for its part, buys chips directly from Nvidia to fuel its large language models (LLMs), like ChatGPT; text-to-video models, like Sora; and image generators, such as DALL·E.
With the demand for Nvidia’s chips far outstripping supply, the price continues to rise – as does the stock value. This relentless climb is causing some analysts to reject assertions that a bubble is developing and argue that the valuations are justified.
However, it must be recognised that Nvidia, among others, has been accused of artificially engineering demand by investing in firms like Mistral, Figure AI, and xAI – which, in turn, redeploy some of that venture capital to buy Nvidia’s chips. Through a recent £2 billion investment into Revolut, Nscale, PolyAI, Synthesia, Latent Labs, and Basecamp Research, Nvidia is extending its AI empire into the UK – and raising concerns over European data sovereignty.
OpenAI adopts a similar strategy, by investing in firms like Ambience Healthcare, Harvey AI, and Anysphere – all of which, in turn, purchase services from OpenAI. Of course, the inflows and outflows are not exactly equal, but the system is undoubtedly circular. Despite being valued at $500 billion, OpenAI’s revenue stands at $12 billion.
The AI industry’s web of dependency was laid bare in September when Nvidia agreed to invest up to $100 billion in OpenAI – a deal predicated on the supply of chips. In the same month, and despite annual revenue of only $12 billion, OpenAI signed a $300 billion cloud service contract, one of the largest on record, with Oracle. In order to build out the necessary infrastructure, Oracle turned to Nvidia for more chips. By indirectly subsidising purchases of its own chips, Nvidia is puffing up demand.
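The mechanics of this loop can be sketched with a toy calculation. The figures below are purely illustrative – not Nvidia’s or OpenAI’s actual numbers – but they show how, when a vendor’s investment in a customer is partly recycled into purchases of the vendor’s own product, each cycle books fresh “demand” that originated as the vendor’s own cash:

```python
def booked_revenue(seed_investment: float, spend_back: float,
                   reinvest: float, rounds: int) -> float:
    """Total revenue the vendor records over `rounds` cycles, when the
    customer spends a `spend_back` fraction of each cash injection on
    the vendor's product, and the vendor reinvests a `reinvest`
    fraction of that revenue back into the customer."""
    total = 0.0
    cash = seed_investment
    for _ in range(rounds):
        purchase = cash * spend_back   # customer buys the vendor's chips
        total += purchase              # vendor books this as demand
        cash = purchase * reinvest     # vendor recycles part of it back
    return total

# Illustrative only: a $100bn seed, 60% spent back on chips,
# half of that revenue reinvested, over five cycles.
print(round(booked_revenue(100.0, 0.6, 0.5, 5), 1))  # → 85.5
```

In this sketch, a single $100 billion injection generates roughly $85 billion of booked chip revenue – none of it new money from outside the loop, which is exactly why critics call such demand “artificial”.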
Through this feedback loop, as tens and hundreds of billions of dollars flow around the tech sector, the stock market rewards any company the money touches with a valuation hike. Indeed, when OpenAI signed the service contract with Oracle, Oracle’s stock price shot up. All the while, no one can say how AI should be plugged into businesses in the most efficient manner – let alone how it will drive productivity across complex banking infrastructures.
It must be remembered that Nvidia’s chips are not the product. The AI services they enable are. So, in a market where the cost of the hardware – not to mention the water and energy bills – massively outstrips the product’s returns, how can the model stay afloat? Analysis suggests that OpenAI loses money every time its product is used. Even its $200-per-month service tier is not preventing the company from burning cash at breakneck speed. Speaking at a dinner in San Francisco in August 2025, the CEO of OpenAI himself, Sam Altman, opined: “Someone is going to lose a phenomenal amount of money… When bubbles happen, smart people get overexcited about a kernel of truth.”
There is, however, one major player that is not caught up in this web of dependency: Microsoft, which buys Nvidia chips but does not (directly, at least) receive Nvidia’s investment. Microsoft accesses that computing power via Nebius, a comparatively small AI infrastructure firm based in Amsterdam, which it pays to power its AI services. Nebius has itself received investment from Nvidia, and recently purchased 100,000 of its chips.
Ultimately, the continued supply of AI chips is contingent on Nvidia’s access to silicon wafers, as produced by the Taiwan Semiconductor Manufacturing Company. This business line will hold as long as the United States can protect Taiwan from China’s influence.
The dotcom bubble
If any of AI’s market mechanics sound similar to the infamous dotcom bubble – and its subsequent crash – of the late 1990s, that’s because they are. Like today’s AI bubble, the dotcom equivalent was generated by a period of speculative mania that sent the stocks of US internet-based technology firms sky-high. While the equity markets ballooned, the speculation leaned solely on the promise of profitability, rather than actual earnings. At the turn of the millennium, the market inevitably imploded: countless dotcom firms went bankrupt, while numerous high-profile tech companies waved goodbye to over 80% of their market value. So severe was this correction that the Nasdaq took 15 years to reclaim its previous high.
Whether AI’s stock prices crash sooner or later, the technology itself is not going away. Though the dotcom bubble severely dented confidence, the technology and business models behind it are thriving. In fact, thanks to increased smartphone and internet penetration, global ecommerce in 2023 was valued at $20 trillion – a figure projected to swell to around $100 trillion by 2032, according to Statista.
Perhaps a similar trajectory will be taken by AI; its stocks imploding but its use cases staying the course. This is the hope of the financial services industry, at least, which continues to hammer out eye-wateringly expensive AI apps.
To boom or to bust: How financial services should respond
Considering the prognosis, responsible AI development – supported by fit-for-purpose regulation – will be critical, with a focus on the transparency and explainability of models’ decision-making processes, under constant human oversight. Precise use cases are also critical, meaning FIs should identify high-quality solutions to specific operational or customer challenges, as opposed to simply diving headfirst into the AI goldrush. In general, investments should start small, with targeted proofs of concept – as opposed to big-bang migrations – which gradually broaden across the organisation.
If the AI promise-delivery gap proves anything, it’s that human intelligence is exceedingly hard to simulate. That’s cause for (momentary) celebration. As FIs await the fate of the bubble on which they sit, they must use this moment to ensure their deployments deliver returns and – most importantly – put customer experience and security front and centre.