Machine Earning Models
EU AI Act Risk Master Class: The risk no one wanted to mitigate... Until now!
Background and context
This post is influenced by recent global events and a couple of posts about AI ROI.
The events are the assassination attempt on Trump, the announcement of J. D. Vance as Trump’s VP pick, and the massive worldwide Windows outage.
Jeffrey Funk comments on a post by David Cahn, observing that companies are spending a lot of money on AI while nobody seems to be making money off it, and that the more companies spend on AI, the bigger this gap becomes. Cahn raises a $600B question and identifies a big gap between the revenue expectations implied by the AI infrastructure build-out and actual revenue growth in the AI ecosystem.
This post provides explanations for their observations in the context of these events. In doing so it examines a risk everyone was aware of, but no powerful American politician wanted to mitigate. Until now!
A brief history of EU AI Act risk impact categories
The main reason no one is talking about this risk is that we are not comfortable using the C word. This calls for a brief history lesson.
The first version of the EU AI Act classified an AI system as high-risk if it impacted one of five risk categories, aka SHREC: Safety, Health, Fundamental Rights, Environment and Critical Infrastructure. But somewhere along the way SHREC got truncated to SHR in the current version of the EU AI Act. Why?
There is a simple explanation for removing the E. You can’t talk about the environment without talking about global warming, and the term environment has become synonymous with carbon dioxide emission levels. Leading tech companies are among the largest energy consumers on the planet. They manage the associated reputational risk by using measures specially designed to mitigate it. One of the best measures is ESG, and savvy consultants with LLMs can boost your ESG ratings regardless of the size of, or increase in, your energy consumption. But people catch on, companies abhor the attention drawn to this topic, and lobbyists managed to remove the offending character from the current version of the EU AI Act.
That brings us to the C word: Critical infrastructure! Why do you think this was removed? Well… if you admitted that some AI infrastructure was critical, or that there was a risk of AI infrastructure becoming critical, wouldn’t the public loathe an oligopoly brazenly exploiting the situation?
Are we heading in this direction at full speed? Or is this a low-probability risk because the AI market is stagnant or growing slowly?
Risk assessment and risk impact
The global IT sector is embracing LLMs and AI technology at ever-increasing speed and scale, unleashing a wave of useful downstream applications. When application owners say they are keeping their options open, they mean they are keeping their options open to a select group of elite AI and infrastructure providers. Companies like Nvidia are at an advantage when it comes to investing in AI companies, because they can invest with critical assets in the form of GPUs instead of old-fashioned instruments like dollars and bitcoin.
Let’s not pretend that this is being forced upon us. LLMs make all of us much more productive when we use them well, and most of us in the Western world can easily afford them. (Full disclosure: I have two LLM subscriptions in addition to the subscription I use at work.) Anyone complaining should remember that this is still primitive technology compared to a future where it will get much, much better. But what will happen after we are locked in? Will prices skyrocket? If they did, future earnings would certainly account for the missing billions.
Cast your minds back to what happened early in the Russia-Ukraine war, when energy prices spiked in the EU. Everyone got shafted, except for a few energy providers who earned exorbitant fees. They had the chutzpah to claim that they didn’t want to do this and that it was forced upon them by the “system”, forgetting that we knew their lobbyists, together with our beloved politicians and bureaucrats, had worked hard to create that system.
There is a rather simple explanation for why “several companies are spending a lot of money on AI, but nobody seems to be making money off it”, and why “the more companies spend on AI, the bigger this gap becomes.” This money is flowing to oligopoly candidates, and investors expect it to turn into a tidal wave when the oligopoly becomes entrenched.
Which brings us back to the C word, albeit in a slightly different context this time: Critical Commodities! During the recent Windows outage, we got a taste of how dependent we have become on certain critical IT commodities, and how vulnerable that dependence leaves us. Now LLMs are fast becoming a critical AI commodity. What happens when we get hooked? That’s when most people will learn about the machine earning model risk.
There is a risk that machine learning models lead to a machine earning model, in which critical AI infrastructure and critical AI commodities are controlled by an oligopoly where immense AI-driven wealth is concentrated, and by which exorbitant AI rents are extracted once consumers are locked in.
Risk response and risk mitigation
Silicon Valley is left-leaning and especially popular with elite Democrats, but Republicans care about it too, and it has been protected by both parties. Elite tech companies are untouchable because they drive the stock market, and no one wants to kill geese that lay large golden eggs. It’s fair to assume that Trump will promote stock market growth if he becomes president, and Nvidia’s stock price will be one of the beneficiaries. Under Harris, who was virtually born in the Valley, these companies would only benefit more.
Enter Vance!
Oren Cass’s recent interview on UnHerd sheds light on Vance’s position, which differs significantly from Trump’s. Cass and Vance share Cahn’s view that these companies are overpriced. Unlike Cahn, they also say these companies stifle competition. They describe Google and Meta as media companies that have monopolized advertising markets - not as innovative, and not even as tech companies. They have these companies in their sights and openly state that the companies should fear them. They intend to introduce antitrust measures and competition policies to mitigate this risk. The gap Cahn observes will only get bigger if Vance ever becomes president.
Risk monitoring and risk reporting
Google and Facebook monitor their markets and react to posts like Cahn’s. They justify their AI investment by describing the current situation as an arms race. In keeping with today’s journalistic standards, no one asks them what they’re racing for. Is it premium products or monopolized markets?
In the latest developments, they make their preference clear, as reporting uncovers that search results for the Trump assassination attempt are being omitted.
Acknowledgements
Credit for the title goes to Grumpy. I will write more about machine earning models and probably collaborate with him in future. You will know when that happens, because those posts won’t be funny!