Random ramblings on AI

You didn’t ask for it but you get it nevertheless: Some random thoughts on various aspects of Artificial Intelligence. Spoiler: No actionable insights (I think).

Gemini 3.0 vs. Nvidia

Google Gemini 3.0 seems to be a really good model. I am currently using it with my prompts, and it seems a little bit better, but not that much better. NotebookLM, however, seems to have improved a lot.

However, according to various sources, the model was trained and runs exclusively on Google TPU chips. The Nvidia bulls keep saying that Nvidia’s advantage, including its software stack, is so large that those ultra-fat margins will persist for many years because there is no alternative. I am not so sure about this.

[Chart: EBIT margin development of Nvidia since 2002]

The 5 trillion dollar question here clearly is: what is a sustainable profit margin for Nvidia, provided that at some point in time they are one of several competitors in the space?

xAI/Grok/Elon

Another AI story that went around is the obvious “special training” that xAI’s model Grok seems to have undergone, with the result that Elon Musk was portrayed as the most superior human on Earth in every aspect and dimension.

While funny in itself, it clearly shows the potential for manipulation within these models. The Elon/Grok example was easy to spot, but there might be much more subtle ways to do this.

In the age of broken international relations, one might wonder if it is really a good idea to rely on American or Chinese models that will soon run a lot of our economy, or if, from a European standpoint, it would be REALLY REALLY important to get our own models.

To me, this episode is another proof that although Elon Musk has achieved a lot of great things, his name is very counterproductive for any mass consumer product. Which person or company that is not really an Elon fan wants to use Grok or even have an Elon robot in their house? For pumping his stock, he only needs to convince a few people. Producing a mass-market product, however, is much harder.

Accounting Shenanigans & Data Centers as Infrastructure

Much has been written about circular deals, off-balance-sheet funding, depreciation schedules of GPUs etc. One thing is clear: without “financial alchemy”, this amount of Capex is hard to stomach even for big cash machines like Meta, Microsoft and Co.

Another aspect of this is the vast popularity of AI data centers as “infrastructure” investments. Normally, infrastructure is defined as something very durable that has to be used over a long time (e.g. a port, a toll road etc.).

With AI data centers, in my opinion, this durability must be challenged. More than 50% of data center Capex these days is computing hardware. Even if we assume 6 years of “useful life” for that hardware, this is clearly not even close to classic infrastructure.
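
The durability point can be made with simple arithmetic. The sketch below uses purely illustrative numbers (a hypothetical 10 bn project, a 55% hardware share, a 6-year hardware life and a 30-year life for the building and power/cooling shell) — none of these are figures from a real project — to show how the short-lived compute share pulls the blended useful life of the whole asset far below what one expects from infrastructure:

```python
# Back-of-envelope: how "infrastructure-like" is an AI data center?
# All numbers are illustrative assumptions, not reported figures.

total_capex = 10_000_000_000   # assumed $10 bn project
hardware_share = 0.55          # assumed: >50% of capex is compute hardware
hardware_life_years = 6        # assumed useful life of GPUs/servers
shell_life_years = 30          # assumed life of building, power, cooling

hardware_capex = total_capex * hardware_share
shell_capex = total_capex - hardware_capex

# Straight-line depreciation per year for each component
annual_depreciation = (hardware_capex / hardware_life_years
                       + shell_capex / shell_life_years)

# Capex-weighted average useful life of the combined asset
weighted_life = total_capex / annual_depreciation

print(f"Annual depreciation: ${annual_depreciation / 1e9:.2f} bn")
print(f"Blended useful life: {weighted_life:.1f} years")
```

Under these assumptions, the blended useful life lands below 10 years — closer to an equipment leasing business than to a port or a toll road.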

To add to these issues, data centers are often built with “off-grid” power plants that are mostly financed by infrastructure funds, too. If those data centers get into trouble, the same trouble hits those off-grid power plants. When data center Capex is reported, the associated power sources are never included, so the cumulative exposure of the infrastructure investing industry to data centers is even higher.

What I also find interesting is that in the Dwarkesh Podcast with Satya Nadella, Nadella said that you basically need to change everything else (cooling etc.), too, if you want to upgrade to a new generation of GPU chips.

Interestingly, the CEO of I Squared, one of the most respected infrastructure investors, said publicly that he “politely declines” because, among other things, he worries that all those commitments from OpenAI & Co. might not be enforceable.

Start-up valuations

Public markets are clearly very expensive; however, valuations in start-up land are absolutely insane when it comes to AI.

Mira Murati, the “one day CEO” of OpenAI, is rumored to be raising at a 50 bn valuation, after raising 2 bn USD at a 10 bn valuation in her company’s initial seed funding round just 4 months ago.

This is extremely remarkable in two ways: first, they seem to spend their money really fast, and secondly, a 50 bn valuation after only 4 months is a lot. The company’s product seems to be targeted at AI researchers, but I wonder how they want to monetize this user base to an extent that justifies this valuation.

However, in VC land, this doesn’t seem to matter. No valuation is too high if someone from OpenAI is involved.

AI Monetization & Impacts on Society

Today’s valuations, especially of the frontier labs like OpenAI and Anthropic, cannot be justified by assuming that users buy 20 USD/month subscriptions. Although US companies are famously good at monetizing their businesses, a retail subscription model like Netflix gets you to maybe a 400 bn valuation if you are profitable.
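
A back-of-envelope check of that claim, with purely illustrative inputs (the subscriber count, net margin and earnings multiple below are assumptions, loosely Netflix-like, not actual figures for any company), shows why a pure subscription business caps out around that order of magnitude:

```python
# Rough sanity check: what does a pure $20/month subscription business support?
# All inputs are illustrative assumptions.

subscribers = 300_000_000   # assumed paying subscribers (already very generous)
arpu_monthly = 20           # $20/month subscription price
net_margin = 0.20           # assumed mature net margin, Netflix-like
pe_multiple = 30            # assumed earnings multiple

revenue = subscribers * arpu_monthly * 12   # annual revenue
earnings = revenue * net_margin
implied_valuation = earnings * pe_multiple

print(f"Annual revenue:     ${revenue / 1e9:.0f} bn")
print(f"Implied valuation:  ${implied_valuation / 1e9:.0f} bn")
```

Even with 300 million paying subscribers, the implied valuation lands in the low hundreds of billions — nowhere near the numbers currently attached to the frontier labs.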

The current valuation of OpenAI & Co., however, seems to assume that rather sooner than later their “agents” will fully replace many employees in companies.

If a company actually used an OpenAI agent instead of an employee, OpenAI’s pricing power would be significant, because it would most likely not be so easy to change the agent (they will make sure of that).

At the end of the day, the pricing power would extend up to the full salary (including social security etc.) if the agent performed at least as well as a human, or maybe better.
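
The ceiling argument can be sketched numerically; the salary and overhead figures below are invented for illustration only:

```python
# Sketch of the pricing-ceiling argument: an agent that fully replaces one
# employee can, in the limit, be priced near that employee's fully loaded cost.
# Numbers are illustrative assumptions.

gross_salary = 80_000      # assumed annual gross salary
employer_overhead = 0.30   # assumed social security, benefits, office etc.

# The theoretical ceiling: what the employee costs the company in total
fully_loaded_cost = gross_salary * (1 + employer_overhead)

# Vendor discounts initially to drive adoption, then narrows the gap later
intro_discount = 0.50
intro_price = fully_loaded_cost * (1 - intro_discount)

print(f"Pricing ceiling per replaced employee: ${fully_loaded_cost:,.0f}/year")
print(f"Plausible introductory price:          ${intro_price:,.0f}/year")
```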

Of course, to make it attractive for companies, these agents would be cheaper at first, but over time, especially American companies are very good at squeezing the value out of their customers (as we saw with Google/Meta and e-commerce).

If I were a top manager of a big company, I would really ask myself how dependent I want to make myself on these AI companies (and/or Microsoft, Google etc.). You might save some money in the short run but pay dearly in the long run. It’s easier to fire and hire people than AI agents.

This, however, assumes that those agents would be capable of fully replacing humans across many functions and areas. The current hype indicates that this might be the case in only a few months’ time. Sam Altman famously predicted AGI by 2025.

One open question clearly is: what happens to society if suddenly a lot of people become unemployed? The tech bros would answer that those people would be free to do something they like even better and that is good for society, but I do see a risk that this would not go so smoothly, especially as the big US tech companies are very good at not paying their fair share of taxes.

However, if we learned something from Elon Musk, it is that the revolutionary thing always seems to take 10 years or more longer than people previously thought.

AI Insurance exclusions

According to the FT, large global insurers are scrambling to exclude AI-related claims from their corporate policies. This is quite interesting in my opinion, as it clearly puts some restraints on companies using agents instead of humans: a mistake by a human is almost always insured, a mistake by an AI agent maybe not.

There have already been incidents with significant losses for companies, so this is clearly an interesting development to monitor.

The cautiously optimistic scenario

So the most likely scenario is that AI might well be not so disruptive in the short term but rather, as Andrej Karpathy put it, transformative over the longer term.

This is what I would call the cautiously optimistic scenario, which however implies that some of these AI players will run into major issues in the next 18-24 months. Subsequently, some of those proud “infrastructure data center” owners will find out how enforceable those trillion-dollar contracts with OpenAI and Anthropic really are.

Hopefully, society will have some time to digest this impact better than under the “AGI next year” scenario.

One interesting milestone will be whether OpenAI manages to do an IPO. This would really be a mega event. I also believe that Sam Altman has the potential to create a cult around the stock, similar to his frenemy Elon Musk with Tesla.

However, I do believe that the date of the IPO will already be past the peak of the current “AI craze”, once they see that private capital will not be there to continue funding their cash burn.

In any case, it will be quite interesting to see how this plays out over the next 18-24 months.

4 comments

  • Nice analysis.

    One amusing paradox is that the bull case for AI is based on AI working well enough to find billable use cases for the incumbents, but not well enough for “eat that moat” type prompts to function…

  • great stuff!

    afraid you’re jumping too fast to conclusion on the xAI/Elon part though… 😉

    https://grok.com/share/bGVnYWN5LWNvcHk_b914d78a-1d4f-4fdd-90b3-102e9e9dcef8

    “…though the article frames it as potentially manipulated rather than core training bias. If the blogger implied intentional xAI favoritism, that’s more interpretive than the article’s neutral reporting.”

  • princecasually90eff25a99

    Developments in AI monetization are going to be pivotal for markets. People are pouring their lives into these tools, meaning there has never been more data available for personalized ads. But going that route risks scaring people away. The trust could be fickle. ‘Artificial employees’ are a long way away. Investors seem giddy for monetization, yet disappointed that the hyperscaler investments aren’t being turned into revenue.

    The push and pull of that pressure is going to be interesting to follow over the next few months.

  • Great analysis. I appreciate how you highlight not only the hype and market optimism, but also the practical risks, especially dependency, infrastructure fragility, insurance exclusions, and the real societal consequences if AI displacement happens faster than expected. A balanced and refreshingly grounded perspective.
