2024 Predictions Post-mortem

Let’s start by reviewing last year’s predictions.

  1. LLMs will not be a useful basis for AGI
    • Not much new data on this, but there is increasing interest in other directions, such as world models (World Labs, DeepMind, rumors of Yann LeCun leaving to start his own venture).
    • The slowing capability growth across the most recent crop of LLM releases suggests that getting to AGI may not be a straight-shot scaling affair.
  2. The main family of use cases for LLMs will be as a new interface for interacting with computing.
    • This one I think I got fairly wrong.
    • Power users still benefit from being able to offload tedious tasks. For me, that looks like never having to write another line of matplotlib code again.
    • There are genuine uses for LLMs (research, learning, writing throwaway code, knowledge retrieval) that go beyond just a new interface.
    • Having good “taste” is more important than ever, which actually hurts beginners more.
  3. AI Over-investment
    • The possibility of an AI bubble has only grown more salient, not just among people working in tech but among the public at large.
    • Shady financing deals make the current AI boom look very “bubble-like”, and there’s real risk that the investments won’t generate revenue soon enough to justify their cost.
    • Productivity gains don’t really seem to be materializing (in the data, at least not at a scale commensurate with the investment). However, I think most consumers of AI tools would be willing to pay a hefty premium over what they’re paying now, especially for SotA models.
    • I think at some point it makes sense to move compute from GenAI inference to other things, depending on how much an X% increase in inference compute translates into a Y% improvement in the product. Once that ROI is low enough, why not spend the compute elsewhere (new approaches, new products)? I’m less sure of when that will be, but the ROI from scaling already seems to be shrinking.

Predictions

Here I’ll register some new predictions (in addition to 1 & 3 above, which I still believe in) for tech in general.

  1. In big tech, I think Google is the horse to bet on.
    • They have the hardware advantage (TPUs).
    • They have the broadest portfolio of bets, both in terms of what they are developing in AI but also in terms of the products they can roll out to.
    • As an addendum, I don’t think the companies without a robust cashflow-generating legacy business will do as well (OpenAI, Anthropic, etc.).
  2. The AI boom/bubble won’t follow the script from the dotcom bubble.
    • Unlike internet access, there is almost no barrier for any technically literate person to try ChatGPT or any other GenAI tool, other than availability (so inference capacity, really).
    • The industry may skip the “pop” part of the dotcom saga since growth can happen much faster.
    • At the time the dotcom bubble popped, around 40% of Americans were online. Right now the adoption rate for GenAI tooling is 54.6%.

Thoughts/Questions/Observations

  • What does it mean for the tech industry to become (much) more capex hungry?
  • In the USA, the power grid is the bottleneck to bringing more AI compute online. Is this the “dark fiber” of this current boom (not GPUs, which have relatively short depreciation cycles)?
  • AI is incredibly compute-constrained right now, which forces some combination of these three responses:
    • Be fine lighting a bunch of money on fire for a while in order to grow.
    • Only roll out things you can serve cheaply at scale (simpler models).
    • Be very selective about who can access the model (move upmarket, charge for expensive licenses).

Conclusions

Really the big question I have is how long investors are willing to lose money on GenAI. At some point I think there is a profitable business on the other side, but it’s probably going to take a lot of money and time to get there (plus picking the right horse).