Costs went down so much that I can heat my home with AI that reasons better than o1-mini, running on my gaming GPU. Of course, if "Open"AI had their way, we would have to pay top dollar for this tech. The only reason they made the ChatGPT site with free conversations is that this tech is actually easy to make. We could have had LLMs years ago – sometimes it is enough for someone to build the thing to trigger an explosion. Of course the hardware had to mature – then again, it was never an issue to make GPUs with much more VRAM. Training taking longer? No issue – it would just take longer. Less training data? The LLMs would be dumber, but still workable.
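(For anyone wondering what "running on my gaming GPU" looks like in practice, here's a minimal sketch. It assumes a local server like Ollama, which exposes an OpenAI-compatible endpoint on its default port; the model tag is a placeholder – pull whatever distilled reasoning model fits your VRAM.)

```python
# Minimal sketch: chat with a locally hosted reasoning model through
# an OpenAI-compatible endpoint (Ollama's default port, assumed here).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local server, not OpenAI's API
    api_key="ollama",  # local servers ignore the key, but the client requires one
)

response = client.chat.completions.create(
    model="deepseek-r1:14b",  # placeholder tag; use whatever model you've pulled
    messages=[{"role": "user", "content": "Think step by step: is 9.11 > 9.9?"}],
)
print(response.choices[0].message.content)
```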
Great analysis, bad framing – or at least incomplete. There are capex/NRE requirements to make these things useful in the first place, and the price still needs to come down if you want computation/"intelligence" to be internet-like in terms of value and ubiquity. Consider how many people are even using it, and how often they're using it. Imagine you have no primary income and you're going to start building a product or service: what's the one-quarter upfront cost of building a useful AI product? And you have to keep improving, because the technological ceiling hasn't been found yet. A GPU-rich kid in Cambridge or Guangzhou can fuck up billions of dollars of previous investment.
This is good, and what we should want and expect from innovation capital markets.
It's like Bitcoin – the sensible price floor is the electricity cost of the execution time. The only competition is over how little compute you need for a "good enough" answer. In the long run, thinking models, if run at a profit per token, are where the money is; it's a license to churn through huge amounts of tokens as fast as you can generate them. Running a basic Q&A chatbot is going to get cheaper and cheaper, but the amount of planned activity an AI can do within a reasonable cost/time threshold will balloon.
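(To put rough numbers on that electricity floor – a back-of-envelope sketch where the power draw, throughput, and electricity price are all assumed for illustration:)

```python
# Back-of-envelope electricity floor for token generation.
# All three inputs below are illustrative assumptions; plug in your own.
gpu_watts = 350          # assumed sustained draw of one GPU while generating
tokens_per_second = 100  # assumed throughput for a mid-sized model
price_per_kwh = 0.15     # assumed USD per kilowatt-hour

hours_per_million_tokens = (1_000_000 / tokens_per_second) / 3600
kwh_per_million_tokens = (gpu_watts / 1000) * hours_per_million_tokens
floor_usd = kwh_per_million_tokens * price_per_kwh
print(f"~${floor_usd:.3f} of electricity per million tokens")
# ≈ $0.146/M tokens with these numbers; anything charged above that
# is covering hardware amortization, capex recovery, or margin.
```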
The value for me is in the agentic features, the memory, the creativity – having something that has context, can be a useful assistant, remembers things, and packs it all together. OpenAI seems to be pretty good at that, which is why I haven't switched over. I use the chatbots, not the API.
Nevertheless, the power of these models is phenomenal. When we all discovered ChatGPT, we all had our minds blown. Imagine if you'd seen that in 2019 or 2018? It's a huge leap in computing.
They should focus less on pricing or the underlying models and more on the value – i.e. understanding what's valuable to the user in the human world, creating use cases, and becoming an actually useful, personable assistant, rather than churning out slop or bullet-point lists.
We really haven't seen what these models are capable of on that front. If that gets mastered, they could charge thousands of dollars a month. If they can innovate or manage projects, they could create millions in new value.
This is still nerd territory, and this power isn't really being used or applied yet. It's like a rocket engine that's revving up.
At the start of LLMs, everyone assumed there would be a winner (like Google in the search engine wars). But I don't think this is that kind of product. Everyone can make LLMs, and they are easily interchangeable. This isn't a winner-takes-all scenario.
With the amount of GPUs, CPUs, and neural-network hardware they have, they should be switching by NOW to selling "infra as a service" to their competitors, simply because they can't compete – and that is a fact.
OpenAI is not able to compete in the long run; they are just too heavy. It's like your uncle carrying a bag of charcoal to the barbecue and wanting to race Usain Bolt: the amount of hardware they run is HELLA expensive, and there is nothing they can do to circumvent it.
They are betting on dropping a model so significantly better, powered by that behemoth of an infrastructure, that competitors will not be able to beat it without also using heavy infra, and that their edge will then be too large for some good years, during which they'll try to capitalize. I don't distrust it, but it is BOLD.
I feel obligated to mention that this was recorded before gpt-4.5 dropped, which is the most expensive model…ever? Yeah. 🙃
15:03 – o1 pro (literally 10x o1) be like.
Isn't Claude already better than ChatGPT?
OpenAI will be remembered like Yahoo and Myspace: they seemed dominant at the time but lost to better competitors.
It's nuts how under-mentioned Grok is, when a beginner coder can absolutely merc with it compared to other models.
Hey, come on, your charts are showing Claude 3.5??? Where is Claude 3.7?? Its coding capabilities are just insane!!
🕵🏻♂️ This will ultimately turn into an open source vs. subscription model, just like animation software today.
hopefully they all somehow lose ❤
Great video, bud. Learned a lot! What software were you using to draw with?
What do you use for notes? I like the way this looks.
Please get a normal haircut.
I mean, we are already seeing that, with Anthropic DIRECTLY COMPETING WITH THEIR BIGGEST CUSTOMER, CURSOR, via their new release: Claude Code.
Absolutely fascinating. What a time to be alive. Great video, thanks Theo.
Claude 3.7 in extended thinking mode is better than o3-mini-high. Best coding model right now.
I notice AI safety doesn't appear anywhere here, even though it was one of the first things to be sacrificed in the race to the bottom.