> Within a year of launching ChatGPT, we reached $1B in revenue. By the end of 2024 we were generating $1B per quarter. We are now generating $2B in revenue per month.
They raised $122B.
At $2B/month that's $24B/year, so 122 / 24 ≈ 5 years to get your money back (I simplify, I know revenue ≠ profit)
They are so big that almost no one can afford to acquire them. It is similar to someone trying to acquire MSFT or AAPL.
Well… because it is almost impossible to do it solo.
Code is just one part of the puzzle. Add: pricing, marketing and ads, invoicing, VAT, really good onboarding, measuring churn rate, customer service…
A lot of vibe coders are solopreneurs. You have to be very consistent and disciplined to make a final product that sells.
> […] At one point, his spending on AI reached $100,000 a year. That went toward subscriptions to AI tools from Google, Anthropic and OpenAI, as well as fees to access their models directly through application programming interfaces,[…]
In media there was a 1-9-90 rule: 1% create, 9% comment, 90% consume or are silent/don't care.
Richard Branson realized that a company starts to behave differently once its staff exceeds about 135 people, which coincides with the average number of people you can consider personally known to you.
Context switching is a bitch. You cannot do it for long. The abundance brought by AI will somehow consolidate, as people cannot digest everything created by it.
There are more than 45,000 models available on HF (if I remember it right). Choose wisely :)
One potential solution to this is AI summarization. Imagine coming home, and while preparing dinner your AI assistant recounts what happened in all your favourite tv shows that day. Then while you're doing the laundry, it tells you about all the new games it found and tested for you.
These are just thought starters, but something like this could significantly raise the ceiling on what one person is able to consume in a 24-hour period.
Adults tend to forget that they gained their powers of reasoning by exercising them.
Getting a summary, the way you described it, strips out the effort required to think about it. This is fine for information you are already informed about.
This is related to the illusion of explanatory depth. Most of us “know” how something works, until we have to actually explain it. Like drawing a bicycle, or explaining how a flush toilet works.
People in general are not aware of how their brain works, and how much mental exercise they used to get with the way the world is set up.
I suppose we can set up brain gyms, where people can practice using mental skills so that they don’t atrophy?
RSS’ death is real - 15 years ago, almost every news site had an RSS feed, some had several. Today? RSS feeds are rare.
So if you want to build a news feed from news sites, you have to parse their HTML, and of course every site has its own structure. JS-powered sites are the painful ones.
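A minimal sketch of what that parsing looks like, using only the stdlib `HTMLParser` (every real site needs its own selectors; this just collects `<a>` tags that have an href and visible text):

```python
# Sketch: pulling headline links out of raw HTML when no RSS feed exists.
# Stdlib-only; a real scraper would filter by CSS classes per site.
from html.parser import HTMLParser


class LinkScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []      # (href, text) pairs found so far
        self._href = None    # href of the <a> we are currently inside
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            text = "".join(self._text).strip()
            if text:                      # skip icon-only links
                self.links.append((self._href, text))
            self._href = None


scraper = LinkScraper()
scraper.feed('<div><a href="/news/1">Big story</a><a href="/x"></a></div>')
print(scraper.links)  # [('/news/1', 'Big story')]
```

And this is the easy case; for JS-rendered sites the `<a>` tags aren't even in the initial HTML, so you'd need a headless browser on top.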
> 15 years ago, almost every news site had a RSS feed, some had several ones. Today? RSS feed is rare.
It may be a reflection of where you get your news.
New York Times, Washington Post, Wall Street Journal, Radio Free Europe, Mainichi, and lots of other legitimate primary source Big-J journalism news sites have RSS.
Rando McRepost's AI-Generated Rehash Blog? Not so much.
I don't know; I also only use RSS (with the exception of Reddit, I think), so I would not even notice a website that a) provides content I want to be notified about rather than actively visit, and b) has no feed.
It is somehow less funny today but in the 90's we would say "is there something wrong with your hands?"
A truly funny story: I wrote an RSS aggregator, and one day I discovered some feeds had died without me noticing. I looked at the feed: it was gone. I looked at my aggregator and the headlines were all there?!?!
Since I gather a lot of feeds, I couldn't help but notice that a very large number aren't well-formed. For example, in XML attributes the & (in URLs) is supposed to be &amp;; if you do that, however, many aggregators won't be able to parse it.
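A quick illustration of the well-formedness point: a strict XML parser rejects a raw & in an attribute, and the stdlib will do the escaping for you (example URL is made up):

```python
# A raw '&' in an XML attribute is invalid XML; it must be '&amp;'.
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

raw = '<link href="https://example.com/?a=1&b=2"/>'
try:
    ET.fromstring(raw)               # strict parser rejects the raw form
except ET.ParseError as e:
    print("raw form rejected:", e)

fixed = '<link href="%s"/>' % escape("https://example.com/?a=1&b=2")
print(fixed)                              # ...?a=1&amp;b=2
print(ET.fromstring(fixed).get("href"))   # parses back to the original URL
```

So a feed with correctly escaped URLs is valid XML, yet some sloppy aggregators that never unescape entities choke on it anyway.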
Every other month I wrote little bits of code to address the most annoying issues.
1) If I can't find a <link> or <guid> etc., I eventually just gather the <a>'s and take the href.
2) If I really can't find a title for the item, I fall back on whatever is in the <a>, since I was gathering those anyway.
3) If I can't even find an <item>, I just look for the things that are supposed to go in the <item>.
4) If I can't find a proper timestamp, I'll try to parse one out of the URL.
5) If the URLs are relative paths, complete them.
What was actually going on: the feed was gone, and the URL redirected to the home page. In an attempt to parse the "xml", my code eventually resorted to gathering the URL and title from the <a>'s and building valid timestamps from the URLs.
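Heuristic 4 above (timestamp from the URL) can be sketched like this; the `/YYYY/MM/DD/` path pattern is my assumption, real sites vary wildly:

```python
# Toy sketch of guessing a publish date from a URL path like
# /2024/03/15/some-headline. Pattern is an assumption; real URLs vary.
import re
from datetime import datetime, timezone

DATE_RE = re.compile(r"/(\d{4})/(\d{1,2})/(\d{1,2})/")

def date_from_url(url):
    m = DATE_RE.search(url)
    if not m:
        return None
    y, mo, d = map(int, m.groups())
    try:
        return datetime(y, mo, d, tzinfo=timezone.utc)
    except ValueError:        # e.g. /2024/13/40/ is not a real date
        return None

print(date_from_url("https://example.com/2024/03/15/big-story"))
print(date_from_url("https://example.com/about"))  # None
```

Which is exactly why the zombie feed looked alive: home-page article links carry dated paths, so the fallback chain happily fabricated plausible items.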
Mistral used to serve a feed actually up until 6ish months ago I guess? Their admin console used to be built with HTMX too which I found kinda interesting.
Now the news site and admin console are all in Next.js, slow, and with no feed.