The whole app lives in Firebase. When a user hits a profile page, a cloud function runs, fetches the data from the DB, constructs the page, and caches it at the CDN for 24h.
If a user adds or deletes something on their profile, the cache entry is invalidated. Otherwise, the content is served straight from the cache, with no computation needed.
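The cache-then-invalidate pattern described above can be sketched in a few lines. This is a toy in-memory version with hypothetical names (the real setup keys the cache at the CDN, not in the function itself), just to show the two paths: serve from cache until the 24h TTL expires, and drop the entry when the profile changes.

```python
import time

class CachedProfiles:
    """Toy sketch of cache-then-invalidate; the actual system caches at the CDN."""
    TTL = 24 * 3600  # 24h, per the comment above

    def __init__(self, fetch_from_db):
        self._fetch = fetch_from_db  # the expensive path: DB read + page build
        self._cache = {}             # user_id -> (rendered_page, expires_at)

    def get(self, user_id):
        entry = self._cache.get(user_id)
        if entry and entry[1] > time.time():
            return entry[0]  # cache hit: no recomputation
        page = self._fetch(user_id)
        self._cache[user_id] = (page, time.time() + self.TTL)
        return page

    def invalidate(self, user_id):
        # Called when the user adds/deletes something on their profile.
        self._cache.pop(user_id, None)
```

The same shape holds at the CDN: a write to the profile triggers a purge of that one URL, so stale pages never outlive a change.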
Even if rush hour has 10 times the traffic of off hours, that's still only about 4 requests per second. Even an unoptimized Postgres on a 2 CPU / 2 GB node should handle that load just fine.
Of course, hits are not homogeneously distributed. But an average of less than one hit per second leaves more than enough leeway for any reasonable clustering you may come up with. Any small computer can handle thousands of times that load.
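The arithmetic behind those figures, assuming the ~1M hits mentioned later in the thread are monthly:

```python
# Back-of-envelope for the traffic claims above.
HITS_PER_MONTH = 1_000_000        # assumption: the thread's "1M+ hits", per month
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

avg_rps = HITS_PER_MONTH / SECONDS_PER_MONTH  # ~0.39 req/s on average
peak_rps = avg_rps * 10                       # ~3.9 req/s at a 10x rush hour
```

That lands right at the "less than one hit a second" average and the "4 requests per second" peak quoted above.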
There are some questions about bandwidth costs, which can vary wildly.
I wonder what optimizations were done so that you only pay for 1M+ hits?