My rule of thumb at the moment is that for anything up to 10GB of data SQLite basically Just Works. For 10-100GB it will work if you design your indexes carefully. Above 100GB it gets harder - my hunch is that there are various tricks and optimizations I've not yet discovered that would help make that work OK, but switching to PostgreSQL is probably easier at that kind of scale.
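To make "design your indexes carefully" concrete, here's a small sketch (table and column names invented for illustration): a covering index on the columns a query filters and groups by lets SQLite answer it from the index alone, which `EXPLAIN QUERY PLAN` will confirm.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, kind TEXT, ts TEXT)"
)
# A covering index: the query below can be satisfied from the index alone,
# without ever touching the table's rows.
conn.execute("CREATE INDEX idx_events_user_kind ON events (user_id, kind)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT kind, COUNT(*) FROM events WHERE user_id = ? GROUP BY kind",
    (42,),
).fetchall()
# The plan's detail column should mention the COVERING INDEX.
print(plan)
```

Checking the query plan like this, before the table grows into the tens of gigabytes, is usually cheaper than discovering a table scan in production.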
I've got SQLite databases in production that are well beyond 100 gigabytes. These size limits are completely meaningless without any background on actual usage patterns & business cases.
There is no arbitrary point in database size (prior to the exact stated maximums [1]) at which SQLite just magically starts sucking ass for no good reason.
In the (very common) case of a single node, single tenant database server, you will never be able to extract more throughput from that box with a hosted solution over a well-tuned SQLite solution running inside the application binary. It is simply impossible to overcome the latency & other overhead imposed by all hosted SQL solutions. SQLite operations are effectively a direct method invocation. Microseconds, if that. Anything touching the network stack will start you off with 10-1000x more latency.
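A rough way to see the "direct method invocation" point for yourself - a minimal micro-benchmark of in-process primary-key lookups (absolute numbers will vary by machine, but they land in the single-digit microseconds, before any network round-trip even begins):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
conn.executemany(
    "INSERT INTO kv VALUES (?, ?)",
    ((i, f"value-{i}") for i in range(100_000)),
)

# Time primary-key lookups executed in-process: no sockets, no serialization,
# no network stack -- just a library call into the same address space.
N = 10_000
start = time.perf_counter()
for i in range(N):
    conn.execute("SELECT v FROM kv WHERE k = ?", (i % 100_000,)).fetchone()
elapsed = time.perf_counter() - start
print(f"{elapsed / N * 1e6:.1f} microseconds per lookup")
```

Compare that against a single LAN round-trip (typically hundreds of microseconds) or a cross-datacenter hop (milliseconds) and the 10-1000x figure follows.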
Unless you can show you will ever need more than a single server can offer, SQLite is clearly the best engineering choice.
My rules of thumb are based entirely on experiments I've done with Datasette, which tends towards ad-hoc querying, often without the best indexes and with a LOT of group-by/count queries to implement faceting.
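Faceting of this kind boils down to one group-by/count query per faceted column, sketched here with an invented schema; without an index on the faceted column each of these is a full table scan, which is exactly the workload that gets painful as tables grow:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE plants (id INTEGER PRIMARY KEY, species TEXT, state TEXT)"
)
conn.executemany(
    "INSERT INTO plants (species, state) VALUES (?, ?)",
    [("oak", "CA"), ("oak", "OR"), ("pine", "CA"), ("fir", "WA"), ("pine", "CA")],
)

# One group-by/count per faceted column: count the rows in each bucket,
# most populous bucket first (ties broken by name for a stable order).
facets = conn.execute(
    "SELECT state, COUNT(*) AS n FROM plants GROUP BY state ORDER BY n DESC, state"
).fetchall()
print(facets)  # [('CA', 3), ('OR', 1), ('WA', 1)]
```

Run ad-hoc against arbitrary columns, queries like this rarely have a matching index, which is why rules of thumb derived from that workload skew pessimistic.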
You've made me realize that those rules of thumb (which are pretty unscientific already) likely don't apply at all to projects outside of Datasette, so I should probably keep them to myself!