
The Stepchange show went fairly deep on this topic in their first episode (I listened to it recently): https://www.stepchange.show/coal-part-i

The DB seems like the main shortcoming in their stack. I don't want to deal with the limitations of D1. A serverless Postgres setup a la Neon/Supabase seems like it would be a slam dunk.

They have Durable Objects, which should be enough for most use cases (it’s SQLite with no limitations). Have you tried that?

I've used DOs quite a bit, and I'm a big fan... however, I find the database latency pretty hard to deal with. In the past 6 months I've seen upwards of 30s for little side projects running tiny (hundreds of KB) databases. Sometimes it's lightning fast... sometimes it's a disaster.

As a consequence I've had to build quite defensively, adopting a PWA approach with heavy caching and background sync. My hope is that latency improves over time, because the platform is nice to work with.


Yeah, but then I'm heavily coupled to their proprietary infrastructure. Maybe a good thing for them, but a nonstarter for building a real business on, for me and many others I'd presume.

our open source system. We use this tool to serve a custom routing engine at my day job. It handles 100 req/s of Dijkstra queries in a 2 GB pod, thanks to precomputed contraction hierarchies.
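
For reference, here's a minimal Python sketch of the plain, unaccelerated Dijkstra query (the graph layout and function name are mine, not from the actual engine). Contraction hierarchies precompute shortcut edges offline so each query only explores a small slice of this search, which is what makes that throughput possible on a small pod:

    import heapq

    def dijkstra(graph, source):
        # graph: adjacency dict {node: [(neighbor, edge_weight), ...]}
        dist = {source: 0.0}
        heap = [(0.0, source)]  # (distance-so-far, node)
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry; node already settled via a shorter path
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist  # shortest distance from source to every reachable node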

> And I only mentioned options. How do you store "every stock quote and options trade in the past 4 years" in 263 GB!?

I think this would be pretty straightforward with Parquet, ZSTD compression, and some smart ordering/partitioning strategies.
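
As a rough sketch of what that could look like (the schema, values, and file name here are made up, not from the parent's actual data): sorting by symbol and timestamp before writing clusters similar values together, which lets Parquet's dictionary/run-length encodings and ZSTD do most of the shrinking.

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Illustrative tick data; a real feed would be written in batches.
    table = pa.table({
        "symbol": ["AAPL", "AAPL", "MSFT"],
        "ts":     [1_700_000_000_000, 1_700_000_000_050, 1_700_000_000_100],
        "price":  [189.91, 189.92, 370.10],
        "size":   [100, 250, 50],
    })

    # Clustering similar values (same symbol, monotonic timestamps) makes
    # column runs highly compressible before ZSTD even sees the bytes.
    table = table.sort_by([("symbol", "ascending"), ("ts", "ascending")])

    pq.write_table(
        table,
        "trades.parquet",
        compression="zstd",
        compression_level=9,  # trade write speed for smaller files
    )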


DuckDB and SQL FTW.


Doesn’t matter. The point is that DuckDB can operate well on a wide range of infrastructure and is well suited to resource-constrained environments.
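
For example (the limits below are illustrative, not tuned recommendations), DuckDB exposes settings to cap its footprint, and it can query Parquet files in place:

    import duckdb

    con = duckdb.connect()
    # Cap memory and parallelism so the process fits a small pod/VM;
    # DuckDB spills larger-than-memory operators to temp files on disk.
    con.execute("SET memory_limit = '512MB'")
    con.execute("SET threads = 2")

    # Scans the Parquet file directly, reading only the needed columns.
    rows = con.execute(
        "SELECT symbol, count(*) AS trades FROM 'trades.parquet' GROUP BY symbol"
    ).fetchall()
    print(rows)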


Post to HN apparently


The changelog is remarkable. Thanks to this team for creating such an amazing tool. It's genuinely the technology I've been most excited about in a long time. It makes working with large data ergonomic, even a joy, and it's extremely fast.


IDK, they were sending stacks of Mac Studios around to tinkerer YouTubers messing with EXO clustering, like @geerlingguy.

https://youtu.be/1iT9JeZYXcI?si=UMR0nfHAYbVq2tF1


How do you ascribe a revenue number like that to one collection of changes in a huge system? Presumably there were a bunch of other features being released around the same time. Was there a lot of A/B testing around it?


Amazon uses a complicated process called "attributed OPS", meaning you may not be directly responsible for the revenue, but you contributed to it in some way.


Doesn’t matter, @ChadMoran is already on the fast track whilst you are on a PIP.


Hah, well I've done something right. They've let me stay here almost 15 years.

