
I'm not bored of the technology per se, but of the people around it. The yappers, doomers, and shills are insufferable.

I dream of a SQL-like engine for distributed systems where you can declaratively say "svc A uses the results of B & C, where C depends on D."

Then the engine would find the best way to resolve the graph and fetch the results. You could still add your imperative logic on top of the fetched results, but you don't concern yourself with the minutiae of resilience patterns and how to traverse the dependency graph.
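A rough sketch of what such an engine might look like, with heavy assumptions: the service names match the example above, and `fetch` is a stand-in for a real network call.

```python
import asyncio

# Declarative spec: "svc A uses the results of B & C, where C depends on D."
DEPS = {"A": ["B", "C"], "B": [], "C": ["D"], "D": []}

async def fetch(svc: str, inputs: dict) -> str:
    # Stand-in for a real call to service `svc` with its resolved inputs.
    await asyncio.sleep(0)
    return f"{svc}({','.join(sorted(inputs))})"

async def resolve(target: str, deps=DEPS, cache=None) -> str:
    """Walk the dependency graph, fetching independent services concurrently."""
    cache = {} if cache is None else cache
    if target in cache:
        return cache[target]
    # Resolve all dependencies of `target` concurrently, sharing one cache.
    results = await asyncio.gather(
        *(resolve(d, deps, cache) for d in deps[target])
    )
    cache[target] = await fetch(target, dict(zip(deps[target], results)))
    return cache[target]

print(asyncio.run(resolve("A")))  # A(B,C)
```

The imperative logic the comment mentions would then operate on the resolved results, with resilience patterns (retries, timeouts, hedging) living inside `fetch` rather than in application code.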


Isn't this a common architecture in CQRS systems?

Commands go to specific microservices with local state persisted in a small DB; queries go to a global aggregation system.


You could build something like this using Mangle datalog. The Go implementation supports extension predicates that you can use for "federated querying", with filter pushdown. Or you could model your dependency graph and then query the paths and do something custom.

You could also build a fancier federated querying system that combines the two, taking a Mangle query and analyzing and rewriting it. For that you're on your own though - I prefer developers hand-crafting something that fits their needs to a big convoluted framework that tries to be all things to all people.


I think SQL alone is great if you didn't drink the microservice kool-aid. You can model dependencies between pieces of data, and the engine will enforce them (and the resulting correct code will probably be faster than what you could do otherwise).

Then you can run A,B,C and D from a consistent snapshot of data and get correct results.
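As a concrete illustration of letting the engine enforce dependencies between pieces of data (SQLite here purely for demonstration; the table names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in to FK checks
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id))""")

con.execute("INSERT INTO customers VALUES (1)")
con.execute("INSERT INTO orders VALUES (10, 1)")  # fine: customer 1 exists

try:
    con.execute("INSERT INTO orders VALUES (11, 99)")  # no customer 99
except sqlite3.IntegrityError as e:
    # The engine, not application code, enforces the dependency.
    print("rejected:", e)
```

The same declared constraints are what make a consistent snapshot meaningful: within one transaction, every row you read respects them.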

The only thing microservices let you do is scale stateless compute, which is (architecturally) trivial to scale without microservices.

I do not believe there has been any serious server app that has had a better solution to data consistency than SQL.

All these 'webscale' solutions I've seen basically throw out all the consistency guarantees of SQL for speed. But once you need to make sure that different pieces of data are actually consistent, you're basically forced to reimplement transactions, joins, locks, etc.


Datomic?

I follow this religiously. The process of posting is manual, but it works fairly well if your intentions are good and you're not blog-spamming different forums.

But I intentionally haven't added a comment section to my blog [1]. Mostly because I don't get paid to write there and addressing the comments - even the good ones - requires a ton of energy.

Also, scaling the comment section is a pain. I had Disqus integrated into my Hugo site, but it became a mess when people started having actual discussions and the section got longer and longer.

If the write-ups are any useful, they generally appear here or on Reddit, and I often link back to those discussions in the articles. That's good enough for me.

[1]: https://rednafi.com


> If the write ups are any useful, it generally appears here or reddit and I often link back those discussions in the articles

Totally agree, I do the same as well on my site; e.g.: https://anil.recoil.org/notes/tessera-zarr-v3-layout

There are quite a few useful linkbacks:

- The social URLs (Bluesky, Mastodon, Twitter, LinkedIn, HN, Lobsters, etc.) are just in my YAML frontmatter as a key

- Then there's standard.site which is an ATProto registration that gets an article into that ecosystem https://standard-search.octet-stream.net

- And for longer articles I get a DOI from https://rogue-scholar.org (the above URL is also https://doi.org/10.59350/tk0er-ycs46) which gets it a bit more metadata.

On my TODO list is aggregating all the above into one static comment thread that I can render. Not sure it's worth the trouble beyond linking to each network as I'm currently doing, since there's rarely any cross-network conversations anyway.
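A minimal sketch of that frontmatter step in Python. The `socials:` key name and the file layout are assumptions for illustration, not the author's actual schema:

```python
FRONTMATTER = """\
---
title: An example post
socials:
  hn: https://news.ycombinator.com/item?id=123
  mastodon: https://mastodon.social/@someone/456
---
Body text follows here.
"""

def social_urls(text: str) -> dict:
    """Extract `socials:` entries from a simple YAML-style frontmatter block."""
    header = text.split("---")[1]  # content between the first two fences
    urls, in_socials = {}, False
    for line in header.splitlines():
        if line.startswith("socials:"):
            in_socials = True
        elif in_socials and line.startswith("  "):
            key, _, url = line.strip().partition(": ")
            urls[key] = url
        elif line.strip():
            in_socials = False
    return urls

print(social_urls(FRONTMATTER))
```

Rendering one static comment thread would then mean fetching replies from each of those URLs at build time and merging them by timestamp.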


Damn. I got a bunch of ideas around ATProto from this comment. Also found your blog. I wish digging out human-written blogs weren't such a chore. I like the idea of blogs, but their discoverability sucks big time.

I like Kagi's small web initiative to help people find personal sites: https://blog.kagi.com/small-web-updates

I just use HN as my comment platform. I have a Hugo shortcode that (very respectfully!) grabs the comments on a full rebuild, but only if those comments are not already cached and the post is less than 7 days old. The formatting looks quite good on my site. Feel free to check it out at the bottom of this post: https://mketab.org/blog/sqlite_kdbx
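The caching policy described there might look roughly like this (the real setup is a Hugo shortcode; this function and its parameters are illustrative):

```python
import time

SEVEN_DAYS = 7 * 24 * 3600

def should_fetch(post_published: float, cached: bool, now=None) -> bool:
    """Fetch HN comments only for uncached posts newer than 7 days."""
    now = time.time() if now is None else now
    return not cached and (now - post_published) < SEVEN_DAYS
```

Gating on the cache keeps rebuilds fast, and the 7-day cutoff matches HN's behavior of threads going quiet after the front-page window passes.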

Nice blog, thanks for this one: https://rednafi.com/go/splintered-failure-modes/. Well written - I only needed to read it once to remember it.

> If the write ups are any useful, it generally appears here or reddit and I often link back those discussions in the articles. That's good enough for me.

If you have a Mastodon account, you can embed all responses to your Mastodon post into your site. See https://blog.nawaz.org/posts/2025/Jan/adding-fediverse-comme...


We have vibe leadership mills where AGI-pilled leaders are driving their companies into a death spiral.

There is literally no reason to write it in a JVM language in 2026 when better options exist: Go for simplicity and maintainability, or Rust to get the most out of the machine.

Also, it'll be hard for them to lure good people to work on that thing. Absolutely no one gets excited to write, vibe, or maintain Java.


I am not thrilled to use Java, but it really does what it says on the tin. A customer copied the JAR file I sent them to their AS/400 and it just worked. There is nothing quite like it.

Go binary says hello. No VM overhead. Everything is statically linked.

Hi, Go binary. Unfortunately you don't exist, because there is no cross-compiler for that platform. Also, please don't crash if you ever do get cross-compiled, since the target system doesn't understand your UTF-8 strings.

Mandatory read by Peter Norvig - even more relevant now

https://norvig.com/21-days.html


I come from a developing country where the only OS people know is Windows. Macs used to be too expensive, and Linux didn’t have any of the applications people would use (read: pirate) for work.

Typically, college students and teachers would get $500 dingy laptops from Asus, Acer, and Dell. A decade ago, those machines were fine. My mom used one for 7 years, right until they retired Windows 7.

Then the machines started becoming absolutely useless with Windows 8, 10, and now 11. 8GB machines are barely usable now, with constant Windows updates and all the background telemetry services maxing out the disk all the time.

Sure, people can turn off some of these rogue processes. But my point is - an OS should just disappear from the user’s view and let them work.

I don’t live in my home country and haven’t visited in a long time, but I’ve heard that people are really opting for second-hand MacBook Airs. Now with the MacBook Neo, more people will go that route.

Students are opting for cheap Windows machines and flashing them with Ubuntu to make them usable.


The world could use one less "how I slop" article at this point.

This reminds me of the early Medium days when everyone would write articles on how to make HTTP endpoints or how to use Pandas.

There’s not much skill involved in herding agents, and you can still do it without losing your expertise in the stuff you actually like to work with.

For me, I work with these tools all the time, and reading these articles hasn’t added anything to my repertoire so far. It gives me the feeling of "bikeshedding about tools instead of actually building something useful with them."

We are collectively addicted to making software that no one wants to use. Even I don’t consistently use half the junk I built with these tools.

Another thing is that everyone yapping about how great AI is only ever shows the tools’ capabilities on greenfield stuff. In reality, we have to do a lot more brownfield work that’s super boring, and AI isn’t as effective there.


I have always enjoyed the feeling of aporia during coding. Learning to embrace the confusion and the eventual frustration is part of the job. So I don’t mind running in a loop alongside an agent.

But I absolutely loathe reviewing these generated PRs - more so when I know the submitter themselves has barely looked at the code. Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day. Reviewing this junk has become exhausting.

I don’t want to read your code if you haven’t bothered to read it yourself. Reviewing this junk is far more exhausting than writing it; coding is actually the fun part.


> Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day.

That's a big red flag if I ever saw one. Corporate should be empowering the engineering team to use AI tooling to improve their own process organically. Is this true or an exaggeration? If it's true, I'd start looking for a more balanced position at a more disciplined org.


True at DoorDash, Amazon, and Salesforce - speaking from experience.

Mandates are becoming normal. Most devs don’t seem to want this, but they want to keep their jobs.

Definitely a sign that workers aren't being exploited.

10k LoC per day? Wow, my condolences to you.

On a different note: something I just discovered is that if you Google "my condolences", the AI summary will thank you for the kindness before defining its meaning. Fun.


>Reviewing this junk has become exhausting.

Nitpick it to death. Ask the submitter questions about how everything works. Even if it looks good, flip a coin and reject it anyway. Drag that review time out. You don't want unlucky PRs going through, after all.

Corporate is not going to wake up and do the sensible thing on its own.


Ha ha I wish. Then both corporate and your coworkers hate you.

Also, there is no point in asking questions when you know that they just yoloed it and won't be able to answer anything.

We have collectively lost our common sense and reasonable people are doing unreasonable things because there's an immense amount of pressure from the top.


It's their share price. Vibe code gets vibe reviews. #shipit

I always wonder where HNers worked or work. We do ERP and troubleshooting on legacy systems for medium to large corps. PRs by humans were always pretty random and barely looked at as well, even though a human wrote them (copy/pasted from SO and changed somewhat); if you ask what the code does, they cannot tell you. This is not an exception; this is the norm as far as I can see outside HN. People who talk a lot, don't understand anything, and write code that is almost alien.

LLMs, for us, are a huge step up. There is a 40-level nested if with a loop to prevent it from failing on a missing case in a critical Shell (the company) ERP system. LLMs would not do that. It is a nightmare, but keeping things like that running makes us a lot of money.

I currently work at one of the biggest tech companies. I’ve been doing this for over 20 years, and I’ve worked at scrappy startups, unicorns, and medium-sized companies.

I’ve certainly seen my share of what I call slot-machine development, where a developer just throws things at the wall until something mostly works. And plenty of cut-and-paste development.

But it’s far from the majority. It’s usually the same few developers at a company doing it, while the people who know what they’re doing furiously work to keep things from falling apart.

If the majority of devs were doing this nothing would work. My worry is that AI lets the bad devs produce this kind of work on a massive scale that overwhelms the good devs ability to fight back or to even comprehend the system.


I also work at a huge company, and this observation is true. The way AI is being rammed down our throats is burning out the best engineers. OTOH, the mediocre simian army “empowered” by AI is pushing slop like there’s no tomorrow. The expectation from leadership, who tried Claude for a single evening, is that you should be able to deliver everything yesterday.

The resilience of the system has taken a massive hit, and we were told that it doesn’t matter. Managers, designers, and product folks are being asked to make PRs. When things cause Sev0 or Sev1 incidents, engineers are being held responsible. It’s a huge clown show.


> The expectation from leadership, who tried Claude for a single evening, is that you should be able to deliver everything yesterday.

"Look, if the AI fairy worked like that our company would be me and the investors."

I should make t-shirts. They'll be worth a fortune in ironic street cred once the AI fairy works like that.


Tech companies, sure. How about massive non-software companies? I don't know where it is not the norm, and I have been inside very many of them as a supplier for the past 30 years. Tech companies are a bit different, as they usually have leadership that prioritizes these things.

Non-tech companies too. You can’t build large-scale software with everyone merging PRs like that. My guess is that if you’re a supplier, you are getting a pretty severe sampling bias.

I would hope that most people who are technically competent enough to be on HN are competent enough to quit orgs with coding standards that bad. Or they're masochists who have taken on the challenge of working to fix them.

Half the posts here are talking about how they 100xd their output with the latest agentic loop harness, so I'm not sure why you would get that impression.

Neither of those. The pay is great and if all leadership cares about is making the whole company "AI Native" and pushing bullshit diffs, I'll play ball.

Claude has a built-in /simplify command

I think you just need to add a /complexify one with the same pattern: ask the AI to make everything as complex and long-winded as possible, LOC over clarity.


I do “TDD” LLM coding and only review the tests. That way if the tests pass I ship it. It hasn’t bitten me in the ass yet.

The one thing I don't quite get is how running a loop alongside an agent is any different from reviewing those PRs.

If you run a loop alongside the agent and make PRs that are tractable, then there isn’t much difference. But to me, it seems like we have collectively lost our minds and think it’s okay to make a 10k LOC PR and ask someone else to review it.

In my experience LLMs also suffer massively from "not invented here" syndrome. I've seen them copy whole interfaces just to implement a feature that was already implemented in a dependency.

All with verbose comments that are just a basic translation of the code next to it.


10k, really? Are you supposed to understand all that code? This is crazy and a one-way street to burnout.

Yep and now we are encouraged to use AI to review the code as well. But if shit hits the fan then you are held responsible.

Use AI to review.

Shhh...you're only supposed to unilaterally praise it to get along with your clueless leadership.

