Hacker News | nickdothutton's comments

What a great article. Several years ago I wrote a (far inferior) post along similar lines, using a famous railway/bridge disaster[1]. Studying these kinds of engineering failures, even those from hundreds of years ago, is so revealing. I'd lay money on similar reports from the days of the ancient Egyptians being just as valuable.

[1] https://blog.eutopian.io/the-age-of-invisible-disasters/


Hard to imagine now, but this was a huge turning point: a genuinely powerful CPU in a "Pee-Cee" for less than RISC-workstation money. I had to wait a while; mine was an AMD DX2-66 since I didn't have the budget for Intel. Add Slackware, plus countless hours messing with XF86Config, and I had a poor man's Sun workstation.

These are interesting numbers for engagement, but they don't mean as much without equivalent stats for the other platforms. It's a little like a news story quoting only a percentage (but not the absolute dollar figure), or vice versa.

Agreed.

Assuming they use the same principles everywhere, they're getting more views on Mastodon and Bluesky? That is surprising.


Not really: their target audience is much more likely to hang out on Mastodon and Bluesky. So even if the impressions are fewer, their quality is almost certainly higher.

Also, if you tweet a link to the content instead of tweeting the actual content, you get penalized by the algorithm.

They do this in almost every tweet.


This has changed recently. Links no longer appear to be penalized.

I use a separate user account per project on my local machine (not in sudoers), which I ssh into and which also runs tmux. If I need Claude Code on Windows, I run a VM. The performance and (in)convenience cost of this is minimal for me. I started working this way to limit the "blast radius" when Claude went on a dependency binge within a project.
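For illustration, a minimal sketch of that setup; the account name, session name, and paths here are hypothetical, and the exact flags depend on your distribution:

```shell
# Create an unprivileged account for one project (example name: proj-foo).
# Deliberately NOT added to sudoers, so its "blast radius" is its own $HOME.
sudo useradd --create-home --shell /bin/bash proj-foo

# Day to day: ssh into the project account and attach (or create) a tmux
# session; -t forces a terminal, -A attaches if the session already exists.
ssh -t proj-foo@localhost 'tmux new-session -A -s proj-foo'
```

Running the coding agent inside that session means any dependency installs land under the project user's home directory rather than system-wide.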

"Gradually, then suddenly", as someone once said.

Title is wrong, should be "New form of cancer discovered".

When I got started, the NSFnet backbone was a bunch of IBM RS/6000 systems with comms cards. There were no routers.[1]

[1] https://www.rcsri.org/collection/nsfnet-t3/


There were routers, just no T3 (45 megabit) capable routers.

I've recently been working on an "MS Teams for the terminal" (video conferencing, audio, chat, file sharing, recording, etc.: the usual things you'd expect), for Linux and Mac, with FreeBSD and others to follow. Before the 1M context window I found I had to restrict myself to specific functionality/areas of the codebase. I'd gotten lazy anyway, so this was no bad thing; it reduced the "vibe" quotient of the AI coding.

I wish someone would build a LinkedIn that was actually good, one you could actually do business over. And no, I don't mean spamming people with your BS cold emails, which must have a 1-in-10,000 success rate. I wrote a bit about this almost a decade ago, and there is still nothing.[1]

[1] https://blog.eutopian.io/building-a-better-linkedin/


xv was fast, stable, had a good interface, and remained useful far beyond the normal lifespan of such a piece of software. I used it all the time in the early 90s.


