Hacker News | digdugdirk's comments

Interesting. I tend to imagine the exact opposite - it seems as though this is the big players realizing there's not much left to gain from training better models, so they're focusing on scaffolding to get their improvements. If that's the case, why wouldn't a company completely devoted to the scaffolding do a better job at that goal? And more importantly, why would I lock myself into their system?

Because the model labs can RL against their own scaffold, which will be borderline unbeatable

It's inherently limited by its geometry kernel. Most "real" CAD suites use something like parasolid, usually with a bunch of extras slapped on top. Making a new one from scratch is a massive undertaking, but I'll remain forever hopeful that we get a new, modern, open-source kernel one of these days...

This isn't really true. The vast majority of problems are in the UI. The geometry kernel is limited, but it's good enough for an open source project. Compared to say OpenSCAD, Open CASCADE is leagues ahead.

I don't necessarily agree in this case - OCCT is more than capable for what FreeCAD is offering. Add to that, the development trajectory of OCCT also seems to be really taking off recently (with the 8.0-RC, they've reworked how all the B-spline algorithms work, with implications for all operations).

Not gonna lie, I just hope the "rewrite it in Rust" community takes a stab at it at some point.

There are already at least two geometry kernels being written from scratch in Rust (see fornjot.app for one) --- the problem is that the first parts are obvious/easy, so initial progress is rapid, then one hits the difficult/intractable parts and progress stalls, usually to be abandoned.

There are a couple of doctorates available for folks who are willing to research and publish in this space --- the commercial products are all holding their solutions as trade secrets in their code --- and even then, the edge cases are increasingly difficult to solve in such a way as to not break what is already working, hence the commercial kernels having _very_ large teams working on them, or at least that is my understanding from what Michael Gibson (former lead developer of Rhino 3D, current developer of Moment of Inspiration 3D) has written on the topic.


What's wrong with OCCT?

Commenting here in case you or someone else remembers what this is. I'm always on the lookout for practice resources I can recommend to CAD beginners.


Honestly? I've been playing with using LLMs specifically for that reason. I'm far more likely to make prototypes that I specifically intend to throw away during the development process.

I try out ideas that are intended to explore some small aspect of a concept, and just ask the LLM to generate the rest of whatever scaffold is needed to verify the part that I'm interested in. Or I use an LLM to generate the roughest MVP prototype you could imagine, and start using it immediately to calibrate my initial intuition about the problem space. Eventually you get to the point where you've tried out your top 3-5 ideas for each different corner of your codebase, and you can really nail down your spec, and then it's off to the races building your "real" version.

I have a mechanical engineering background, so I'm quite used to the concept of destructive validation testing. As soon as I made that connection while exploring a new idea via claude code, it all started feeling much more natural. Now my coding process is far more similar to my old physical product design process than I'd ever imagined it could be.


Hmm... Definitely seeing my weekly usage tick up during the off peak timeslot, so like most things relating to our current AI ecosystem... be careful, your experience may be different than what is claimed.

"Does bonus usage count against my weekly usage limit?

No. The additional usage you get during off-peak hours doesn’t count toward any weekly usage limits on your plan."


Why wouldn’t your weekly usage “tick up”? And it’s only the additional five-hour usage that doesn’t count against the weekly quota, it’s not that off-peak usage only counts half.


Super cool! Do you do any analysis or have any tools that help you identify these circuits? I came across this [1] recently, and wanted to try to identify specifically strong "circuits" in what seems to be a similar way to what you did.

[1] https://weightwatcher.ai/


I build my own analysis tools. I'm just finishing up running the current generation of LLMs (MiniMax M2.5 and the Qwen3.5 family), and then I will put it all on GitHub.

It's less a 'tool' than an assorted set of scripts, tailored to my unusual hardware setup. But it should be easy to extend; I would have released this earlier, but I had the (stupid) idea to 'write a paper' on this. Aiming for that delayed things a year. Blogs are the way to go (for me).


What is the best consumer friendly long-term storage medium? Are we still better off with high capacity dvd/Blu ray discs?


Recordable blu-ray discs have a reported lifespan of hundreds of years if left untouched, but the high-capacity ones (128GB) are not especially cheap right now and I assume the writing process is slow. The drives themselves may not be easy to come by in future decades. But they are your best bet for "I want my data to outlive my grandchildren."

For the rest of us, a USB spinning rust hard drive formatted as exFAT is going to be hard to beat. You'll be able to plug this into virtually any computer made in the next few decades (modulo a USB adapter or two) and just read it. They are cheap (even allowing for the rising cost of storage), fast, and most importantly, they are easy. The data is stored magnetically, so is not susceptible to degradation just from sitting like SSDs or flash drives are.

Of course, you should not store any important data on only ONE drive. The 3-2-1 backup rule applies to archives as well: 3 copies, 2 different media, 1 off-site.


I recently went through this exercise and settled on HFS+ over exFAT. Reliability seems a bit better with some edge cases, and I don’t expect I’ll be put into a situation where I’m not able to read HFS+ drives.

(Though probably not appropriate if you’re primarily not a mac user, or won’t be in the future.)


This is an… interesting choice for archival purposes. What exactly do you think makes HFS+'s reliability better? The only thing I can think of is that HFS+ has journaling while FAT and derivatives do not, but that doesn't particularly matter after the data is on the disk and it's cleanly unmounted (which should be a safe assumption in most archival scenarios).

The Linux HFS+ driver is basically unmaintained, and cannot write to journaled disks. On Windows, the only choice is a paid driver. I guess it's fine if you're strictly a Mac user, but it's a real problem if you need to access the disk on another machine. Even if you don't, I still wouldn't trust Apple for long-term support of anything.

Meanwhile exFAT has native support on Windows, Mac, and Linux, and there are drivers for BSDs and others.

So 20 years down the line, you'll certainly have something that can read an exFAT drive without much if any pain, regardless of which platform you're using at the time. HFS+? Who knows.

That said, I'd consider ZFS or btrfs for HDD archival. Granted broad (Mac/Windows) support is weaker than FAT, but at least the filesystems are completely open source. But what really makes them interesting is their automatic data checksumming to detect (and possibly repair) bitrot, which is particularly useful for archival.
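Even on a plain filesystem, you can approximate that checksumming benefit by hand with a digest manifest stored alongside the archive. A minimal sketch (the helper names and file layout are mine, not from any tool mentioned here):

```python
# Filesystem-agnostic stand-in for ZFS/btrfs data checksumming:
# record a SHA-256 digest per file at archive time, then re-verify
# later to detect bitrot. This only detects corruption; it cannot
# repair it the way ZFS/btrfs can with redundant copies.
import hashlib
import pathlib


def manifest(root):
    """Map relative path -> SHA-256 hex digest for every file under root."""
    root = pathlib.Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }


def verify(root, saved):
    """Return the relative paths whose contents no longer match the manifest."""
    current = manifest(root)
    return [p for p, digest in saved.items() if current.get(p) != digest]
```

Run `manifest()` right after writing the archive, save the result (e.g. as JSON next to the data), and run `verify()` on each periodic scrub.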


> This is an… interesting choice for archival purposes. What exactly do you think makes HFS+'s reliability better? The only thing I can think of is that HFS+ has journaling while FAT and derivatives do not, but that doesn't particularly matter after the data is on the disk and it's cleanly unmounted (which should be a safe assumption in most archival scenarios).

Yes, journaling. Power cuts or unclean unmounts are enough of a risk for me that I don't see any reason to use a file system without journaling.

> The Linux HFS+ driver is basically unmaintained, and cannot write to journaled disks. On Windows, the only choice a paid driver. I guess it's fine if you're strictly a Mac user, but it's a real problem if you need to access the disk on another machine. Even if you don't, I still wouldn't trust Apple for long-term support of anything.

I just don't expect Linux or Windows support to be relevant to me or my family's use, or the cost of the Windows driver to be a problem if it ever came up.

If in a decade Apple drops HFS+, it's not something they're going to do without notice, it's something where I'll have plenty of notice to take the relatively small required effort to migrate my archives to a different file system.

> That said, I'd consider ZFS or btrfs for HDD archival. Granted broad (Mac/Windows) support is weaker than FAT, but at least the filesystems are completely open source. But what really makes them interesting is their automatic data checksumming to detect (and possibly repair) bitrot, which is particularly useful for archival.

I use btrfs for non-archival storage, but don't really see it as useful for archival storage - it's effectively unusable for my wife if I get hit by a bus.

> So 20 years down the line, you'll certainly have something that can read an exFAT drive without much if any pain, regardless of which platform you're using at the time. HFS+? Who knows.

You're optimizing for a problem that isn't in my risk assessment - i.e. I don't care if I can shelf a drive and easily read from it in 20 years, I just want to maximize reliability over a 20-year timespan where I'm willing to take maintenance action if required. (And I think you're overly negative on Apple's support of old tech. e.g. Apple didn't drop software FireWire support for a decade after they stopped selling their last FireWire device - that's plenty of time for a migration if my archival drives were using a FireWire connection. HFS+ is Apple's currently-supported file system for non-SSD storage, and I don't see a medium-term path where they extend APFS support to HDDs or drop HDD support entirely.)


You're assuming Apple is going to continue even supporting HFS+ long term. They already convert volumes to APFS opportunistically.


APFS is generally not appropriate for HDDs, so yeah, I expect they'll keep supporting HFS+ for as long as they keep supporting non-flash storage.

In any case, if the situation changes, I expect there'll be enough lead time for me to adjust my strategy -- the failure scenario is completely different than rotting physical media.


I decided to go with NTFS for the filesystem as it has journaling. Works fine on Linux, and obviously Windows. For macOS there are various add-ons that support NTFS, but my use case there is read-only.


Probably depends on what “consumer-friendly” entails, how it’s stored, and the quantity of data.

If we’re talking the average tech-illiterate to literate-but-cost-and-space-constrained person, probably Blu-ray. A burner+reader combo with a stack of dual-layer discs is probably cost-effective. High-capacity HDDs would probably be equally effective if you can guarantee that they’re stored away from accidents and mishandling, but if it requires a SATA-to-USB adapter with assembly then it might be out of reach for some consumers, and any risk of damage from movement could rule it out entirely.

If we’re talking tech-savvy consumers who don’t have the IT budget of a corporation, maybe LTO-5 or LTO-6 tapes could work. Tapes themselves are very affordable and have a good shelf lifespan. Used libraries can be had for under $600. The primary issues would be finding one with an interface that works with your existing equipment and software to support tape read and write.


Paper.

Not even kidding.

With any other media, you have to hope that the drives are still available. Paper routinely lasts hundreds of years and we all have readers built right in.


I like simple solutions. But paper has severe storage capacity limitations, which makes it impractical for storing large amounts of data.
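To put a rough number on that limitation, a back-of-envelope sketch (the QR capacity and codes-per-page figures below are my assumptions for illustration, not measurements):

```python
# Back-of-envelope: printed pages needed to store 1 GiB as QR codes.
# Assumptions: a version-40 QR code holds ~2953 bytes of binary data,
# and ~6 such codes fit legibly on an A4 page.
QR_CAPACITY_BYTES = 2953
CODES_PER_PAGE = 6

bytes_per_page = QR_CAPACITY_BYTES * CODES_PER_PAGE  # ~17.3 KB per page

data_bytes = 1 * 1024**3  # 1 GiB
pages = data_bytes / bytes_per_page
print(f"{pages:,.0f} pages for 1 GiB")  # tens of thousands of pages
```

Compression and denser encodings (PaperBak claims up to ~500 KB/page) improve this, but paper still only makes sense for kilobytes-to-megabytes of truly critical data.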


That's one of my AI shower thoughts: an improved version of https://www.ollydbg.de/Paperbak/ . The basic idea: write compressed computer data to barcode (1D), QR code (2D), multi-color barcode ('3'D), and so on.

But I keep hoping someone will finish the 'use lasers to burn 5D storage into glass chips' Project Silica concept and bring it to market so I can have isolinear Star Trek chips.


What's long-term? I have some DVD-Rs that push 20-25 years and, despite the plastic getting brittle, they still work. I also have some IDE drives that still work without problems after 40 years. I would rather aim for 20 years and upgrade the storage device if I still need to retain the data.


That's a thought I hadn't had. The plastic of the disk getting so brittle it shatters in the drive due to age. I wonder what's the embrittlement profile of polycarbonate stored in reasonable condition.


Brittleness is not a concern. "Disc rot" is. The dyes used to make writable DVDs were organic (azo, usually), and break down starting at around the 17-year mark (earlier, if they were poorly made). They have some measure of redundancy built in, so you may not notice right away. The discs begin to look a bit "cloudy" at first. Eventually they become unreadable.

Go with inorganic Blu-ray media if you want longevity. Most HTL Blu-rays made currently will last around 100 years if properly stored. If you need longer there are M-Discs, but they are expensive, and rumor has it that ALL Verbatim 100GB Blu-rays are essentially M-Discs with different labels these days.

For all practical purposes any Blu-ray larger than 25GB is probably inorganic HTL, but if you worry a lot you can buy more expensive "archival grade" discs from Japan as well that have been vetted and tested.


I've personally never noticed brittleness in old optical discs (unlike the polystyrene jewel cases, which often turn brittle). I don't think shattering is likely, but if it's a concern some optical drives allow limiting the maximum spin speed. If the drive supports it you can temporarily set it with the -x option of the "eject" command from util-linux.


I've been a big fan of M-Disc BD-R.



Consumer? Apple or Google Photos or 'drive' functionality of either. The only real risk then is losing your account and Apple Photos has an option to keep them all locally on disk.


To be pedantic, the post you responded to asked about "storage medium", not storage services, which leads to the question of what storage medium they use and how long the services will be around.


There are long term storage type BR discs built for durability (M-Discs from one brand for example).


It does really depend on how much data you want to store, but if you've got a lot of it…

Tape.

Obviously extreme prosumer, but for long-term archival of lots of data, LTO tape wins in several ways:

- Discs just aren't actually that high capacity relative to modern HDD capacities. BD XL maxes out at 128 GB, while there are now 30 TB HDDs readily available. That's 240 discs per HDD. Modern LTO tapes store 12-18 TB, or 2-3 tapes per HDD.

- Anything flash-based is a bad choice for long-term storage. SSDs are very fast, but also (relatively) expensive at 15-20¢/GB. Reputable SD cards are in the same neighborhood. Despite the OP redditor's results here, flash is only expected to retain data for 5-10 years.

- Tape is the absolute lowest cost-per-GB you can find of any storage medium. At the moment, LTO 8/9 tape can be had on Amazon for ½¢/GB. Compare with BD-R at 2¢/GB, or BD-R XL M-disc at 15¢/GB. HDDs (spinning rust) are 2-3¢/GB.

- Consider also write speed. LTO can write 300+ MB/s. BD 16x maxes out around 68 MB/s.

- Manufacturers rate tapes for 30 years sitting on a shelf, and it wouldn't be surprising if they still read after 50 years¹. Plain BD-R lasts 5-20 years. M-disc is the interesting outlier, rated 100-1000 years.

Of course, the biggest problem with tape is the drives. While the media is dirt cheap, the drives are crazy expensive. It looks like you can pick up a used LTO-6 drive (2.5 TB tapes) on ebay for around $500. A brand new LTO-9 drive (18 TB tapes) will be $4000-5000.

In terms of breakeven points, a used LTO-6 drive + tapes beats plain BD after about 25 TB. Because of the cost of M-discs, they stop making sense after 1-2 TB. Purely on cost, a brand new LTO-9 drive + tapes doesn't beat used LTO-6 + tapes until about 800 TB (LTO-9 tape is ½¢/GB while LTO-6 tape is 1¢/GB), but there's definitely a point in there where the larger capacity of LTO-9 makes dealing with the physical media a whole lot easier.
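The breakeven arithmetic above can be sketched as follows (per-GB prices are the rough figures quoted earlier, treated as assumptions; the thread's estimates fold in extra costs like burners and larger drive-price swings, so exact numbers will differ):

```python
# Sketch of the archival breakeven: at what capacity does buying a
# tape drive plus cheap tapes beat a disc-only (or pricier-media)
# approach? Pure media-cost model; ignores burner cost, shipping, etc.
def breakeven_gb(drive_cost_usd, cheap_usd_per_gb, pricey_usd_per_gb):
    """Capacity (GB) where drive + cheap media equals the pricier media cost."""
    return drive_cost_usd / (pricey_usd_per_gb - cheap_usd_per_gb)


# Used LTO-6 drive ($500), LTO-6 tape at 1 cent/GB, vs BD-R at 2 cents/GB:
gb = breakeven_gb(500, 0.01, 0.02)
print(f"breakeven vs BD-R: {gb / 1000:.0f} TB")

# Same drive and tape vs M-disc at 15 cents/GB: breakeven comes much sooner.
gb_mdisc = breakeven_gb(500, 0.01, 0.15)
print(f"breakeven vs M-disc: {gb_mdisc / 1000:.1f} TB")
```

Plugging in your own local prices is the point; the crossover moves a lot with the cost of the used drive you can actually find.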

So if you're looking for long-term storage for your photo album, an M-disc BD XL is probably a good choice. If you only have a few hundred GB of data, a couple discs + burner can be had for $300 or so, and you can be pretty sure your mom could manage to read the disc if necessary.

But if you're looking to back up your 100 TB homelab NAS, discs are not really feasible. You'll have to spend the next month swapping discs every 25 minutes², and then deal with your new thousand disc collection. Here's where a used LTO-6 drive makes a lot of sense. This is a real sweet spot if you can find a decent drive; all-in you'd spend about $1500 to back up your 100 TB.

This is what I do to backup my NAS — found an old LTO-6 drive and got a bunch of tapes. The drive plugs into a SAS port (you might need an HBA PCI card, $50), and that's pretty much it. Linux has the drivers built in; it will show up as /dev/st0 and you can just point tar³ at it.

Finally, just to compare with cloud options, storing that 100 TB in AWS Glacier Deep Archive would run you slightly over $100/mo, so you're ahead with your own tapes after little over a year. Oh and don't forget to set aside an extra $8000 for data transfer fees should you ever actually want to retrieve your data lol.

---

¹ eg the Unix v4 tape that was recently found and successfully read after 52 years — https://news.ycombinator.com/item?id=45840321

² Or get a disc-swapping robot, but those run $4000-5000, at which point… you're better off with a brand new tape drive.

³ Thus using the Tape ARchiver program for its original purpose. Use -M to span tapes, tar will prompt you to swap.


Honestly just wanted to thank you for this write up on the consumer side of tape storage. It doesn't look like my archival needs are at this level just yet, but this is an amazing overview and starting point if/when I get to that point. Thanks again!


Honestly: multiple copies of encrypted cloud storage. (Encryption just for privacy.) You need decentralized backups anyway. Alternatively, two NAS systems with some RAID variation in different locations that back up each other can be more cost-effective for large capacities.


You're talking about backups which you wouldn't normally need to keep for decades and will be powered on regularly anyway. If it's archival, such as family photos for your kids when they grow up, cloud storage can lose them if you die or go to prison or for whatever reason don't keep paying the bill.


If you go to prison, you can lose whatever media you have as well. I wouldn't rely on a single cloud storage provider, but mirror on multiple ones, and mirror on one or more local devices as well, at least for the most important data. I wouldn't use physical media as primary backup copies today: long-term durability and the availability and support of matching peripherals are uncertain, and they don't make proper, redundant backups any easier, nor easier to verify.

For the kids, I'd rather make physical photo albums.


If there's so little data that you can fit it in photo albums, this is all moot because you can literally just store them in printed form on paper, which will easily last a lifetime. Flash is what you might be tempted to use for 10s-100s of GB, including videos.


Systems are complicated. Given there are numerous predicted outcomes (it's not just about the actual measured sea-level rise, after all) and many of those predictions are coming to pass far earlier than hoped, it might be worth having an open mind to the fact that sometimes people who devote their lives to studying something might be worth listening to.


There is a substantial difference between the standard lobbying and greasing the legislative wheels, and what's going on with this current administration.

Even if companies were pretending to play by the rules before, at least they had some need to put in the effort to pretend. When a society can see belligerent ostentatious corruption going on as the norm, nothing good can follow.


> Even if companies were pretending to play by the rules before, at least they had some need to put in the effort to pretend

I'm not sure that's better. I'm hopeful that all this open air corruption leads to real reform. But I'm sure I'll be disappointed.


In at least the previous couple of US elections, "people" paid more than a billion dollars to each wanna-be president.

That is investment, aka corruption.


Very interesting! I hope it gets upstreamed soon, there's a ton of potential for "mental overhead" simplification in the nix ecosystem, this seems like it could be a huge help for that.


There is an open PR against upstream.

