It is well understood that many 32-bit systems will stop working in the year 2038 when the 32-bit time_t overflows.
On the other hand, MS-DOS, which internally uses a 1980 epoch with a 7-bit year counter (and, unlike the Unixes, no seconds-based epoch), will be fine up to 2107, and Windows NT's timekeeping has a range of approximately 30,000 years. Despite the elegance of the 1970 seconds-epoch, it's interesting to see how other systems designed only slightly later chose to do something else, and as a result became quite a bit more future-proof.
The time_t limit in Linux is inherited from UNIX, which baked it in circa 1970, back when it was developed on PDP-8s (manufacturing lifetime: about 50,000 machines total) so Ken Thompson could write a space wars game on an unused machine with no OS. And then migrated to PDP-11s circa 1972 when someone at Bell Labs wanted a typesetting system.
Back then, digital computers were only 25-28 years old. There was no such thing as legacy hardware back then, in our terms. Designing for a 68 year lifespan -- nearly three times the entire duration of the industry to that date -- probably seemed excessive!
Nit: PDP-7, not PDP-8. Cost 4 times as much and sold only 120 units, according to Wikipedia. Looks like 12-bit PDPs were a lot more affordable than 18-bit ones.
MS-DOS was released in 1981, UNIX considerably before then. Windows NT was not released until 1993. A decade has always been a long time in software, and it used to be longer.
GP's point still stands. MS-DOS fits that range without 64 bits; a separately correct design decision would have been to use 64 bits of time from the start. Poor software casts/stores time_t in an int because that was good enough for decades. If it hadn't fit in an int from the start, they wouldn't be doing that (or, if an int with different semantics had been enough, it would also have been OK for them to keep doing that).
> GP's point still stands. MS-DOS fits that range without 64-bits
32-bit Unix timestamps would have a broader range than MS-DOS timestamps if they made the same sacrifice in resolution and only had 2-second precision rather than single-second precision [1]. And if Unix timestamps additionally used 32-bit unsigned numbers to sacrifice the ability to represent timestamps before the beginning of the epoch, then the year 2038 problem would instead be more of a year 2242 problem.
I work with a fully POSIX-compliant OS that uses a 32-bit unsigned int time_t. Unfortunately, most software these days (including the runtimes of major production languages) assumes incorrectly that time_t is a signed int, because that's what it is in the GNU C library used on GNU/Linux, and ends up using it as a general arithmetic type. The end result is undefined behaviour. But hey, it just happens to work on Linux so it must be correct. Grrr.
Why would it result in undefined behavior? Overflow behavior is only defined on unsigned types.
Also, why should people care about your special snowflake OS? The C standard doesn't even specify that time_t has to be an integral type, but anyone who expects code to cope with a floating point time_t is deluded. Why is your case any different?
My special snowflake OS is embedded in millions of devices all over the world in safety-critical applications. It's not broken, it's the third-party software that incorrectly assumes time_t is a general integer arithmetic type that's broken.
The ISO C11 standard only says that time_t is a real type capable of representing times. The minute you start performing arithmetic on that type and checking whether the result is less than zero, you're in trouble: with a signed time_t the subtraction can overflow (undefined behaviour), and with an unsigned time_t the comparison is always false. For example, does the following program halt?
int main() {
    for (unsigned i = 1; i >= 0; --i)
        ;
}
The answer is "maybe". The C standard permits the implementation to assume that a loop like this, whose controlling expression is not a constant expression, eventually terminates (ISO 9899:2011 [6.8.5/6]).
> a separately correct design decision would have been to use 64-bits of time from the start
On a 16-bit machine when memory cost close to a dollar a byte?
What percent of memory do dates usually use?
Even in a spreadsheet, it wouldn't double the needed memory.
At worst it would probably be significantly less than 10% extra memory.
A PDP-11/20 could address (without extensions) 32 kilowords of 16-bit words, or 64 kilobytes, of RAM. Most machines shipped with twelve kilowords, or 24 kilobytes, of RAM.
Tell me again how free the creators of Unix should've been with it.
Systems which can't update their libc or other system software have already broken in various ways. For example, in 2007 the Energy Policy Act of 2005 changed the date when DST begins and ends. Any system unable to update its libc data files since about 13-15 years ago will display the incorrect time. I have a cheap $10 radio clock which I've had for two decades. It always displays the wrong time for a few weeks around the DST change.
Any system which needs to display the correct time needs to have a mechanism for updates.
The time_t wraparound is a bit deeper and more serious, granted.
Sure, but what percentage of those systems will be alive in 18 years? I'm sure it is non-zero, but it is also probably small. Regardless, those systems can't be updated, so they will have the problem either way. That says nothing about whether new systems should be given 64 bit time stamps.
Is this correct? 32-bit will be dead in 2038? This is making me really concerned... Won't it just loop around? I still use a 32-bit laptop as my main writing PC... Acer Aspire One, from the age of netbooks. Even finding an up-to-date Linux distro was kinda hard... Is there a fix for the time_t?
The article has noted the kernel will be fine, but if I read that correctly, glibc still uses 32-bit time_t. And I thought I saw somewhere that ext4 was changed to be fine until sometime around 2160 due to seconds granularity. I wonder how much this is a function of "never break user space"?
The BSDs have no problem with breaking user space; NetBSD and OpenBSD have been using 64-bit time_t for a while. And starting with 6.8, OpenBSD's default file system uses 64-bit timestamps.
With that said, I think Linux's "never break user space" is one of the main reasons Linux is so dominant. And that is making the time_t work harder to complete.
User space doesn't generally deal well with time going backwards, but the main problem is that setting an absolute timeout in a program is broken when the target time is less than the current time. An example of this is 32-bit systemd, which just hangs at boot when the RTC points to a time after 2038, as it tries to figure out if timers work.
https://www.adelielinux.org/ is an example of a Linux distro built on musl-1.2, so you can use that beyond 2038 on 32-bit hardware.
> No, it will not fail. In the worst case, from the viewpoint of a programmer it will work as expected: it will be reset to the date 1901-12-13 20:45:52.
Thank you.
For writing it's still amazing, and if it still works in 18 years that would be amazing too. I did just buy a bigger, longer-lasting battery via eBay! I don't know why, but the keyboard feels so amazing... It's like a part of me; I don't even need to think of typing, yet the words still appear! I've also put SSD storage into it and upgraded the RAM... It really just makes me very happy.
Not at all. Simply interpret time_t as unsigned, recompile and it works until 2106.
There's no need for unix timestamps before 1970 anymore.
There are also other replacement tricks, as in perl5.
You'll find a large chunk of software breaks because it incorrectly uses time_t as a generic arithmetic type. Taking the diff of two unsigned integers and checking if the result is less than zero will end up making you weep.
I wouldn't be surprised at all if there were some original embedded systems still running machinery and the like in a century from now, catastrophic events notwithstanding. The sort of low-power embedded systems likely to be running DOS don't really experience any wear. There's quite a bit of equipment in the "low-level" manufacturing industry which is around a century old, and has basically been in continuous operation since it was installed.
Modern x86 PCs still can boot DOS, no? (If not, sounds like a bug.)
The bigger issue would be ISA control cards and the like. A HN user recently posted a link of "new" ISA boards for sale, but the latest CPU support was Pentium 4 (about 20 years ago).
(I wish they would make "ultimate" mobos with all the interfaces in between too --- I mean if ISA with a Coffee Lake CPU is possible, how about a pair of IDEs, floppies, and parallel ports too...)
Are there PCI IDE+floppy+parallel(+serial+joystick+PS/2+etc) controllers out there that are bug-for-bug indiscernible from the controllers mounted on motherboards?
I think most (if not all) modern PCI-E GPUs still support this. If I am not mistaken, this is what is preventing them from working on ARM right now (Raspberry Pi 4, Apple M1, etc.), because the drivers expect specific BIOS features.
Aaaand that's why X11 has an x86 (v86) emulator in it, so Suns (circa late 90s/early 2000s) could use PC-focused graphics cards with BIOSes containing x86 initialization code.
The problem with such adapters is that, being USB-based, their timing is far from deterministic. One very common use case for ISA-based interfaces is industrial automation, where realtime determinism is a very common requirement.
Even if you assume it can boot DOS, it doesn't mean it can run DOS. No DOS has drivers for a lot of modern hardware, and they're never going to be written.
> (If not, sounds like a bug.)
BIOS is buggy? Say it ain't so! Say! It! Ain't! So!
My ~4-year-old workstation -- with a pair of 8C/16T Xeons, 128 GB of RAM, and NVMe storage in a PCIe slot (not to mention everything else) -- boots into and runs FreeDOS just fine.
Additionally, over the last several years, I've used it on probably a dozen or more different "makes and models" of enterprise server hardware to update firmware (BIOS/HBA/NIC/SSD) and can't recall experiencing any major issues while doing so (fortunately, more and more of these bootable firmware update utilities are now Linux-based!).
That's with FreeDOS 1.0, by the way. A quick check shows that it was released in 2006, making it a fair bit older than the servers I most recently used it on. For what it's worth, FreeDOS 1.2 was released four years ago and the third release candidate of 1.3 was just released this past summer. It seems that FreeDOS, at 26, is still alive and, in fact, doing quite well!
To be clear, I've not actually attempted to boot any of the other DOS systems in, well, quite a long time -- at least 10 years for MS-DOS and likely 20 years or more for DR-DOS and PC-DOS -- as I haven't had any reason to. FreeDOS has been a superior replacement since before the turn of the century.
In my case, FreeDOS is booted via (i)PXE and the "hard disk" (image), which contains the various firmware images, utilities, etc., is downloaded via HTTP and loaded as a ramdisk. In those rare instances when it's actually been needed, it is just so absolutely useful.
Honestly, I'd be a bit surprised if MS-DOS (a.k.a. PC-DOS) did not (for the most part) also "just work"(TM) ...
... and, now that I've said that, I'm fairly confident that MS-DOS 5.0 (1991, IIRC?) is also available on the netboot server. I can't confirm that at the moment but (assuming it is) now I'm quite curious so I'll try to remember to try booting my workstation into it the next time I happen to restart it.
Or in the industrial sector: Milling machines running Windows 95/98 (fun times getting it to talk with windows 10 on the network) or working with floppies are still around: Those beasts are too expensive to not run until dead.
Once a guy told me that MS-DOS would be alive in 2000, I laughed at him. It was 1994 or so. Worked in DOS-based systems (running on Windows machines, at least) until 2005, had to find a job cross-country, and tell "no" firmly a number of times, to finally get rid of them and force the clients to move on.
Clipper/xBase systems that do their job is one niche of MS-DOS apps that refuse to die. Another niche I know of, is gambling machines (one-armed bandits), for some reason people kept using DOS (at least they moved to FreeDOS by early 2000s).
As someone that wrote Clipper applications during the early 90's, there are few systems that can match its elegance for CRUD driven applications.
Doing DB operations is relatively simple, form data entry and validation is super easy, and it supports modules and OOP, and even compiles to native code.
Yeah not good for networked operations, but in many mom and pop shops it isn't needed anyway.
Even VB/Delphi for all their RAD capabilities were already much more complex than Clipper, but they messed up their migration into the GUI world and CA Visual Objects was too complex for the Clipper crowd.
Spot on with Clipper. I used it to write software for a mom and pop shop and it worked fantastic until I started running out of memory. We transitioned from DOS/Clipper to OS/2 with VX-REXX at that point. VX-REXX was also equally impressive.
In all honesty my two responses would be COBOL and simplicity-focused PHP/HTML5/{SQLite/MySQL}.
COBOL/FORTRAN/Prolog/etc type applications are out there, apparently, and if you're in the right place at the right time you can entrench reasonably effectively, at least short term.
However, these types of jobs also have their own unique pain points. This recent article (they surface reasonably frequently on here), and the comments, make for good reading: https://news.ycombinator.com/item?id=25148840
As for PHP+et al, I recommend these for their overall simplicity, relatively low learning curve, and ability to scale "well enough", for above-average values of "well enough". PHP itself isn't the most ideally designed language, but it can actually get things done. Furthermore, its reputation as a "simple" or "stupid" language, like eg VB6, can also serve to catalyze the upper bound of expected complexity of a particular solution. This has the downside of maybe a bit of isolation from truly interesting challenges, but allows for a slower/relaxed and perhaps more maintainable pace. (And there's nothing stopping you building something Facebook-sized if you needed to.)
On the client side, HTML5 is expansive and... very backward compatible with how things have been done for the past 20 years. Elitist forum commentators may make snooty noises about your use of tables, but Chrome won't. JavaScript can be a harder language to reason about, but simple enhancements like fetching bits of data won't require learning pages of theory first.
Finally, my recommendation to focus on the Web is that, well, once you grumble and context-switch to "ugh, fine, HTML+JS", everything else can roll in incrementally as you go along.
With something stuck in the 80s/90s, sure, you guarantee that you're completely isolated from the firehose... at the cost of being objectively less economically viable in industry.
Not sure how much xBase code still runs out there to be maintained or rewritten. Depends on your market.
At least here, the businesses that still ran xBase were the ones unwilling or unable to pay decently for maintenance and/or migration, and that was the situation 10+ years ago.
Haven't looked into Harbour featureset, but I feel xBase in general is simply too simple for new developments. Even mom-and-pop stores need features way beyond the Clipper/MS-DOS capabilities.
The language is outdated, too; even Delphi/Lazarus sounds too 1990ish to me, I have fond memories of Delphi and Object Pascal but it is impossible to justify using them in a world with so many good new languages.
There are 2 aspects you would need to consider - the toolset and the product market. I don't think Clipper is sold anymore although there are some alternatives (Harbour). I think the products that still use Clipper would likely either be POS or a niche utility use. I don't know the feasibility of new entries in those markets.
No, it crashes like DOS. I suppose if your application is well written and you have fully exercised the functionality that you use, it might be more stable, but "reliable" is not a word I would have associated with those systems.
If you a write a simple protected mode monitor and debug it well, your app (running under the abovementioned monitor) will be more stable than if it were running on Linux, Windows or other monstrosity.
Indeed it doesn't crash, it just hangs in some strange way that forces a reboot. Or, if you are lucky, it affects the beeper, which starts issuing random beep noises at varying volumes until you reach for the power button like crazy.
Serious question from back in my young impressionable days of What Even Is This Square Thing Anyway:
I once wanted to copy/record the layout of a particular PC's boot up sequence, and (without a camera) got the great idea (uh oh) to use the Pause key while staring at the BIOS screen.
I think... I think it worked too well.
Once I'd copied the layout onto paper and hit a key to un-pause the system, it started behaving... very weirdly. I don't remember exactly what it did, but I do remember getting really freaked out because something was very wrong.
What I do remember was playing a game (to test/see if the system would settle back down) and, as time went on, keyboard input became progressively less and less responsive and erratic, and the system started beeping like crazy, almost like the keyboard input buffer had indigestion.
IIRC, the machine hung shortly thereafter and would not turn back on (read: initialize video and POST). O.o
To this day I still wonder what happened to that poor box, and a part of me wants to buy another AT&T Globalyst 515 and film it to see if the fault(?) wasn't specific to my machine and maybe figure out exactly what happened.
My initial theory was flash corruption; it's also possible I overheated the CPU, especially given the lack of a heatsink, but it was a 486DX2-66, which didn't need a heatsink, and I'd used the Pause key in DOS plenty of times prior with no issues.
Amen! I never had a problem that wasn't bad hardware or my fault. All MS-DOS did was manage the disk, and keep track of time. There wasn't much to go wrong.
I suspect that they're alluding to the fact that DOS was more of a program loader than an OS and that once it loaded your program it was your program that was in the driver's seat. BUT there are subtle issues with drivers and networking still.
I know that, but unless you can 100% guarantee that the application was clean of any kind of DOS interrupt calls, there are zero guarantees that it wasn't DOS, one of the drivers, or the pseudo-multitasking TSRs that were the culprit; there is no way to assert that the application was the one at fault.
DOS was simple enough that it can be considered essentially bug free. And in an embedded setting there are not going to be TSRs (you just boot directly into the application) and probably no DOS calls during the runtime. You would only use disks for special operations such as software updates.
Ah such a lucky one, never crashing DOS code, I guess you never used MS-DOS 4, nor had the experience of porting code across PC-DOS, MS-DOS and DR-DOS.
Some DOS versions were better than others. But often the cause was non portable code on the applications. That wouldn't apply if the DOS version is frozen.
Consider yourself lucky. I had a spate of BSODs a couple months ago presumably after some combination of Windows 10 updates and Lenovo updates on my work-issued ThinkPad. Rolling back updates didn’t seem to help. Finally went away after some other update installed. After the 5th one in maybe 10 days I was ready to have the machine wiped and reimaged (if I didn’t smash it with a sledgehammer first).
Legacy systems exist in all walks of life, for all kinds of systems. They've solved the problem they were set up for, and so even maintaining them is often something management can be unwilling to set a budget for.
There's an oft shown, but rarely sourced, image suggesting that the US Navy food service management system was running on MS-DOS in 2011, and if that is accurate, I can't find any suggestion they have since changed that situation.
If it works, has all necessary features, no significant bugs, and still performs all its functions -- why change? Especially since a change to modern technologies is not guaranteed to produce a better result in terms of features and stability. The change is necessary if the system is unmaintainable, buggy, or can't be easily interfaced with other modern systems.
> The change is necessary if the system is unmaintainable, buggy, or can't be easily interfaced with other modern systems.
Unfortunately, all of these are frequently true of legacy systems. Only a small number are relatively bug free. Most are simply unmaintained. Few are easy to interface with.
I have no particular problem with old software and hardware. But the massive expenditure leading up to Y2K is demonstrative of the legacy problem. Let it rot until an impending crisis forces your hand, costing far more than if you had simply maintained it.
Unless you want to end up with the Adeptus Mechanicus worshipping the Machine God in the MS-DOS boxen, you need to change from time to time. You need to at least be able to open the metaphorical box and see if you can reproduce it in case of disaster recovery.
The more important the system, the higher the existential threat for the organization if you just let it be.
I do understand that it's a thankless job, so this needs to be enforced from up high.
The whole discussion is about legacy systems. What about sustainability? Of course it's cheap to build ever more powerful systems and to ruin the planet with more energy consumption and more waste. 32 bit systems are good enough for many purposes. By law of physics they could be built using less resources during production and during their lifetime.
Don't tell me it's such a small change that it does not matter. That's the attitude everywhere, and the results can be seen. Those living on the US west coast experienced the fires, not to mention Australia. Myself, I used to ski 3 months a year 20 years ago, maybe 1 month in a warm winter. Now I hardly get over 3 weeks, less than 1 week in a warm winter.
Consider that CPUs on newer nodes consume less energy. There are special embedded low-power chips (e.g. TI's 16-bit MSP430 line), but they are NOT running Linux, and they never will: they couldn't reach their power targets running Linux, not even close. So I'm not sure that keeping obsolete designs in manufacture on old process nodes is a net win. The special low-power cores are unaffected, as they are not running Linux anyway, and the ones that do run Linux aren't particularly low power.
There is a sweet spot for low-cost chips; I think it's 40nm due to mask costs, and manufacturing gets exponentially more expensive from that point on, which is the reason they haven't upgraded. Does anyone know if there are signs this cost is coming down over time?
> Consider that CPUs on newer nodes consume less energy.
I think the main point is to use 32bit on new nodes too. Using 32bit instead of 64bit is always more energy efficient unless perhaps you really need 64bit for your application, which most applications don't.
This is a good point. I think this won't happen though, because smaller nodes are that much more expensive, and the market for energy-efficient 32bit CPUs isn't nearly large enough to fund development. There likely won't be another ARMv7.
What I think will happen is that the huge amount of money that goes into developing mobile phone SoCs, which are clearly trending towards 64Bit while still requiring low power consumption, will cause that technology to trickle down and be used in ever lower cost markets as time passes and those initial costs have been recouped. The Apple M1, as a system, is clearly crazy power efficient, even at 5nm.
And again, if you need an ultra-low power sleep mode, because you only wake up once a day to send some sensor data out, its probably not running Linux in the first place.
I think you get into practical problems at some point. WikiChip lists a 28nm Cortex-A7 core at 0.48mm², with every node shrink (20nm, 14nm, 10nm, 7nm, ...) you can halve that, but adding 64-bit support to a core might only add about 10%.
You can't make chips arbitrarily small, because you end up wasting more of your wafer for the area between the dies and for the wire bond pads. If the chip size stays the same but the CPU gets smaller, most of the chip is for off-core components and the CPU has less impact on the total cost and power consumption.
You could use more complex CPU cores, or simply more cores, for better performance, but then you also need a faster memory interface (wider buses, LPDDR4+ instead of DDR3, ...) to actually make use of the performance. These lead to higher memory capacity as well, but then you can't actually use the available memory as you run into 32-bit addressing limits.
I wouldn't rule out shrinks of existing 32-bit SoC families to 22nm or below to lower cost once those processes get cheap enough and there is still demand for compatibility, but there is a good chance that 28nm SoCs is where 32-bit ends. (note that I'm not talking about the dozen or so additional ARC/Xtensa/RV32/Cortex-M/... microcontroller cores on high-end SoCs, as those are not the ones running Linux).
I'm honestly curious how that plays out in practice. 64b Intel CPUs can run in 32b mode. If we count actual switching then we would end up with ~same usage. But in 64b mode it gives us both longer pointers and offsets it with fewer instructions on vector operations.
I could not find any actual paper with real use case comparison. Have you got one?
I haven't got a reference, and would be interested in one too.
Sure, if you use any kind of 32bit mode then you'd use less energy in theory. But if we leave the responsibility to the software developer, then we all know what happens :)
That sweet spot for cost has already moved from 40nm to 28nm for most of the market. The new SAM9X60 I mentioned is still on 40nm, but it's a tiny chip and all new Cortex-A7 SoCs and things like the Ingenic X2000 are on 28nm because of overall cost for that design point.
The sweet spot for power efficiency has apparently moved from 28nm to 22nm, which uses less energy than either 28nm HKMG or 14nm FinFET and is also closing in on 28nm on cost.
There no mainstream 32-bit cores on 22nm or below (yet), so there is a good chance that the coming generation of low-cost 64-bit SoCs on 22nm will beat all 32-bit chips on performance, power consumption and cost.
> Consider that CPUs on newer nodes consume less energy.
That's not entirely true: they consume less energy in run mode, but in deep sleep the leakage currents are higher than with larger nodes. This is relevant for ultra-low-power applications, but those don't run Linux anyway.
There is a real irony that the same post simultaneously complains about a CPU using slightly more power and also about not being able to ski for three months a year any more. Ski resorts and the attendant infrastructure like lifts and piste preparation machines are prodigiously expensive in terms of energy used per person.
I was not talking about downhill skiing. I have not done more than 5 hours of downhill skiing during the last 20 years. (Did a bit more when I was younger.)
I do only cross-country skiing.
You could also call it an irony that if you mention a concept with two variants, the reader today automatically assumes the more commercial, more industrialized, more polluting variant.
> What about sustainability? Of course it's cheap to build ever more powerful systems and to ruin the planet with more energy consumption and more waste.
Are you sure that running two production lines and supporting both of them long term is more sustainable than killing one of them?
> By law of physics they could be built using less resources during production and during their lifetime.
If you want to go that way, why not even more efficient arm cortex?
I'm not sure what you're trying to say. Regardless of the number of existing manufacturers, different architecture means more designers, validation, production pipeline and other inefficiencies. And that's best case of using the same wafers as your other products and not building a separate literal production line.
> 32 bit systems are good enough for many purposes
For all purposes actually in my case. The best notebook I've ever bought was/is the HP EliteBook 2530p; it runs 12 hours on battery, more than I need for my work day. I bought a new one in 2009 and a couple of used ones and parts since then. Most of my software and firmware development happens on this machine (Linux i386; I also have to support macOS though). It always works, never had downtime (I can replace parts within minutes if need be). Even Netflix works. And of course also all the embedded systems I produce are 32 bit, about half of them with Linux (the others are microcontrollers).
> I've often noticed -- and not just on here -- that when commenters prefix "just" to a sentence, the work involved is usually non-trivial.
That was the joke, yes. The post I was responding to was complaining about "forced" obsolescence while there is nothing of the sort. Upstream maintainers want to drop some legacy code so they can spend more time on newer machines. GP apparently does not want to upgrade and is now complaining that other people don't do the non-trivial work of keeping 32 bit linux updated for free.
Maintaining a 32-bit Linux fork which includes just security patches and (critical) bugfixes would actually be a lot less work than maintaining 32-bit support through all new feature development. A "32-bit LTS" fork of Linux like this would probably be pretty viable.
Keep in mind that in this space, "security patch" might come in the form of rewriting the architecture specific virtual memory implementation... https://lwn.net/Articles/741878/
You'd hope that we're finished with working around all the CPU bugs now, but yeah, it's not necessarily trivial, and the longer the fork exists, the harder it will be to backport patches. But I can see how you can build a business maintaining a Linux LTS (there are businesses offering Rails LTS, Microsoft offers updates for Windows XP if you pay), so there's some precedent here.
It's also nontrivial for the people currently maintaining the Linux kernel. I interpret the "just" as a way to hint that instead of forcing someone else to keep doing something for free, you could just do it yourself.
You could argue the same for 16-bit CPUs. If the vast majority of CPUs sold are 64-bit, we will get to a point (we are already halfway there) where 64-bit CPUs could be cheaper (more competition and variety) and even more energy efficient.
While 16-bit CPUs are good for many purposes, these are not the same purposes for which you would ever run Linux, or where 32 bit is currently being replaced by 64 bit. So the analogy is not helpful.
Maybe 64-bit CPUs can get more resource efficient than older 32-bit models. But that's a misdirected use of resources if you use such technology to make 32-bit CPUs obsolete. With the same technology, 32 bit will always be less resource-demanding than 64 bit. And the claim was that 32 bit has so many use cases (basically all phones and many PCs) that economies of scale are possible.
Of course the nonsense model that you don't pay for Internet services directly, but use Javascript and a lot of other bloat software to run advertisements on your phone to pay via a massive detour is not an energy-efficient approach.
It doesn't need to be maintained in the same Linux upstream to be serious.
I am not familiar with it to say whether it really wasn't that useful for inherent reasons or whether vendors in that space were just more lucky to defend their proprietary solutions than the (mainline) Linux competitors.
Nitpick/unclear area: why doesn't this highlight the legacy of the 386 and the i386 architecture?
> Linux was first written as a desktop system for IBM PC compatibles, ...
> The earliest i386, ... processors all got phased out over time, ...
> The table shows the 32-bit desktop platforms that proved popular enough to make it into mainline Linux and stay there, supported mainly by loyal hobbyists:
> ...
> Platform: IBM compatible PC
> Architectures: ia32
> Earliest supported machine: PS/2 model 70 486
Wha...? Where is i386 in the table?
"Nuke 386-SX/DX support" was submitted to 3.8 in Dec 2012 (http://lkml.iu.edu/hypermail/linux/kernel/1212.1/01152.html), but explicit support existed up to that point since Linux's inception in 1991. I mean, Linux was written for the 386! That was the chip you had to be using since Linux's memory manager was built on top of its MMU.
I accept I may be completely misreading/misunderstanding something here.
Although I realize the table and most of the article is primarily concerned with 32-bit CPUs, it's unfortunate that the article fails to mention 64-bit DEC Alpha, which was the second architecture after i386 supported by Linux. And unlike i386 it's still supported.
When the article says
> Linux was first written as a desktop system for IBM PC compatibles, and was eventually ported to almost every other desktop platform available in the 1990s, including a lot of the early Unix workstations across all architectures.
Because of the juxtaposition with the subsequent platform table, I think a lot of people might be misled into thinking that both 64-bit CPUs and 64-bit Linux were late to the game. But 64-bit Linux existed as a first-class environment before it supported any other 32-bit architectures like MIPS, PowerPC, or SPARC; before Linux gained any commercial attention (DEC notwithstanding); and AFAIU before there was any kind of serious commitment to syscall ABI stability.
GCC supported "long long" on 32-bit architectures since at least 1989, before Linux even existed.[1] I think some Unix environments supported 64-bit off_t on 32-bit architectures before Linux was ever ported to Alpha, and possibly before Linux existed. In any event, using long long for off_t was a done deal by January 1995 at the first Large File Summit (LFS) meeting. The first two slides at https://web.archive.org/web/20000903063117/http://ftp.sas.co... suggest using long long for off_t was assumed; the year-long summit was apparently concerned with all the other details. That suggests long long was a relatively well-worn and mature type in Unix C environments.
Moreover, Linux didn't even get LFS support until several years after the LFS POSIX API was formally published in 1996.[2] It could have implemented a similar solution for time_t and time-related syscalls at the same time it did for off_t and file operations. LFS support has been the default for such a long time even on Linux that today nobody even questions whether 32-bit binaries support 64-bit file operations. Instead Linux apparently waited another 15+ years just to implement that exact solution.
In other words, the potential to recognize and address the representation of time_t--and 64-bit types on 32-bit architectures more generally--existed basically from the very beginning of Linux and only grew more pressing and easier to implement, at least in terms of toolchain support, with each subsequent year. That's noteworthy. Not because it proves the Linux developers were mistaken or had the wrong priorities, but because it shows they weren't blindsided by the issue. The history can provide guidance for people to navigate similar dilemmas today, regardless of whether one would have left the issue on the table until later in the day, like Linux did, or chosen to address it earlier on.
[1] In 1989 GCC supported long long by calling into a library, similar to what it does for some intrinsics when not natively supported. I'm not sure when GCC began open coding 64-bit operations (i.e. generating inline assembly for all operations on 64-bit types).
I tried not to talk too much about 64-bit architectures, but that's a good point. Indeed not just Alpha but also MIPS R4000, UltraSPARC, PA-8000 were already around in the 1990s when the 32-bit kernel support got merged, while ppc64, s390x and TileGX hardware came a little later than the corresponding 32-bit ports.
For time_t, I remember it first getting discussed seriously among kernel developers around 2010 as it became clearer that we had misjudged how long 32-bit systems would be around for, how much work it would be to fix it, and how long before 2038 things would fall apart from bugs.
Before then, the general thinking was we could either delay dealing with it until the 2030s or 32-bit Linux would just not be there any more.
I would consider the LFS transition a user space failure, as glibc never made that the default and Debian still builds some packages with 32-bit off_t to avoid breaking the interface between libraries. As late as 2018, glibc was still merging architecture ports (csky) that default to a 32-bit off_t, despite the kernel having 64-bit off_t since 1994 (linux-1.1.46) before any non-i386 ports were added, and all new architectures in the kernel only supporting 64-bit off_t since 2011.
The article mentions 32-bit Raspbian which is moving to 64-bit to better support the Raspberry Pi 4 with 8 GB of RAM. Both the RPi 3 and 4 have 64-bit support but earlier models, including the popular and still-on-sale Zero, do not support 64-bit. This platform will likely stick around for a long time; we need to continue to fix issues like this size_t/off_t one in s3fs:
We just got a Raspberry Pi 400 in the mail yesterday to use as a media center PC. From what I understand from reading around, Netflix will only work on 32-bit because ChromeOS is 32-bit and they steal the Widevine blob out of ChromeOS to make Netflix and Amazon Prime work.
The other kind of lame consequence of that is I have to use Chromium rather than Firefox, which irks me just a little.
Other than that, the RaspberryPi 400 is a really nicely put together little machine.
Not that it matters tons for a media center, but, for general knowledge:
Chromium variants with the Google parts removed or deactivated are both more secure AND more private.
I think 32-bit is still recommended on most Rpi 4 installs, because of greater software compatibility (across the established ecosystem). So it's going to change slowly.
Applications that would have difficulty moving to a 64-bit time_t, typically for object-size and alignment in legacy protocols, can often be recompiled to use an unsigned 32-bit value there, taking them up into 2106. So, a 32-bit kernel, typically forked from 2.6 or 3.x and with unsigned 32-bit time_t, is a viable alternative for rescuing legacy embedded uses.
Probably by 2106 the machines will be responsible for coding, and it will then be their problem; or, civilization might collapse, so any people remaining will have bigger worries. Or (best case) both.
I think a core problem is that merely changing the type and recompiling is likely to cause other issues with code that is using time_t--for right or for wrong--to store the result of subtracting two other time_t values; it also could easily be that a piece of software just sort of looks like it works after that edit but then dies due to some less-used corner case.
(And, FWIW: my expectation is that civilization will have collapsed because of the machines taking over, and the only thing that will save humanity is when they all keel over and die on February 7, 2106.)
You go through and subtract 50 years from all date values you have in storage, change the date display code to show everything based on 2020, and you've kicked the can on the problem down to 2088.
Because you're breaking backwards compatibility by changing the interpretation of those timestamp fields. If you do that, you might as well switch to a 64-bit time_t.
Treating a 32-bit timestamp field as unsigned seconds since the epoch at least keeps backwards compatibility with existing software up until 2038.
For handling legacy protocols and file formats, can't you just use uint32_t for the timestamp, and then convert that to time_t for the in-memory representation. No need to fork the kernel.
That's probably what you needed to do anyway, if you want the code to compile correctly on systems with 32 and 64-bit time_t.