It is well understood that many 32-bit systems will stop working in the year 2038 when the 32-bit time_t overflows.
On the other hand, MS-DOS, which internally uses a 1980 year-epoch with a small year counter (and does not use a seconds-based epoch, unlike Unixes), will be fine up to 2107, and Windows NT's timekeeping has a range of approximately 30000 years. Despite the elegance of the 1970 seconds-epoch, it's interesting to see how other systems designed only a little later chose to do something else, and as a result became quite a bit more future-proof.
The time_t limit in Linux is inherited from UNIX, which baked it in circa 1970, back when it was developed on PDP-8s (manufacturing lifetime: about 50,000 machines total) so Ken Thompson could write a space wars game on an unused machine with no OS. And then migrated to PDP-11s circa 1972 when someone at Bell Labs wanted a typesetting system.
Back then, digital computers were only 25-28 years old. There was no such thing as legacy hardware back then, in our terms. Designing for a 68 year lifespan -- nearly three times the entire duration of the industry to that date -- probably seemed excessive!
Nit: PDP-7, not PDP-8. Cost 4 times as much and sold only 120 units, according to Wikipedia. Looks like 12-bit PDPs were a lot more affordable than 18-bit ones.
MS-DOS was released in 1981, UNIX considerably before then. Windows NT was not released until 1993. A decade has always been a long time in software, and it used to be longer.
GP's point still stands. MS-DOS fits that range without 64 bits; a separately correct design decision would have been to use 64 bits of time from the start. Poor software casts/stores time_t in int because that was good enough for decades. If it hadn't fit an int from the start, they wouldn't be doing that (or if an int with different semantics had been enough, it would also have been OK for them to keep doing that).
> GP's point still stands. MS-DOS fits that range without 64-bits
32-bit Unix timestamps would have a broader range than MS-DOS timestamps if they made the same sacrifice in resolution and only had 2-second precision rather than single-second precision [1]. And if Unix timestamps additionally used 32-bit unsigned numbers to sacrifice the ability to represent timestamps before the beginning of the epoch, then the year 2038 problem would instead be more of a year 2242 problem.
I work with a fully POSIX-compliant OS that uses a 32-bit unsigned int time_t. Unfortunately, most software these days (including the runtimes of major production languages) assumes incorrectly that time_t is a signed int, because that's what it is in the GNU C library used on GNU/Linux, and ends up using it as a general arithmetic type. The end result is undefined behaviour. But hey, it just happens to work on Linux so it must be correct. Grrr.
Why would it result in undefined behavior? Overflow behavior is only defined on unsigned types.
Also, why should people care about your special snowflake OS? The C standard doesn't even specify that time_t has to be an integral type, but anyone who expects code to cope with a floating point time_t is deluded. Why is your case any different?
My special snowflake OS is embedded in millions of devices all over the world in safety-critical applications. It's not broken, it's the third-party software that incorrectly assumes time_t is a general integer arithmetic type that's broken.
The ISO C11 standard only says that time_t is a real type that holds values >= 0. The minute you start performing arithmetic on that type and checking if the result is less than zero you're entering undefined behaviour territory. For example, does the following program halt?
int main(void) {
    for (unsigned i = 1; i >= 0; --i)
        ;
}
The answer is "maybe". The condition i >= 0 is always true for an unsigned i, but the C standard permits the implementation to assume that a loop like this terminates, so the behaviour is effectively undefined: ISO 9899:2011 [6.8.5/6].
> a separately correct design decision would have been to use 64-bits of time from the start
On a 16-bit machine when memory cost close to a dollar a byte?
what percent of memory do dates usually use?
Even in a spreadsheet, it wouldn't double the needed memory.
At worst it would probably be significantly less than 10% extra memory.
A PDP-11/20 could address (without extensions) 32 kilowords of 16-bit words, or 64 kilobytes, of RAM. Most machines shipped with twelve kilowords, or 24 kilobytes, of RAM.
Tell me again how free the creators of Unix should've been with it.
Systems which can't update their libc or other system software have already broken in various ways. For example, in 2007 the Energy Policy Act of 2005 changed the date when DST begins and ends. Any system unable to update its libc data files since about 13-15 years ago will display the incorrect time. I have a cheap $10 radio clock which I've had for two decades. It always displays the wrong time for a few weeks around the DST change.
Any system which needs to display the correct time needs to have a mechanism for updates.
The time_t wraparound is a bit deeper and more serious, granted.
Sure, but what percentage of those systems will be alive in 18 years? I'm sure it is non-zero, but it is also probably small. Regardless, those systems can't be updated, so they will have the problem either way. That says nothing about whether new systems should be given 64 bit time stamps.
Is this correct? 32-bit will be dead in 2038? This is making me really concerned... Won't it just loop around? I still use a 32-bit laptop as my main writing PC... Acer Aspire One, from the age of netbooks. Even finding an up-to-date Linux distro was kinda hard... Is there a fix for the time_t?
The article has noted that the kernel will be fine, but if I read it correctly, glibc still uses a 32-bit time_t. And I thought I saw somewhere that ext4 was changed to be fine until sometime around 2160 due to seconds granularity. I wonder how much this is a function of "never break user space"?
The BSDs have no problem with breaking user space; NetBSD and OpenBSD have been using a 64-bit time_t for a while. And starting with 6.8, OpenBSD's default file system uses 64-bit timestamps.
With that said, I think Linux's "never break user space" is one of the main reasons Linux is so dominant. And that is making the time_t work harder to complete.
User space doesn't generally deal well with time going backwards, but the main problem is that setting an absolute timeout in a program is broken when the target time is less than the current time. An example of this is 32-bit systemd, which just hangs at boot when the RTC points to a time after 2038, as it tries to figure out if timers work.
https://www.adelielinux.org/ is an example of a Linux distro built on musl-1.2, so you can use that beyond 2038 on 32-bit hardware.
> No, it will not fail. In the worst case, from the viewpoint of a programmer, it will work as expected: it will be reset to the date 1901-12-13 20:45:52:
Thank you.
For writing it's still amazing, and if it still works in 18 years that would be amazing. I did just buy a bigger, longer-lasting battery via eBay! I don't know why, but the keyboard feels so amazing... It's like a part of me; I don't even need to think of typing, yet the words still appear! I've also put SSD storage into it and upgraded the RAM... It really just makes me very happy.
Not at all. Simply interpret time_t as unsigned, recompile and it works until 2106.
There's no need for unix timestamps before 1970 anymore.
There are also other replacement tricks, as in perl5.
You'll find a large chunk of software breaks because it incorrectly uses time_t as a generic arithmetic type. Taking the diff of two unsigned integers and checking if the result is less than zero will end up making you weep.
I wouldn't be surprised at all if there were some original embedded systems still running machinery and the like in a century from now, catastrophic events notwithstanding. The sort of low-power embedded systems likely to be running DOS don't really experience any wear. There's quite a bit of equipment in the "low-level" manufacturing industry which is around a century old, and has basically been in continuous operation since it was installed.
Modern x86 PCs still can boot DOS, no? (If not, sounds like a bug.)
The bigger issue would be ISA control cards and the like. An HN user recently posted a link to "new" ISA boards for sale, but the latest CPU support was Pentium 4 (about 20 years ago).
(I wish they would make "ultimate" mobos with all the interfaces in between too --- I mean if ISA with a Coffee Lake CPU is possible, how about a pair of IDEs, floppies, and parallel ports too...)
Are there PCI IDE+floppy+parallel(+serial+joystick+PS/2+etc) controllers out there that are bug-for-bug indiscernible from the controllers mounted on motherboards?
I think most (if not all) modern PCI-E GPUs still support this. If I am not mistaken, this is what is preventing them from working on ARM right now (Raspberry Pi 4, Apple M1, etc.), because the drivers expect specific BIOS features.
Aaaand that's why X11 has an x86 (v86) emulator in it, so Suns (circa late 90s/early 2000s) could use PC-focused graphics cards with BIOSes containing x86 initialization code.
The problem with such adapters is that, being USB-based, their timing is far from deterministic. One very common use case for ISA-based interfaces is industrial automation, where realtime determinism is a very common requirement.
Even if you assume it can boot DOS, it doesn't mean it can run DOS. No DOS has drivers for a lot of modern hardware, and they're never going to be written.
> (If not, sounds like a bug.)
BIOS is buggy? Say it ain't so! Say! It! Ain't! So!
My ~4-year-old workstation -- with a pair of 8C/16T Xeons, 128 GB of RAM, and NVMe storage in a PCIe slot (not to mention everything else) -- boots into and runs FreeDOS just fine.
Additionally, over the last several years, I've used it on probably a dozen or more different "makes and models" of enterprise server hardware to update firmware (BIOS/HBA/NIC/SSD) and can't recall experiencing any major issues while doing so (fortunately, more and more of these bootable firmware update utilities are now Linux-based!).
That's with FreeDOS 1.0, by the way. A quick check shows that it was released in 2006, making it a fair bit older than the servers I most recently used it on. For what it's worth, FreeDOS 1.2 was released four years ago and the third release candidate of 1.3 was just released this past summer. It seems that FreeDOS, at 26, is still alive and, in fact, doing quite well!
To be clear, I've not actually attempted to boot any of the other DOS systems in, well, quite a long time -- at least 10 years for MS-DOS and likely 20 years or more for DR-DOS and PC-DOS -- as I haven't had any reason to. FreeDOS has been a superior replacement since before the turn of the century.
In my case, FreeDOS is booted via (i)PXE and the "hard disk" (image), which contains the various firmware images, utilities, etc., is downloaded via HTTP and loaded as a ramdisk. In those rare instances when it's actually been needed, it is just so absolutely useful.
Honestly, I'd be a bit surprised if MS-DOS (a.k.a. PC-DOS) did not (for the most part) also "just work"(TM) ...
... and, now that I've said that, I'm fairly confident that MS-DOS 5.0 (1991, IIRC?) is also available on the netboot server. I can't confirm that at the moment but (assuming it is) now I'm quite curious so I'll try to remember to try booting my workstation into it the next time I happen to restart it.
Or in the industrial sector: Milling machines running Windows 95/98 (fun times getting it to talk with windows 10 on the network) or working with floppies are still around: Those beasts are too expensive to not run until dead.
Once a guy told me that MS-DOS would be alive in 2000, I laughed at him. It was 1994 or so. Worked in DOS-based systems (running on Windows machines, at least) until 2005, had to find a job cross-country, and tell "no" firmly a number of times, to finally get rid of them and force the clients to move on.
Clipper/xBase systems that do their job are one niche of MS-DOS apps that refuse to die. Another niche I know of is gambling machines (one-armed bandits); for some reason people kept using DOS (at least they moved to FreeDOS by the early 2000s).
As someone that wrote Clipper applications during the early 90's, there are few systems that can match its elegance for CRUD driven applications.
Doing DB operations is relatively simple, form data entry and validation is super easy, and it supports modules and OOP, and even compiles to native code.
Yeah not good for networked operations, but in many mom and pop shops it isn't needed anyway.
Even VB/Delphi for all their RAD capabilities were already much more complex than Clipper, but they messed up their migration into the GUI world and CA Visual Objects was too complex for the Clipper crowd.
Spot on with Clipper. I used it to write software for a mom and pop shop and it worked fantastic until I started running out of memory. We transitioned from DOS/Clipper to OS/2 with VX-REXX at that point. VX-REXX was also equally impressive.
In all honesty my two responses would be COBOL and simplicity-focused PHP/HTML5/{SQLite/MySQL}.
COBOL/FORTRAN/Prolog/etc type applications are out there, apparently, and if you're in the right place at the right time you can entrench reasonably effectively, at least short term.
However, these types of jobs also have their own unique pain points. This recent article (they surface reasonably frequently on here), and the comments, make for good reading: https://news.ycombinator.com/item?id=25148840
As for PHP+et al, I recommend these for their overall simplicity, relatively low learning curve, and ability to scale "well enough", for above-average values of "well enough". PHP itself isn't the most ideally designed language, but it can actually get things done. Furthermore, its reputation as a "simple" or "stupid" language, like eg VB6, can also serve to catalyze the upper bound of expected complexity of a particular solution. This has the downside of maybe a bit of isolation from truly interesting challenges, but allows for a slower/relaxed and perhaps more maintainable pace. (And there's nothing stopping you building something Facebook-sized if you needed to.)
On the client side, HTML5 is expansive and... very backward compatible with how things have been done for the past 20 years. Elitist forum commentators may make snooty noises about your use of tables, but Chrome won't. JavaScript could be a harder language to reason about, especially for simple enhancements like fetching bits of data which won't require learning pages of theory first.
Finally, my recommendation to focus on the Web is that, well, once you grumble and context-switch to "ugh, fine, HTML+JS", everything else can roll in incrementally as you go along.
With something stuck in the 80s/90s, sure, you guarantee that you're completely isolated from the firehose... at the cost of being objectively less economically viable in industry.
Not sure how much xBase code still runs out there to be maintained or rewritten. Depends on your market.
At least here, the businesses that still ran xBase were the ones unwilling or unable to pay decently for maintenance and/or migration, and that was the situation 10+ years ago.
Haven't looked into Harbour featureset, but I feel xBase in general is simply too simple for new developments. Even mom-and-pop stores need features way beyond the Clipper/MS-DOS capabilities.
The language is outdated, too; even Delphi/Lazarus sounds too 1990ish to me, I have fond memories of Delphi and Object Pascal but it is impossible to justify using them in a world with so many good new languages.
There are 2 aspects you would need to consider - the toolset and the product market. I don't think Clipper is sold anymore although there are some alternatives (Harbour). I think the products that still use Clipper would likely either be POS or a niche utility use. I don't know the feasibility of new entries in those markets.
No, it crashes like DOS. I suppose if your application is well written and you have fully exercised the functionality that you use, it might be more stable, but "reliable" is not a word I would have associated with those systems.
If you a write a simple protected mode monitor and debug it well, your app (running under the abovementioned monitor) will be more stable than if it were running on Linux, Windows or other monstrosity.
Indeed it doesn't crash; it just hangs in some strange way that forces a reboot. Or, if you are lucky, it affects the beeper, which starts issuing random beeps at varying volumes until you scramble for the power button.
Serious question from back in my young impressionable days of What Even Is This Square Thing Anyway:
I once wanted to copy/record the layout of a particular PC's boot up sequence, and (without a camera) got the great idea (uh oh) to use the Pause key while staring at the BIOS screen.
I think... I think it worked too well.
Once I'd copied the layout onto paper and hit a key to un-pause the system, it started behaving... very weirdly. I don't remember exactly what it did, but I do remember getting really freaked out because something was very wrong.
What I do remember was playing a game (to test/see if the system would settle back down) and, as time went on, keyboard input became progressively less and less responsive and erratic, and the system started beeping like crazy, almost like the keyboard input buffer had indigestion.
IIRC, the machine hung shortly thereafter and would not turn back on (read: initialize video and POST). O.o
To this day I still wonder what happened to that poor box, and a part of me wants to buy another AT&T Globalyst 515 and film it to see if the fault(?) wasn't specific to my machine and maybe figure out exactly what happened.
My initial theory was flash corruption; it's also possible I overheated the CPU, especially given the lack of a heatsink, but it was a 486DX2-66, which didn't need a heatsink, and I'd used the Pause key in DOS plenty of times prior with no issues.
Amen! I never had a problem that wasn't bad hardware or my fault. All MS-DOS did was manage the disk, and keep track of time. There wasn't much to go wrong.
I suspect that they're alluding to the fact that DOS was more of a program loader than an OS and that once it loaded your program it was your program that was in the driver's seat. BUT there are subtle issues with drivers and networking still.
I know that, but unless you can 100% guarantee that the application was clean of any kind of DOS interrupt calls, there are zero guarantees that it wasn't DOS, one of the drivers, or the pseudo-multitasking TSRs that was the culprit; there is no way to assert that the application was the one at fault.
DOS was simple enough that it can be considered essentially bug free. And in an embedded setting there are not going to be TSRs (you just boot directly into the application) and probably no DOS calls during the runtime. You would only use disks for special operations such as software updates.
Ah such a lucky one, never crashing DOS code, I guess you never used MS-DOS 4, nor had the experience of porting code across PC-DOS, MS-DOS and DR-DOS.
Some DOS versions were better than others. But often the cause was non portable code on the applications. That wouldn't apply if the DOS version is frozen.
Consider yourself lucky. I had a spate of BSODs a couple months ago presumably after some combination of Windows 10 updates and Lenovo updates on my work-issued ThinkPad. Rolling back updates didn’t seem to help. Finally went away after some other update installed. After the 5th one in maybe 10 days I was ready to have the machine wiped and reimaged (if I didn’t smash it with a sledgehammer first).
Legacy systems exist in all walks of life, for all kinds of systems. They've solved the problem they were set up for, and so even maintaining them is often something management can be unwilling to set a budget for.
There's an oft shown, but rarely sourced, image suggesting that the US Navy food service management system was running on MS-DOS in 2011, and if that is accurate, I can't find any suggestion they have since changed that situation.
If it works, has all necessary features, no significant bugs, and still performs all its functions, why change? Especially since a change to modern technologies is not guaranteed to produce a better result in terms of features and stability. The change is necessary if the system is unmaintainable, buggy, or can't be easily interfaced with other modern systems.
> The change is necessary if the system is unmaintainable, buggy, or can't be easily interfaced with other modern systems.
Unfortunately, all of these are frequently true of legacy systems. Only a small number are relatively bug free. Most are simply unmaintained. Few are easy to interface with.
I have no particular problem with old software and hardware. But the massive expenditure leading up to Y2K is demonstrative of the legacy problem. Let it rot until an impending crisis forces your hand, costing far more than if you had simply maintained it.
Unless you want to end up with the Adeptus Mechanicus worshipping the Machine God in the MS-DOS boxen, you need to change from time to time. You need to at least be able to open the metaphorical box and see if you can reproduce it in case of disaster recovery.
The more important the system, the higher the existential threat for the organization if you just let it be.
I do understand that it's a thankless job, so this needs to be enforced from up high.