Hacker News

A signed 16-bit integer rolls over after 32767, which is a little after 9 hours and 6 minutes (if we're tracking seconds). An unsigned 16-bit integer rolls over at 18:12 and change. Neither fits the observed 11 hour mark.


HW has often used counters ticking faster than once per second, so I was thinking of something like a 32-bit counter of elapsed 10 µs intervals, which comes out to just over 11 hours.

Hard to say though since no obvious counter comes out to 11 hours. That being said, a counter of some kind going wrong would be the most obvious reason to experience an issue. Maybe there’s a WiFi frame counter that participates in ratcheting the key used to encrypt each frame or something.


Same issue.

A 32-bit unsigned integer used as a 10 µs counter would roll over after ~42,950 seconds. That's just shy of 12 hours, at 11:55 and change, not just over 11.
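For anyone checking the arithmetic, here's a rough sketch (the helper name is my own, not from any codebase discussed here) of how these rollover periods fall out:

```python
def rollover_seconds(bits: int, tick_seconds: float) -> float:
    """Seconds before an unsigned counter of `bits` bits wraps around."""
    return (2 ** bits) * tick_seconds

# 16-bit counter of 1 s ticks: 65536 s, about 18 h 12 m
print(rollover_seconds(16, 1.0))
# 32-bit counter of 10 us ticks: ~42949.67 s, about 11 h 55 m
print(rollover_seconds(32, 10e-6))
```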


I worked on an embedded system for years that used that exact representation for time. The "fun" part is that it was used to represent real time since an epoch many years ago, and the field was compulsory. Not only would the values roll over every 12 hours, but the system would have to function for a while before even synchronizing to an external clock. Every so often, somebody asks why the next-gen system has four optional 64-bit time fields, and my eye twitches while I rant about time synchronization for half an hour. Then they ask how to compute the time difference between two independent events, and we're literally inside an XKCD comic.


I hope you ask them where the observer's frame is and how fast it's moving.


how about some floating point sizes?

then again, thinking about code with timestamps in float makes me scared


I once wrote a fuzzer for some code that serialized and deserialized a particular data structure that included a timestamp -- the complexity cannot be overstated.

Basically every possible edge case came up: floats can be infinity, negative infinity, negative zero, NaN, and even subnormal. The bulk of the problems occurred because we deserialized the float timestamp and proceeded to do unchecked math on it, then expected it to be a normal value.
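A minimal sketch of the kind of validation that was missing (the function and names are mine, not the original code): classify a deserialized float before doing arithmetic on it.

```python
import math
import sys

def classify(x: float) -> str:
    """Bucket a deserialized float into the edge cases a fuzzer will find."""
    if math.isnan(x):
        return "nan"
    if math.isinf(x):
        return "inf"
    if x == 0.0:
        # -0.0 compares equal to 0.0; the sign bit tells them apart
        return "negative zero" if math.copysign(1.0, x) < 0 else "zero"
    if abs(x) < sys.float_info.min:
        return "subnormal"
    return "normal"

for ts in (float("nan"), float("-inf"), -0.0, 5e-324, 1700000000.0):
    print(ts, "->", classify(ts))
```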


What is the problem with subnormals? They're just numbers really close to zero


The way they're represented in binary is distinct: the exponent field is all zeros and there's no implicit leading 1, so you need an additional edge case when you're decoding them.
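For the curious, a rough sketch of that decoding edge case for IEEE 754 doubles (hand-rolled for illustration, not any particular codebase's decoder): when the exponent field is all zeros, the implicit leading 1 disappears and the exponent is pinned at -1022.

```python
import struct

def decode_double(bits: int) -> float:
    """Decode a 64-bit IEEE 754 pattern by hand, including the subnormal case."""
    sign = -1.0 if (bits >> 63) & 1 else 1.0
    exp = (bits >> 52) & 0x7FF
    frac = bits & ((1 << 52) - 1)
    if exp == 0x7FF:                 # all-ones exponent: infinities and NaNs
        return sign * float("inf") if frac == 0 else float("nan")
    if exp == 0:                     # subnormal: no implicit leading 1
        return sign * frac * 2.0 ** (-1022 - 52)
    return sign * (1.0 + frac * 2.0 ** -52) * 2.0 ** (exp - 1023)

# Round-trip the smallest positive subnormal through raw bits
smallest = struct.unpack("<Q", struct.pack("<d", 5e-324))[0]
print(decode_double(smallest))  # 5e-324
```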


I swear John Carmack said somewhere "Time should be a double that starts from 1 billion" or something, for games or VR or something.

Of course when I search on DDG I only get "wow the fast inverse square root"
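If the quote is real, the rationale would be that a double's resolution depends on its magnitude, and starting the clock near a billion keeps that resolution roughly constant over a long session. A quick check with Python's `math.ulp`:

```python
import math

# The gap between adjacent doubles grows with magnitude.
# Near 1e9 (seconds), the resolution is 2**-23 s, about 119 ns,
# and it stays in that ballpark as the clock keeps counting up.
for t in (1.0, 1e6, 1e9, 2e9):
    print(f"{t:>12}: ulp = {math.ulp(t)}")
```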



How!?


Not sure how mrmincent did it, but when I searched DDG, it didn't come up with anything. I then switched to Brave Search and it was in the middle of the first page, just past the fold. I specifically searched for:

  john carmack time double


I remember seeing a screenshot in a programming subreddit of a financial institution storing monetary data as floats, so I wouldn't be surprised.



