An IANA timezone uniquely refers to the set of regions that not only share the same current rules and projected future rules for civil time, but also share the same history of civil time since 1970-01-01 00:00+0. In other words, this definition is more restrictive about which regions can be grouped under a single IANA timezone, because if a given region changed its civil time rules at any point since 1970 in a way that deviates from the history of civil time for other regions, then that region can't be grouped with the others.
The way Google implemented leap seconds wasn't by sticking a 23:59:60 second at the end of 31st Dec. The way they did it was more interesting.
What they did instead was to "smear" it across the day, by adding 1/86,400 of a second to every second on 31st Dec. A skew of 1/86,400 (about 11.6 ppm) is well within the frequency error NTP can tolerate, so computers could carry on doing what they do without throwing errors.
Edit: They smeared it from the noon before the leap second to the noon after, i.e. 31st Dec 12:00 to 1st Jan 12:00.
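For intuition, here is a minimal sketch of the shape of such a noon-to-noon linear smear. This is not Google's implementation; the dates and the 24-hour window are assumptions for illustration only:

```python
from datetime import datetime, timezone

# Hypothetical 24-hour linear smear around the 2016-12-31 leap second,
# running from noon UTC before the leap second to noon UTC after it.
SMEAR_START = datetime(2016, 12, 31, 12, 0, tzinfo=timezone.utc)
SMEAR_SECONDS = 86_400  # length of the smear window in (smeared) seconds

def smeared_fraction(now: datetime) -> float:
    """Fraction of the extra leap second already blended in at `now` (0.0 to 1.0)."""
    elapsed = (now - SMEAR_START).total_seconds()
    return min(max(elapsed / SMEAR_SECONDS, 0.0), 1.0)

# Halfway through the window (midnight UTC) half of the leap second has been added.
print(smeared_fraction(datetime(2017, 1, 1, 0, 0, tzinfo=timezone.utc)))  # 0.5
```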
Time is a mess. Always. The author only scratched the surface of the issues. Even if we exclude the relativistic time dilation that affects GPS/GNSS satellites - regardless of whether it comes from the difference in gravitational pull or from their speed relative to the ground - it's still a mess.
Timezones; sure. But what about before timezones came into use? Or even halfway through - which timezone, considering Königsberg used CET when it was part of Germany but switched to EET after it became Russian? There are even countries with timezones differing by 15 minutes.
And don't get me started on daylight saving time. There's been at least one instance where DST was - and was not - in use in Lebanon at the same time! Good luck booking an appointment...
Not to mention the transition from the Julian calendar to the Gregorian, which took place over many, many years - at different times in different countries - as defined by the country borders of that era...
We've even had countries that forgot to insert a leap day in certain years, causing March 1 to fall on different days altogether for a couple of years.
Time is a mess. It is, always has been, and always will be.
The author covers how IANA handles Königsberg; it is logically its own timezone.
I agree that time is a mess. And the 15 minute offsets are insane and I can't fathom why anyone is using them.
zoneinfo does in practice hold the historical info before 1970 when it can do so easily in its framework: https://en.wikipedia.org/wiki/UTC%2B01:24
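You can poke at that historical data directly. A minimal Python sketch, assuming a reasonably recent tz database is available; the exact abbreviations and offsets printed depend on the tzdata version installed:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; may also need the tzdata package

kgd = ZoneInfo("Europe/Kaliningrad")
for year in (1938, 1946, 1995, 2024):
    # Midday on 1 June, to see what offset the zone used in each era.
    dt = datetime(year, 6, 1, 12, 0, tzinfo=kgd)
    print(year, dt.tzname(), dt.utcoffset())
```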
Yep. Fortunately, a lot of apps can get by with just local civil time and an OS-set timezone. It’s much less common that they need to worry about leap seconds, etc. And many also don’t care about millisecond granularity, etc. If your app does care about all that, however, things become a mess quite quickly.
Two things that aren't really covered:
- system clock drift. Google's instances have accurate timekeeping using atomic clocks in the datacenter, and leap seconds smeared over a day. For accurate duration measurements, this may matter.
- consider how the time information is consumed. For a photo sharing site, the best info to keep with each photo is a location and a local date-time. Then even if some of this is missing, a New Year's Eve photo will still be close to midnight without considering its timezone or location. I had this case and opted for string representations that wouldn't automatically be adjusted; converting to the viewer's local time isn't useful.
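A tiny sketch of that second point; the field names and format are hypothetical, not from the original system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PhotoMeta:
    # Keep the wall-clock time exactly as captured; never normalize to UTC,
    # so a New Year's Eve shot stays at "23:58" even if the location is unknown.
    local_datetime: str              # e.g. "2024-12-31T23:58:07"
    location: Optional[str] = None   # e.g. a place name or lat/long, if available

photo = PhotoMeta(local_datetime="2024-12-31T23:58:07")
print(photo)
```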
I never really took time seriously until one of my cron jobs skipped execution because of daylight saving. That was the moment I realized how tricky time actually is.
This article explains it really well. The part about leap seconds especially got me. We literally have to smear time to keep servers from crashing. That’s kind of insane.
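The cron skip comes from the spring-forward gap: some wall-clock times simply never occur. A quick way to check for them, assuming the standard library zoneinfo is available (the round-trip trick, not any cron internals):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def exists_on_wall_clock(naive: datetime, tz: ZoneInfo) -> bool:
    # Round-trip through UTC; local times skipped by a DST jump don't survive it.
    aware = naive.replace(tzinfo=tz)
    return aware.astimezone(timezone.utc).astimezone(tz).replace(tzinfo=None) == naive

ny = ZoneInfo("America/New_York")
print(exists_on_wall_clock(datetime(2025, 3, 9, 2, 30), ny))  # False: 02:30 never happens
print(exists_on_wall_clock(datetime(2025, 3, 9, 3, 30), ny))  # True
```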
Very nice write up! But I think your point that time doesn't need to be a mess is refuted by all the points you made.
I know you had to limit the length of the post, but time is an interest of mine, so here are a couple more points you may find interesting:
UTC is not an acronym. The story I heard was that the English acronym would be "CUT" (the name is "coordinated universal time") and the French complained, the French acronym would be "TUC" and the English-speaking committee members complained, so they settled for something that wasn't pronounceable in either. (FYI, "ISO" isn't an acronym either!)
Leap seconds caused such havoc (especially in data centers) that no further leap seconds will be used. (What will happen in the future is anyone's guess.) But for now, you can rest easy and ignore them.
I have a short list of time (and NTP) related links at <https://wpollock.com/Cts2322.htm#NTP>.
> other epochs work too (e.g. Apollo_Time in Jai uses the Apollo 11 rocket landing at July 20, 1969 20:17:40 UTC).
I see someone else is a Vernor Vinge fan.
But it's kind of a wild choice for an epoch, when you're very likely to be interfacing with systems whose Epoch starts approximately five months later.
That's kind of the point of software archeology, isn't it? Sometimes something that was self-evident to people in the first few hundred years becomes opaque later on, and what's 5 months anyway? You'd need a Rosetta stone to be sure you were even off in time; otherwise you just might have a few missing months that historians couldn't account for.
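Re-basing between the two epochs is just a fixed offset of about five months. A small sketch of the arithmetic (my own, not Jai's actual Apollo_Time implementation):

```python
from datetime import datetime, timezone

APOLLO_EPOCH = datetime(1969, 7, 20, 20, 17, 40, tzinfo=timezone.utc)
UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
OFFSET = (UNIX_EPOCH - APOLLO_EPOCH).total_seconds()  # ~14.18 million seconds

def unix_to_apollo(unix_seconds: float) -> float:
    # Re-basing an epoch is just adding a constant.
    return unix_seconds + OFFSET

print(unix_to_apollo(0))  # 14182940.0: the Unix epoch lands ~164 days into "Apollo time"
```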
The absl library has a great write up of time programming: https://abseil.io/docs/cpp/guides/time
It’s quite different from how I think about time, as a programmer. I treat human time and timezones as approximate. Fortunately I’ve been spared from working on calendar/scheduling for humans, which sounds awful for all the reasons mentioned.
Instead I mostly use time for durations and for happens-before relationships. I still use Unix flavor timestamps, but if I can I ensure monotonicity (in case of backward jumps) and never trust timestamps from untrusted sources (usually: another node on the network). It often makes more sense to record the time a message was received than trusting the sender.
That said, I am fortunate to not have to deal with complicated happens-before relationships in distributed computing. I recall reading the Spanner paper for the first time and being amazed how they handled time windows.
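One simple way to enforce the monotonicity mentioned above; a sketch rather than production code, and the class name is made up:

```python
import time

class MonotonicStamper:
    """Wall-clock timestamps that never go backwards, even if the system clock does."""

    def __init__(self) -> None:
        self._last = float("-inf")

    def now(self) -> float:
        t = time.time()
        if t <= self._last:
            # The clock stepped backwards (NTP correction, manual change):
            # nudge forward instead of emitting an out-of-order timestamp.
            t = self._last + 1e-6
        self._last = t
        return t

stamper = MonotonicStamper()
print(stamper.now() <= stamper.now())  # True, by construction
```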
We don’t have much trouble yet with relativistic temporal distortions, but Earth’s motion causes us to lose about 0.152 seconds per year relative to the Solar system. Likewise we lose about 8.5 seconds per year relative to the Milky Way. I wonder when we’re going to start to care. Presumably there would be consideration of such issues while dealing with interplanetary spacecraft, timing burns and such.
GPS satellite clock rates are adjusted to account for the combined relativistic effects of moving fast and sitting higher in Earth's gravity well. Without this, they would gain around 38 microseconds per day relative to earthbound clocks (roughly 45 µs/day fast from weaker gravity, minus 7 µs/day slow from orbital speed), which would accumulate around 11 km of positioning error per day.
https://www.gpsworld.com/inside-the-box-gps-and-relativity/
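The arithmetic behind the ~11 km/day figure, as a back-of-the-envelope check (my own, not from the linked article):

```python
C = 299_792_458        # speed of light, m/s
drift_per_day = 38e-6  # net uncorrected clock gain, seconds per day
print(drift_per_day * C / 1000)  # ~11.4 km of pseudorange error per day
```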
Earth time <> Sol time <> SagA* time
Nice post. I think about time... all the time haha. There's another source you might enjoy (Re: your NTP and synchronization question) from TigerBeetle: [Implementing Time](https://www.youtube.com/watch?v=QtNmGqWe73g)
> What explains the slowdown in IANA timezone database updates?
My guess is that, with our increasing dependency on digital systems, the edge cases where these rules aren't properly updated cause increasing amounts of pain "for no good reason", so governments have become more reluctant to change the rules.
In Brazil we changed our DST rules recently, around 2017/2018. It caused a lot of confusion. I was working with a system where these changes were really important, so I was aware of the change ahead of time. But there are a lot of systems running without much human intervention, and they are mostly forgotten until someone notices a problem.
I’m all about monotonic time everywhere after having seen too many badly configured time sync settings. :)
It is a pet peeve of mine, but any statement that implies that Unix time is a count of elapsed seconds since the epoch is annoyingly misleading and perpetuates that misconception. Imho a better mental model for Unix time is that it has two parts, days since the epoch * 86400 and seconds since midnight, which get added together.
How is it misleading? The source code of UNIX literally has time as a variable of seconds that increments every second.
leap seconds
Also, UTC had a different clock rate than TAI prior to 1972. And TAI itself had its reference altitude adjusted to sea level in 1977.
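The two-part mental model, and why leap seconds break the "count of elapsed seconds" reading, in a few lines of Python (a sketch of the POSIX definition, not library code):

```python
from datetime import date

def posix_timestamp(y, m, d, hh, mm, ss):
    # POSIX "seconds since the Epoch": days since 1970-01-01 times 86400,
    # plus seconds since midnight UTC. Leap seconds are simply not counted.
    days = date(y, m, d).toordinal() - date(1970, 1, 1).toordinal()
    return days * 86400 + hh * 3600 + mm * 60 + ss

# The leap second 2016-12-31T23:59:60Z collides with the following second:
print(posix_timestamp(2016, 12, 31, 23, 59, 60))  # 1483228800
print(posix_timestamp(2017, 1, 1, 0, 0, 0))       # 1483228800
```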
Glad OP discussed the daylight saving nightmare.
But I hate how, when I stack my yearly weather charts, every four years either the graph is off by one day, so it is 1/366th narrower and the month delimiters don't line up perfectly, or I have to duplicate Feb 28th so there is no discontinuity in the lines. Still not sure how to represent that, but it sure bugs me.
I think this is one of my favourite write ups on HN for a while. I miss seeing more things like this.
Me too
... humans don't generally say
"Wanna grab lunch at 1,748,718,000 seconds from the Unix epoch?"
I'm totally going to start doing that now.
Obligatory falsehoods programmers believe about time:
https://gist.github.com/timvisee/fcda9bbdff88d45cc9061606b4b...
In a nutshell if you believe anything about time, you're wrong, there is always an exception, and an exception to the exception. And then Doc Brown runs you over with the Delorean.
Marty!! We have to go back...
to string representations!