Unix timestamps and why the Year 2038 problem still matters
The Unix timestamp (Epoch seconds) is the lingua franca of time across programming languages, databases, and APIs. It is mechanically simple, but the Year 2038 problem is still with us, and it is now little more than a decade away.
The definition: seconds since 1970-01-01 00:00:00 UTC
A Unix timestamp is:
The number of seconds elapsed since 1970-01-01 00:00:00 UTC. Examples:
| Date/time (UTC) | Unix timestamp |
|---|---|
| 1970-01-01 00:00:00 | 0 |
| 1970-01-01 00:00:01 | 1 |
| 2000-01-01 00:00:00 | 946,684,800 |
| 2024-01-01 00:00:00 | 1,704,067,200 |
| 2038-01-19 03:14:07 | 2,147,483,647 (max value of int32) |
Leap seconds are ignored. Days are always 86,400 seconds. This is a deliberate simplification that diverges slightly from astronomical time but makes time arithmetic across systems tractable.
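Because every day is exactly 86,400 seconds, conversions are pure integer arithmetic. A quick sketch in JavaScript (helper names are mine):

```javascript
// A Unix day is always 86,400 seconds; no leap seconds to account for.
const SECONDS_PER_DAY = 86400;

// Whole days elapsed since the epoch for a given Unix timestamp.
function daysSinceEpoch(unixSeconds) {
  return Math.floor(unixSeconds / SECONDS_PER_DAY);
}

// Cross-check against a real date library: Date works in milliseconds.
function unixToIso(unixSeconds) {
  return new Date(unixSeconds * 1000).toISOString();
}

unixToIso(946684800);      // "2000-01-01T00:00:00.000Z"
daysSinceEpoch(946684800); // 10957 whole days from 1970 to 2000
```

The exact division (946,684,800 / 86,400 = 10,957 with no remainder) is only possible because leap seconds are ignored.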
Why January 1, 1970?
“Because Unix was being built.” Nothing more profound. Unix development started in 1969, and the implementers picked the most recent round date as the epoch (1970-01-01).
The date has no deeper historical significance, and other systems chose other epochs. Conversion between any two is a constant offset:
- Old Mac OS — January 1, 1904
- VMS — November 17, 1858
- NTP — January 1, 1900
- Windows FILETIME — January 1, 1601 (in 100-nanosecond units)
- GPS time — January 6, 1980
What the Year 2038 problem actually is
When a Unix timestamp is stored in a 32-bit signed integer (the historical time_t), the maximum value is 2^31 - 1 = 2,147,483,647.
That value is reached at 2038-01-19 03:14:07 UTC. One second later, the counter overflows to a negative number in 32-bit signed arithmetic:

2147483647 + 1 = -2147483648 (32-bit signed)

Negative timestamps represent dates before 1970, so the system jumps back to December 13, 1901. That’s the Y2038 problem.
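The wraparound can be simulated in JavaScript, where bitwise operators coerce numbers to 32-bit signed integers (a sketch of the arithmetic only; in C, signed overflow of a 32-bit time_t is undefined behavior rather than guaranteed wraparound):

```javascript
const MAX_INT32 = 2147483647; // 2^31 - 1, the last representable second

// `| 0` truncates to a 32-bit signed integer, reproducing the wraparound.
const afterOverflow = (MAX_INT32 + 1) | 0; // -2147483648

// A negative timestamp lands before 1970:
new Date(afterOverflow * 1000).toISOString(); // "1901-12-13T20:45:52.000Z"
```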
Where it still lurks
“PCs are 64-bit, this is solved” is half-true. The remaining exposure is meaningful.
1. Embedded systems
Routers, industrial equipment, medical devices, automotive ECUs, IoT — many still ship with 32-bit CPUs and 32-bit time_t. The combination is hard to retire because:
- Long lifecycles (10–20 years).
- Updates are difficult for fielded devices.
- Re-certification is expensive in regulated industries.
New designs that pick a 32-bit MCU plus a cost-optimized C library can still inherit 32-bit time_t today.
2. Legacy databases
A timestamp stored as int4 (32-bit) breaks at 2038 unless migrated:

```sql
-- ❌ breaks at 2038
CREATE TABLE events (
    id serial PRIMARY KEY,
    ts integer NOT NULL  -- Unix timestamp as int32
);

-- ✅ 64-bit, no problem
CREATE TABLE events (
    id serial PRIMARY KEY,
    ts bigint NOT NULL
);

-- ✅ better: use a real timestamp type
CREATE TABLE events (
    id serial PRIMARY KEY,
    ts timestamptz NOT NULL
);
```

PostgreSQL’s timestamptz is 64-bit internally and is fine. MySQL’s TIMESTAMP type is 4 bytes, so MySQL has its own Y2038 problem. Use DATETIME or store Unix seconds as bigint explicitly.
3. Old filesystems and archive formats
Older ext3/ext4 versions, ZIP, and old tar formats store file timestamps as 32-bit. Restoring a backup post-2038 can show all files dated 1901.
ext4 has supported 64-bit timestamps since 2014, and tar’s pax extensions support 64-bit values. But not everything uses the new formats yet.
4. Network time protocols
NTP encodes time as 32-bit seconds + 32-bit fraction with a 1900 epoch, hitting its rollover in 2036. NTPv4 defines era numbers to handle the rollover, but support is incomplete.
Mitigations for new code
- Use 64-bit time_t: on 32-bit Linux, glibc 2.34+ supports building with _TIME_BITS=64.
- Use bigint or a native timestamp type in the database.
- Return ISO 8601 strings in API responses: 2026-04-26T12:00:00Z is bit-width-agnostic.
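A minimal sketch of the ISO 8601 approach in JavaScript (function names are mine):

```javascript
// Serialize: always emit UTC ISO 8601 strings, never raw integer seconds.
function toApiTimestamp(date) {
  return date.toISOString();
}

// Parse: the Date constructor accepts ISO 8601 directly.
function fromApiTimestamp(s) {
  return new Date(s);
}

toApiTimestamp(new Date(Date.UTC(2026, 3, 26, 12))); // "2026-04-26T12:00:00.000Z"
```

The string representation survives any change in integer width on either side of the API, and it is unambiguous about time zone.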
For audits of existing systems:
- Check column types (any int or int4 columns storing time?).
- Check archive formats (legacy tar, old ZIP).
- Check 32-bit services and scripts.
Milliseconds: the other tripwire
JavaScript’s Date.now() returns milliseconds. Mixing seconds and milliseconds is a classic 1000x bug:
```javascript
const seconds = Math.floor(Date.now() / 1000); // seconds
const ms = seconds * 1000;                     // back to milliseconds
```

JWT’s exp/iat are seconds; the Date API is milliseconds. This implicit convention catches many implementations.
For 64-bit milliseconds, the upper bound is roughly the year 290,000,000. Not a concern.
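A common place the mix-up bites is JWT expiry checking. A sketch (payload shape follows RFC 7519, where exp is in seconds):

```javascript
// exp is in SECONDS (RFC 7519); Date.now() is in MILLISECONDS.
// Convert before comparing.
function isExpired(payload) {
  return payload.exp * 1000 <= Date.now();
}

// The buggy version `payload.exp <= Date.now()` compares seconds to
// milliseconds, so every token looks expired.
```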
Converting between epochs
Conversion is just adding/subtracting an offset:
NTP → Unix: ntp_seconds - 2208988800
(seconds from 1900-01-01 to 1970-01-01)

Windows FILETIME → Unix: (filetime - 116444736000000000) / 10000000
(100-ns units from 1601-01-01 to 1970-01-01)

JavaScript Date → Unix: Math.floor(date.getTime() / 1000)

Don’t memorize the magic numbers — use a converter when you need them.
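These can be sketched as JavaScript helpers (function names are mine; the FILETIME helper takes a BigInt because raw FILETIME values exceed Number.MAX_SAFE_INTEGER):

```javascript
// NTP (seconds since 1900-01-01) -> Unix seconds
function ntpToUnix(ntpSeconds) {
  return ntpSeconds - 2208988800;
}

// Windows FILETIME (100-ns ticks since 1601-01-01) -> Unix seconds
function filetimeToUnix(filetime) {
  return Number((filetime - 116444736000000000n) / 10000000n);
}

ntpToUnix(2208988800);               // 0, i.e. 1970-01-01
filetimeToUnix(116444736000000000n); // 0, i.e. 1970-01-01
```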
Summary
- Unix timestamp = seconds since 1970-01-01 UTC.
- 32-bit time_t overflows on January 19, 2038.
- Real exposure remains in embedded systems, old databases, archive formats, and NTP.
- New code: use 64-bit, prefer ISO 8601 strings.
For converting between Unix seconds and human-readable dates, the Epoch batch tool on this site handles both single values and bulk lists. Useful when you’ve grabbed a column of timestamps from a log and want them in local time.