Unix Timestamps Explained: The Complete Developer Guide

By Suvom Das · March 12, 2026 · 18 min read

1. What Is a Unix Timestamp?

A Unix timestamp -- also called epoch time, POSIX time, or simply "seconds since epoch" -- is a way of representing a specific moment in time as a single number. It counts the number of seconds that have elapsed since a fixed reference point: January 1, 1970 at 00:00:00 Coordinated Universal Time (UTC).

For example, the Unix timestamp 1710288000 represents March 13, 2024 at 00:00:00 UTC. The timestamp 0 represents the very start of January 1, 1970 UTC. Negative timestamps represent dates before the epoch -- -86400 is December 31, 1969 at 00:00:00 UTC.

Unix timestamps are used everywhere in software development: in database records, API responses, log files, authentication tokens, file systems, and caching headers. They are compact (a single integer), unambiguous (no timezone confusion), and easy to compare and sort. If you need to know which of two events happened first, comparing two integers is far simpler than parsing and comparing two date strings.

The beauty of Unix timestamps lies in their universality. The timestamp 1710288000 means the same thing in New York, Tokyo, London, and Sydney. It always represents the exact same instant, regardless of the observer's timezone. Timezone considerations only matter when you convert the timestamp into a human-readable date string for display purposes.

2. History: The Unix Epoch

The concept of an epoch -- a fixed starting point from which time is measured -- is not unique to Unix. Astronomers use the Julian Date system (starting from January 1, 4713 BC). The Windows FILETIME epoch starts on January 1, 1601. The macOS Classic epoch began on January 1, 1904. But the Unix epoch, January 1, 1970, has become the most widely used reference point in modern computing.

The choice of January 1, 1970 was pragmatic rather than symbolic. When Ken Thompson and Dennis Ritchie were developing the early Unix operating system at Bell Labs in the late 1960s and early 1970s, they needed a time representation that was efficient for the hardware of the era. The original Unix time was stored as a 32-bit value counting sixtieths of a second, with the epoch initially set to January 1, 1971. At that resolution a 32-bit counter overflows in roughly two and a half years, so the epoch was soon moved to the round figure of January 1, 1970 and the resolution changed to whole seconds, greatly extending the range of representable dates.

The decision to count seconds rather than days, hours, or some other unit struck a practical balance. Seconds provide enough precision for most system operations while keeping the numbers manageable. A 32-bit signed integer counting seconds from 1970 can represent dates up to January 2038 -- a range of about 68 years in each direction from the epoch, which seemed more than sufficient at the time.

As Unix gained adoption through the 1970s and 1980s, its timestamp format spread to virtually every operating system and programming language. Today, the Unix epoch is the de facto standard for machine-readable time representation, codified in the POSIX standard and supported by every major programming language, database, and operating system.

3. How Timestamps Work

At its core, a Unix timestamp is simple arithmetic. Each day has 86,400 seconds (24 hours times 60 minutes times 60 seconds). To convert a date to a timestamp, you count the total seconds from January 1, 1970 00:00:00 UTC to that date. To convert a timestamp back to a date, you divide by 86,400 to get the number of days, then compute the year, month, day, hour, minute, and second from there.
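The decomposition step is easy to sketch in Python with divmod:

```python
# Decompose a Unix timestamp into whole days since the epoch plus
# the time of day within that final day.
ts = 1710288000                       # March 13, 2024 00:00:00 UTC

days, rem = divmod(ts, 86400)         # whole days since Jan 1, 1970
hours, rem = divmod(rem, 3600)
minutes, seconds = divmod(rem, 60)

print(days, hours, minutes, seconds)  # 19795 0 0 0
```

The year, month, and day then follow from counting leap years across those 19,795 days, which is exactly the bookkeeping that library functions handle for you.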

Here are some landmark timestamps to build intuition:

Timestamp          Date (UTC)
---------------------------------------------------------
0                  January 1, 1970 00:00:00 (the epoch)
86400              January 2, 1970 00:00:00 (1 day)
946684800          January 1, 2000 00:00:00 (Y2K)
1000000000         September 9, 2001 01:46:40 (billennium)
1234567890         February 13, 2009 23:31:30
1700000000         November 14, 2023 22:13:20
2000000000         May 18, 2033 03:33:20
2147483647         January 19, 2038 03:14:07 (32-bit max)

One important nuance: Unix time does not count leap seconds. The POSIX standard defines a day as exactly 86,400 seconds, even though actual UTC days occasionally have 86,401 seconds due to leap second insertions. When a leap second occurs, Unix time effectively "repeats" one second. This means Unix timestamps are not strictly equal to the number of SI seconds since the epoch, but the simplification makes timestamp arithmetic much more straightforward. For the vast majority of applications, this distinction is irrelevant.

Converting between timestamps and dates involves accounting for leap years (every 4 years, except centuries, except every 400 years) and varying month lengths. Fortunately, every programming language provides built-in functions for these conversions, so you rarely need to do this math manually.

4. Seconds vs Milliseconds

One of the most common sources of confusion when working with timestamps is the difference between seconds-based and milliseconds-based timestamps. Different languages, APIs, and systems use different precisions, and mixing them up produces wildly incorrect dates.

A seconds-based timestamp is typically 10 digits long (as of 2026). For example: 1710288000. A milliseconds-based timestamp is 13 digits long: 1710288000000. The millisecond version is simply the seconds version multiplied by 1,000, with additional precision for sub-second timing.

Which Languages Use Which?

# Seconds (10 digits)
Python:     time.time()              # 1710288000.123456 (float)
Go:         time.Now().Unix()        # 1710288000
Bash:       date +%s                 # 1710288000
PHP:        time()                   # 1710288000
Ruby:       Time.now.to_i            # 1710288000

# Milliseconds (13 digits)
JavaScript: Date.now()               # 1710288000123
Java:       System.currentTimeMillis() # 1710288000123
Dart:       DateTime.now().millisecondsSinceEpoch  # 1710288000123

# Microseconds (16 digits)
Python:     time.time_ns() // 1000   # 1710288000123456
Go:         time.Now().UnixMicro()   # 1710288000123456

# Nanoseconds (19 digits)
Go:         time.Now().UnixNano()    # 1710288000123456789
Python:     time.time_ns()           # 1710288000123456789

A quick way to tell them apart: if the number is around 1.7 billion (as of 2026), it is seconds. If it is around 1.7 trillion, it is milliseconds. If it is even larger, it is microseconds or nanoseconds.

Accidentally treating a millisecond timestamp as seconds (or vice versa) produces absurd results. Interpreting 1710288000000 (milliseconds for March 2024) as seconds gives a date around the year 56,000 -- more than fifty millennia in the future. Interpreting 1710288000 (seconds) as milliseconds gives January 20, 1970 -- just 20 days after the epoch.

Tip: When receiving timestamps from an external source, always check the documentation to confirm whether the values are in seconds, milliseconds, or another unit. If documentation is unavailable, the digit count is your best clue.
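The digit-count heuristic can be turned into a small helper. This is a sketch (the function name and thresholds are our own, and the guess only holds for timestamps in the current era):

```python
def normalize_to_seconds(ts: int) -> float:
    """Guess a timestamp's unit from its digit count and convert to
    seconds. Heuristic only: valid for present-era timestamps."""
    digits = len(str(abs(int(ts))))
    if digits <= 10:
        return float(ts)              # seconds
    if digits <= 13:
        return ts / 1_000             # milliseconds
    if digits <= 16:
        return ts / 1_000_000         # microseconds
    return ts / 1_000_000_000         # nanoseconds

print(normalize_to_seconds(1710288000123))  # 1710288000.123
```

Documented units always beat guessing, but a helper like this is useful as a sanity check on incoming data.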

5. The Year 2038 Problem

The Year 2038 problem, sometimes called the "Unix Millennium Bug" or Y2K38, is the most significant limitation of the original Unix timestamp format. It arises from the use of a 32-bit signed integer to store the timestamp.

A 32-bit signed integer can hold values from -2,147,483,648 to 2,147,483,647. The maximum positive value, 2,147,483,647, corresponds to January 19, 2038 at 03:14:07 UTC. One second later, the value overflows: the bit pattern wraps around to -2,147,483,648, which the system interprets as December 13, 1901 at 20:45:52 UTC. Time, from the perspective of the affected system, jumps backward by 136 years.

# The critical moment
2147483647  =  January 19, 2038  03:14:07 UTC  (max 32-bit signed)
2147483648  =  overflow! wraps to -2147483648
-2147483648 =  December 13, 1901  20:45:52 UTC

This is not a theoretical concern. Any system that stores Unix timestamps in 32-bit integers will malfunction when those timestamps exceed this limit. This includes embedded systems, IoT devices, older databases, legacy file formats, and network protocols. Systems that process future dates -- such as mortgage calculators, insurance software, certificate expiration checkers, and long-term scheduling tools -- may already be affected today if they need to represent dates beyond 2038.
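The wraparound is easy to demonstrate by forcing the value into a 32-bit signed integer, for example with Python's ctypes:

```python
import ctypes
from datetime import datetime, timezone

# One second past the 32-bit signed maximum, truncated to 32 bits.
wrapped = ctypes.c_int32(2147483647 + 1).value
print(wrapped)  # -2147483648

# Decoding the wrapped value lands back in 1901 (negative timestamps
# are handled fine by aware datetime conversions).
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```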

The Solution: 64-bit Timestamps

The fix is straightforward: use 64-bit integers instead of 32-bit ones. A 64-bit signed integer can represent timestamps up to approximately 292 billion years in the future -- far beyond any practical need. Most modern operating systems, programming languages, and databases have already made this transition: Linux uses a 64-bit time_t on 64-bit architectures (and supports it on 32-bit ones since kernel 5.6), and languages such as Go, Java, and Rust represent time with 64-bit values.

The remaining risk lies in embedded systems, legacy code, binary file formats with fixed 32-bit timestamp fields, and network protocols that have not been updated. If you are building new software, always use 64-bit timestamps. If you are maintaining older systems, audit your timestamp storage to ensure you are not using 32-bit types.

6. Timestamps and Timezones

One of the greatest advantages of Unix timestamps is that they are inherently timezone-independent. A timestamp is defined as seconds since the epoch in UTC. The number 1710288000 always means the same instant, everywhere on Earth. Timezones only enter the picture when you need to display that instant as a human-readable date and time.

Consider the timestamp 1710288000, which is March 13, 2024 at 00:00:00 UTC. The same instant is displayed differently depending on the timezone:

Timestamp: 1710288000

UTC:              March 13, 2024  00:00:00
US Eastern (EDT): March 12, 2024  20:00:00  (UTC-4)
US Pacific (PDT): March 12, 2024  17:00:00  (UTC-7)
India (IST):      March 13, 2024  05:30:00  (UTC+5:30)
Japan (JST):      March 13, 2024  09:00:00  (UTC+9)
Australia (AEDT): March 13, 2024  11:00:00  (UTC+11)

This is why timestamps are ideal for storage and transmission: they are unambiguous. You only convert to local time at the "edges" of your system -- in the user interface, in reports, and in notifications directed at humans.

Daylight Saving Time

Daylight saving time (DST) is one of the most treacherous aspects of time handling. When clocks "spring forward," one hour is skipped -- 2:00 AM jumps to 3:00 AM. When clocks "fall back," one hour is repeated -- 1:00 AM through 1:59 AM occurs twice. Unix timestamps are immune to this confusion because they are always in UTC, which does not observe DST.

However, problems arise when converting timestamps to local times during DST transitions. If your application schedules an event for "2:30 AM Eastern Time" on a spring-forward night, that time does not exist. If it schedules for "1:30 AM Eastern Time" on a fall-back night, that time occurs twice. Always store times in UTC and convert to local time only for display. Use the IANA timezone database (e.g., "America/New_York") rather than fixed offsets (e.g., "EST" or "UTC-5") to handle DST transitions correctly.
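A small example using Python's standard zoneinfo module shows why "one day later" and "86,400 seconds later" diverge across a DST boundary (March 10, 2024 was the US spring-forward date):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")

# Noon Eastern the day before the 2024 spring-forward transition.
before = datetime(2024, 3, 9, 12, 0, tzinfo=ny)

# 86,400 seconds later is 1:00 PM local time, not noon: the wall
# clock skipped an hour overnight.
after = datetime.fromtimestamp(before.timestamp() + 86400, tz=ny)
print(after)  # 2024-03-10 13:00:00-04:00
```

Because the IANA zone name is used rather than a fixed offset, the library applies the correct offset on each side of the transition automatically.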

7. Converting Timestamps in Code

Every major programming language provides built-in functions for working with Unix timestamps. Here are the most common conversions in JavaScript, Python, Go, and SQL.

JavaScript

// Get current timestamp (seconds)
const now = Math.floor(Date.now() / 1000);
// 1710288000

// Timestamp to Date object
const date = new Date(1710288000 * 1000); // JS uses milliseconds
// Wed Mar 13 2024 00:00:00 GMT+0000

// Date to timestamp
const ts = Math.floor(new Date('2024-03-13T00:00:00Z').getTime() / 1000);
// 1710288000

// Format as ISO 8601
const iso = new Date(1710288000 * 1000).toISOString();
// "2024-03-13T00:00:00.000Z"

// Format in a specific timezone
const local = new Date(1710288000 * 1000).toLocaleString('en-US', {
  timeZone: 'America/New_York'
});
// "3/12/2024, 7:00:00 PM"

Python

import time
from datetime import datetime, timezone

# Get current timestamp (seconds)
now = int(time.time())
# 1710288000

# Timestamp to datetime (UTC)
dt = datetime.fromtimestamp(1710288000, tz=timezone.utc)
# datetime(2024, 3, 13, 0, 0, tzinfo=timezone.utc)

# Datetime to timestamp
ts = int(datetime(2024, 3, 13, tzinfo=timezone.utc).timestamp())
# 1710288000

# Format as ISO 8601
iso = dt.isoformat()
# "2024-03-13T00:00:00+00:00"

# Parse ISO 8601 string to timestamp
parsed = datetime.fromisoformat("2024-03-13T00:00:00+00:00")
ts = int(parsed.timestamp())
# 1710288000

Go

package main

import (
    "fmt"
    "time"
)

func main() {
    // Get current timestamp (seconds)
    now := time.Now().Unix()
    // 1710288000

    // Timestamp to Time
    t := time.Unix(1710288000, 0).UTC()
    // 2024-03-13 00:00:00 +0000 UTC

    // Time to timestamp
    ts := time.Date(2024, 3, 13, 0, 0, 0, 0, time.UTC).Unix()
    // 1710288000

    // Format as ISO 8601 / RFC 3339
    iso := t.Format(time.RFC3339)
    // "2024-03-13T00:00:00Z"

    // Parse RFC 3339 string
    parsed, _ := time.Parse(time.RFC3339, "2024-03-13T00:00:00Z")
    fmt.Println(parsed.Unix())
    // 1710288000
}

SQL

-- PostgreSQL: current timestamp
SELECT EXTRACT(EPOCH FROM NOW())::bigint;

-- PostgreSQL: timestamp to date
SELECT TO_TIMESTAMP(1710288000);
-- 2024-03-13 00:00:00+00

-- PostgreSQL: date to timestamp
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '2024-03-13 00:00:00 UTC')::bigint;
-- 1710288000

-- MySQL: current timestamp
SELECT UNIX_TIMESTAMP();

-- MySQL: timestamp to date
SELECT FROM_UNIXTIME(1710288000);
-- 2024-03-13 00:00:00

-- MySQL: date to timestamp (string is interpreted in the session time zone)
SELECT UNIX_TIMESTAMP('2024-03-13 00:00:00');
-- 1710288000 when the session time zone is UTC

8. ISO 8601 and RFC 3339 Date Formats

While Unix timestamps are ideal for machine processing, they are not human-readable. When dates need to be displayed, logged, or included in APIs for human consumption, standardized string formats are preferred. The two most important standards are ISO 8601 and RFC 3339.

ISO 8601

ISO 8601 is an international standard for date and time representation. It defines a comprehensive set of formats, but the most commonly used form looks like this:

2024-03-13T14:30:00Z          (UTC)
2024-03-13T09:30:00-05:00     (US Eastern, UTC-5)
2024-03-13T23:00:00+08:00     (China Standard Time, UTC+8)
2024-03-13                    (date only)
14:30:00                      (time only)
2024-W11-3                    (week date: year, week 11, Wednesday)

The T separates the date and time portions. The Z suffix indicates UTC (from "Zulu" in the NATO phonetic alphabet). A numeric offset like -05:00 indicates the timezone offset from UTC.
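Offset forms that denote the same instant compare as equal once parsed, as a quick Python example shows:

```python
from datetime import datetime

# The same instant written with two different UTC offsets.
utc = datetime.fromisoformat("2024-03-13T14:30:00+00:00")
est = datetime.fromisoformat("2024-03-13T09:30:00-05:00")

print(utc == est)            # True: equal as instants
print(int(utc.timestamp()))  # 1710340200
```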

RFC 3339

RFC 3339 is a profile (subset) of ISO 8601 specifically designed for use in internet protocols. It is stricter than ISO 8601 -- requiring four-digit years, two-digit months and days, the T separator, and a timezone offset or Z. In practice, most developers use ISO 8601 and RFC 3339 interchangeably, since the most common ISO 8601 format is also valid RFC 3339:

# Valid RFC 3339 (and ISO 8601)
2024-03-13T14:30:00Z
2024-03-13T14:30:00.123Z
2024-03-13T09:30:00-05:00

# Valid ISO 8601 but NOT valid RFC 3339
2024-W11-3              (week dates)
20240313T143000Z        (basic format without hyphens)
2024-073                (ordinal dates)

When to Use Which Format

In general, use Unix timestamps for internal storage, data transfer between services, and any context where comparison and arithmetic are important. Use ISO 8601 / RFC 3339 strings for API responses intended for external consumers, log messages, configuration files, and user-facing displays. Many modern APIs accept both formats and let clients choose their preference.

9. Timestamps in Databases

Databases provide various data types and functions for working with timestamps. Choosing the right type affects storage efficiency, query performance, and timezone handling.

PostgreSQL

PostgreSQL offers two primary timestamp types: TIMESTAMP (without time zone) and TIMESTAMPTZ (with time zone). TIMESTAMPTZ normalizes input to UTC on storage, making it the safer default for recording instants:

-- Create table with timestamptz
CREATE TABLE events (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    event_time TIMESTAMPTZ NOT NULL
);

-- Insert with explicit timezone
INSERT INTO events (name, event_time)
VALUES ('deploy', '2024-03-13T14:00:00-05:00');

-- Query: convert to specific timezone
SELECT name, event_time AT TIME ZONE 'America/New_York' AS local_time
FROM events;

-- Query: extract epoch (Unix timestamp)
SELECT name, EXTRACT(EPOCH FROM event_time)::bigint AS unix_ts
FROM events;

MySQL

MySQL provides DATETIME and TIMESTAMP types. The TIMESTAMP type is subject to the Year 2038 problem due to its 32-bit internal storage. For new schemas, consider using DATETIME with explicit UTC handling, or store Unix timestamps as BIGINT columns:

-- Create table
CREATE TABLE events (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    event_time_unix BIGINT NOT NULL
);

-- Insert
INSERT INTO events (name, event_time_unix) VALUES ('deploy', 1710338400);

-- Convert stored timestamp to date
SELECT name, FROM_UNIXTIME(event_time_unix) AS event_date FROM events;

MongoDB

MongoDB stores dates using the BSON Date type, which internally uses a 64-bit integer representing milliseconds since the Unix epoch. This gives it a range far beyond the 2038 limit and millisecond precision out of the box.

// Insert with JavaScript Date
db.events.insertOne({
    name: "deploy",
    createdAt: new Date(),                        // current time
    eventTime: new Date(1710288000 * 1000)        // from Unix timestamp
});

// Query events after a specific timestamp
db.events.find({
    eventTime: { $gte: new Date(1710288000 * 1000) }
});

// Aggregation: extract Unix timestamp
db.events.aggregate([{
    $project: {
        name: 1,
        unixTimestamp: {
            $divide: [{ $toLong: "$eventTime" }, 1000]
        }
    }
}]);

10. Timestamps in APIs and Distributed Systems

Timestamps play a critical role in APIs and distributed systems, where they are used for event ordering, cache control, rate limiting, authentication, and data synchronization.

API Design Conventions

When designing APIs, you need to decide how to represent timestamps. The two dominant conventions are integer Unix timestamps (compact and trivially comparable) and ISO 8601 / RFC 3339 strings (human-readable and self-describing).

Some APIs support both, letting clients specify their preference via query parameters or headers. Whichever format you choose, be consistent across your entire API surface and document it clearly. Mixing formats within a single API is a common source of bugs for consumers.

Clock Synchronization

In distributed systems, different servers may have slightly different clock times. This "clock skew" can cause problems with timestamp-dependent logic like event ordering, distributed locking, and cache invalidation. The Network Time Protocol (NTP) keeps server clocks synchronized, typically to within a few milliseconds. For stricter requirements, services like AWS Time Sync Service and Google TrueTime provide sub-millisecond accuracy.

If your system requires strict ordering of events across multiple nodes, consider using logical clocks (Lamport clocks, vector clocks) or hybrid logical clocks instead of relying solely on wall-clock timestamps. These approaches provide causal ordering guarantees that physical timestamps cannot.
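As a sketch, a Lamport clock is just a counter that advances on every local event and fast-forwards past any timestamp it receives (a minimal illustration, not a production implementation):

```python
class LamportClock:
    """Counter providing causal ordering without wall-clock time."""

    def __init__(self) -> None:
        self.time = 0

    def tick(self) -> int:
        """Advance for a local event (including sending a message)."""
        self.time += 1
        return self.time

    def receive(self, sender_time: int) -> int:
        """On message arrival, jump past the sender's clock."""
        self.time = max(self.time, sender_time) + 1
        return self.time

clock = LamportClock()
clock.tick()              # 1
print(clock.receive(10))  # 11: jumps past the sender's timestamp
```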

JWT Token Expiration

JSON Web Tokens (JWTs) use Unix timestamps extensively. The iat (issued at), exp (expiration), and nbf (not before) claims are all specified as seconds-based Unix timestamps per RFC 7519:

{
  "sub": "user123",
  "name": "Alice",
  "iat": 1710288000,
  "exp": 1710374400,
  "nbf": 1710288000
}

When validating a JWT, the server compares the current Unix timestamp against the exp claim. If current_time > exp, the token is expired. This comparison is a simple integer operation, which is one reason the JWT specification chose Unix timestamps over date strings.
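A minimal expiration check is just that integer comparison plus a little leeway for clock skew. This is a sketch (the function name is our own; real services should rely on a vetted JWT library, which performs these checks as part of verification):

```python
import time

def is_expired(claims: dict, leeway: int = 30) -> bool:
    """True once the exp claim is more than `leeway` seconds past."""
    return time.time() > claims["exp"] + leeway

# Any clock set after March 2024 reports this token as expired.
print(is_expired({"exp": 1710374400}))
```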

HTTP Caching Headers

The HTTP protocol uses timestamps in several headers. The Expires header uses RFC 7231 date format (a derivative of RFC 822), while the Cache-Control: max-age directive uses seconds. The Last-Modified and If-Modified-Since headers also use HTTP date format:

Expires: Wed, 13 Mar 2024 00:00:00 GMT
Cache-Control: max-age=3600
Last-Modified: Tue, 12 Mar 2024 18:00:00 GMT

Rate Limiting

Rate limiting systems commonly use timestamps to track request windows. Headers like X-RateLimit-Reset often contain a Unix timestamp indicating when the rate limit window resets:

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 23
X-RateLimit-Reset: 1710291600

11. Best Practices for Handling Time

Time handling is one of the most error-prone areas of software development. These best practices, learned from decades of collective experience, will help you avoid the most common pitfalls.

Store and Transmit in UTC

This is the single most important rule. Always store timestamps in UTC, whether as Unix timestamps (which are UTC by definition) or as ISO 8601 strings with the Z suffix. Convert to local timezones only at the presentation layer, as close to the user as possible. This eliminates ambiguity, simplifies comparison and arithmetic, and avoids daylight saving time issues in your data layer.
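In Python, for example, that means preferring timezone-aware UTC datetimes over naive local ones when producing values for storage:

```python
from datetime import datetime, timezone

# Naive local time: ambiguous, depends on server configuration.
naive = datetime.now()

# Aware UTC time: unambiguous, safe to store or serialize.
aware = datetime.now(timezone.utc)

print(aware.tzinfo)            # UTC
print(int(aware.timestamp()))  # current Unix timestamp
```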

Use the Right Precision

Choose the timestamp precision that matches your use case. Seconds are sufficient for most application events, scheduling, and caching. Milliseconds are appropriate for performance measurement, user interaction tracking, and database operations. Microseconds or nanoseconds are needed for high-frequency trading, scientific measurement, and low-level system profiling. Using excessive precision wastes storage and bandwidth; using insufficient precision loses important information.

Use 64-bit Integer Storage

Never store Unix timestamps in 32-bit integers in new code. Use 64-bit integers (bigint in SQL, int64 in Go, long in Java) to avoid the Year 2038 problem. If you encounter 32-bit timestamp storage in existing code, plan a migration before it becomes urgent.

Use IANA Timezone Names

When you need to work with timezones, use IANA timezone identifiers like America/New_York, Europe/London, or Asia/Tokyo rather than abbreviations like EST, GMT, or JST. IANA names are unambiguous (CST could mean Central Standard Time, China Standard Time, or Cuba Standard Time) and automatically handle daylight saving time transitions, including historical rule changes.

Document Your Timestamp Format

In API documentation, database schemas, and configuration files, always document whether timestamps are in seconds or milliseconds, and whether date strings follow ISO 8601, RFC 3339, or some other format. Include example values. Never assume consumers will guess correctly.

Validate Timestamp Inputs

When accepting timestamps from external sources, validate them. Check that the value is within a reasonable range (not in the distant past or far future unless intentional). Detect whether the input is in seconds or milliseconds by checking the digit count. Reject obviously invalid values early with clear error messages.
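The checks above can be sketched as a small guard function (the name and bounds are illustrative assumptions; tune them per application):

```python
import time

def validate_seconds_timestamp(ts: int, max_future_days: int = 3650) -> int:
    """Accept a plausible seconds-precision timestamp or raise.

    Rejects pre-1970 values, values with too many digits to be
    seconds, and dates implausibly far in the future.
    """
    if ts < 0:
        raise ValueError("negative (pre-1970) timestamp not expected")
    if ts >= 10_000_000_000:  # 11+ digits: likely milliseconds or finer
        raise ValueError("value looks like milliseconds or finer precision")
    if ts > time.time() + max_future_days * 86400:
        raise ValueError("timestamp implausibly far in the future")
    return ts

print(validate_seconds_timestamp(1710288000))  # 1710288000
```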

Handle Clock Skew Gracefully

In distributed systems, allow for small amounts of clock skew in your comparisons. Instead of checking token.exp > now, consider adding a small tolerance: token.exp > now - 30. This prevents valid tokens from being rejected due to minor clock differences between servers.

Be Careful with Date Arithmetic

Adding "one month" to a date is ambiguous. Is one month 28, 29, 30, or 31 days? What about January 31 plus one month -- is that February 28 or March 3? Use well-tested date libraries (like Luxon for JavaScript, dateutil for Python, or the time package in Go) for date arithmetic rather than doing manual calculations with timestamps. Similarly, adding "one day" (86,400 seconds) can give unexpected results around DST transitions in local time.

Test with Edge Cases

Test your time-handling code with these known edge cases: the epoch itself (timestamp 0), negative timestamps (pre-1970 dates), the Year 2038 boundary, DST transition days, leap years (February 29), the end of December/start of January (year boundaries), and different timezone offsets including half-hour offsets like India (UTC+5:30) and Nepal (UTC+5:45).

12. Using Our Free Timestamp Converter

Working with Unix timestamps does not require memorizing conversion formulas or writing throwaway scripts. Our free Unix Timestamp Converter makes it easy to convert between timestamps and human-readable dates instantly.

Timestamp to Date

Paste any Unix timestamp -- whether in seconds, milliseconds, microseconds, or nanoseconds -- and the tool automatically detects the precision and displays the corresponding date and time in multiple formats: UTC, your local timezone, ISO 8601, and RFC 3339. You can also see the relative time (e.g., "3 days ago" or "in 2 hours").

Date to Timestamp

Select a date and time using the intuitive picker or type a date string, and get the corresponding Unix timestamp in seconds and milliseconds. Choose your timezone and see the UTC equivalent side by side.

Live Current Timestamp

See the current Unix timestamp updating in real time. Copy it to your clipboard with a single click for use in scripts, API testing, or debugging.

Whether you are debugging an API response, investigating a log entry, setting token expiration times, or verifying database records, our tool gives you the answer in seconds. No sign-up required, no data sent to any server -- everything runs locally in your browser.

Convert Unix Timestamps Instantly

Stop writing one-off scripts to decode timestamps. Use our free Unix Timestamp Converter to translate between epoch time and human-readable dates with automatic precision detection.

Try the Timestamp Converter Now

Related Articles

Understanding JSON Web Tokens: A Complete Guide

Learn how JWTs work, decode tokens, understand claims like iat and exp, and master token-based authentication.

JSON Formatting and Validation: A Complete Guide

Master JSON syntax, formatting, validation, and best practices for working with JSON data in modern applications.

Cron Expressions Explained: Complete Guide to Cron Syntax

Master the 5-field cron format, special characters, 20+ examples, and cron scheduling across Kubernetes, GitHub Actions, and AWS.