
Unix Timestamp Explained: A Developer’s Quick Reference

Every developer eventually encounters a number like 1774051200 in a database or API response and wonders what it means. This guide explains Unix timestamps from the ground up, shows you how to convert them in every major language, and covers the edge cases you need to know.

What Is a Unix Timestamp?

A Unix timestamp (also called epoch time or POSIX time) is the number of seconds that have elapsed since January 1, 1970, 00:00:00 UTC — a moment known as the Unix epoch.

For example, the timestamp 0 represents midnight on January 1, 1970. The timestamp 1000000000 (one billion) represents September 9, 2001, at 01:46:40 UTC. Right now, as you read this, the current timestamp is a number somewhere around 1.77 billion.
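Both milestones are easy to verify with a few lines of standard-library Python:

```python
from datetime import datetime, timezone

# Timestamp 0 is the epoch itself.
epoch = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch.isoformat())    # 1970-01-01T00:00:00+00:00

# One billion seconds later.
billion = datetime.fromtimestamp(1_000_000_000, tz=timezone.utc)
print(billion.isoformat())  # 2001-09-09T01:46:40+00:00
```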

Unix timestamps are used everywhere in computing: database records, API responses, log files, JWT tokens, cron jobs, and file system metadata. They provide a universal, timezone-independent way to represent a point in time as a single number.

Why Use Timestamps Instead of Date Strings?

Timestamps solve several problems that date strings create:

  • No timezone ambiguity. The string “March 19, 2026 3:00 PM” means different moments depending on whether you are in New York, London, or Tokyo. A timestamp is always UTC.
  • Easy arithmetic. Want to know the time 24 hours from now? Add 86400 (the number of seconds in a day). Try that with date strings and you are dealing with month boundaries, leap years, and daylight saving transitions.
  • Compact storage. A 32-bit integer takes 4 bytes. The ISO 8601 string “2026-03-19T15:00:00Z” takes 20 bytes. At scale, this matters.
  • Language-agnostic. Every programming language can parse a number. Date string formats vary wildly across locales and libraries.
  • Sortable and comparable. Comparing two timestamps is a simple integer comparison. Comparing date strings requires parsing them first.
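The arithmetic and comparison points above can be sketched in a few lines of Python (the variable names are illustrative):

```python
import time

DAY = 86_400  # seconds in one day

now = int(time.time())
tomorrow = now + DAY         # exactly 24 hours from now
next_week = now + 7 * DAY    # exactly 7 * 24 hours from now

# Comparison and sorting are plain integer operations --
# no parsing, no timezone handling.
assert sorted([next_week, now, tomorrow]) == [now, tomorrow, next_week]
```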

Seconds vs. Milliseconds

There is one common source of confusion: some systems use seconds since the epoch, while others use milliseconds. For present-day dates, a 10-digit number is seconds and a 13-digit number is milliseconds.

Seconds:      1774051200       (10 digits)
Milliseconds: 1774051200000    (13 digits)

JavaScript’s Date.now() returns milliseconds. Python’s time.time() returns seconds (as a float). Most Unix tools, databases, and APIs use seconds. Always check the documentation for the system you are working with.
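When you control both ends, a small normalizing helper avoids the mixup entirely. The sketch below applies the digit-count heuristic from above; the function name to_seconds and the cutoff are illustrative choices, and the heuristic only holds for present-day timestamps, not historical ones:

```python
def to_seconds(ts: int) -> int:
    # 13-digit values (>= 10**12) are treated as milliseconds,
    # per the digit-count rule of thumb described above.
    return ts // 1000 if ts >= 1_000_000_000_000 else ts

print(to_seconds(1774051200))     # already seconds -> unchanged
print(to_seconds(1774051200000))  # milliseconds -> 1774051200
```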

Converting Timestamps in Every Language

Here are the most common conversions you will need. You can also use our free Timestamp Converter to do this instantly in your browser.

JavaScript / TypeScript

// Current timestamp (milliseconds)
Date.now();                         // 1774051200000

// Current timestamp (seconds)
Math.floor(Date.now() / 1000);      // 1774051200

// Timestamp → Date object
new Date(1774051200 * 1000);        // 2026-03-21T00:00:00.000Z

// Date → timestamp (seconds)
Math.floor(new Date("2026-03-19").getTime() / 1000);

// Human-readable string
new Date(1774051200 * 1000).toISOString();
// "2026-03-21T00:00:00.000Z"

Python

import time
from datetime import datetime, timezone

# Current timestamp (seconds, float)
time.time()                          # 1774051200.123

# Timestamp → datetime (UTC)
dt = datetime.fromtimestamp(1774051200, tz=timezone.utc)
# datetime(2026, 3, 21, 0, 0, tzinfo=timezone.utc)

# datetime → timestamp
dt.timestamp()                       # 1774051200.0

# Human-readable
dt.isoformat()                       # "2026-03-21T00:00:00+00:00"

PHP

// Current timestamp
time();                              // 1774051200

// Timestamp → formatted date
date('Y-m-d H:i:s', 1774051200);    // "2026-03-21 00:00:00"

// Date string → timestamp
strtotime('2026-03-21');             // 1774051200

Command Line (Bash)

# Current timestamp
date +%s                              # 1774051200

# Timestamp → human-readable (GNU date)
date -d @1774051200                   # Sat Mar 21 00:00:00 UTC 2026

# Timestamp → human-readable (macOS date)
date -r 1774051200                    # Sat Mar 21 00:00:00 UTC 2026

Notable Epoch Milestones

Timestamp     Date (UTC)      Event
0             Jan 1, 1970     The Unix epoch
1000000000    Sep 9, 2001     One billion seconds
1234567890    Feb 13, 2009    Sequential digits milestone
2000000000    May 18, 2033    Two billion seconds
2147483647    Jan 19, 2038    Y2K38 — 32-bit overflow

The Year 2038 Problem (Y2K38)

The Y2K38 problem is the Unix equivalent of the Y2K bug. A signed 32-bit integer can store a maximum value of 2,147,483,647. That number of seconds after the epoch corresponds to January 19, 2038, at 03:14:07 UTC. One second later, the value overflows and wraps around to a negative number, which the system interprets as a date in December 1901.

The fix is straightforward: use 64-bit integers for timestamps. Most modern operating systems, databases, and programming languages have already migrated. However, embedded systems, legacy databases, and older file formats that still use 32-bit timestamps will need updating before 2038.

If you are writing new code today, make sure you are storing timestamps as 64-bit integers (or using your language’s native date type, which typically handles this for you). Never use a 32-bit int or INT column for timestamps in new database schemas.
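The wraparound itself is easy to demonstrate. The sketch below uses Python's ctypes to emulate a signed 32-bit counter; the negative result then maps back to December 1901 (note that converting negative timestamps may raise OSError on some Windows builds):

```python
import ctypes
from datetime import datetime, timezone

MAX_INT32 = 2_147_483_647  # last second a signed 32-bit time_t can hold

# One second past the maximum flips the sign.
wrapped = ctypes.c_int32(MAX_INT32 + 1).value
print(wrapped)  # -2147483648

# Interpreted as a timestamp, that value lands in December 1901.
print(datetime.fromtimestamp(wrapped, tz=timezone.utc).isoformat())
# 1901-12-13T20:45:52+00:00
```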

Timestamps vs. ISO 8601: When to Use Each

Unix timestamps and ISO 8601 date strings (like 2026-03-19T00:00:00Z) both represent points in time. Here is when to prefer each:

  • Use timestamps for internal storage, database columns, log correlation, token expiration (JWT exp claims), caching headers, and any context where compact size and easy math matter.
  • Use ISO 8601 for user-facing APIs where human readability matters, configuration files, data interchange formats (JSON responses to front-end clients), and anywhere you need to include timezone offset information.
  • Many systems use both. Store as a timestamp internally, serialize as ISO 8601 in your API response. This gives you the best of both worlds.
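That hybrid approach can be sketched in a few lines of Python; the payload shape is illustrative:

```python
from datetime import datetime, timezone

# Stored internally as a plain integer timestamp...
created_at = 1774051200

# ...serialized as ISO 8601 at the API boundary.
payload = {
    "created_at": datetime.fromtimestamp(created_at, tz=timezone.utc)
                          .isoformat()
                          .replace("+00:00", "Z"),
}
print(payload["created_at"])  # 2026-03-21T00:00:00Z
```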

Common Pitfalls

  • Mixing seconds and milliseconds. This is the most common bug. Multiplying a millisecond timestamp by 1000 (thinking it is seconds) will give you a date tens of thousands of years in the future. Always check whether your source provides seconds or milliseconds.
  • Ignoring timezones. A timestamp is always UTC by definition. But if you create a Date object from a local time string without specifying UTC, your resulting timestamp will be offset by your local timezone.
  • Negative timestamps. Dates before January 1, 1970, are represented as negative numbers. Most modern languages handle this correctly, but some older systems and databases do not.
  • Leap seconds. Unix time does not account for leap seconds. In practice, this rarely matters for application code, but it is worth knowing if you are working in scientific computing or high-precision time systems.
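The timezone pitfall in particular is easy to reproduce. In this Python sketch, the naive datetime's timestamp depends on the machine it runs on, while the aware one is pinned to UTC:

```python
from datetime import datetime, timezone

# A naive datetime is interpreted in the *local* timezone by .timestamp(),
# so the result varies from machine to machine.
naive = datetime(2026, 3, 21)
local_ts = naive.timestamp()

# An aware datetime pins the moment to UTC explicitly.
aware = datetime(2026, 3, 21, tzinfo=timezone.utc)
utc_ts = aware.timestamp()  # 1774051200.0

# The two differ by the local UTC offset (zero only on a UTC machine).
offset_seconds = local_ts - utc_ts
```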

Convert Timestamps Instantly

Whether you are debugging an API response, checking a JWT expiration, or converting between date formats, having a quick converter at hand saves you from context-switching to a REPL or searching for the right library method.

Our free Timestamp Converter lets you paste a Unix timestamp and instantly see the human-readable date, or enter a date and get the corresponding timestamp. It runs entirely in your browser with no data sent to any server.

Try It Now — Free

Use our Timestamp Converter right in your browser. No signup, no upload to any server.

Open Timestamp Converter