Cloudflare Primitives
A visual guide to Cloudflare Primitives.
Workers
Edge Execution
Stateless compute at the nearest location
A Worker is a small piece of JavaScript deployed across Cloudflare’s global network. Each request runs your bundle in a V8 isolate at the nearest data centre: a fresh isolate is spun up on a cold start, then reused for later requests while it stays warm.
In short: a Worker is a stateless function deployed globally, spawned & executed on demand at the nearest location.
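The shape above is the `fetch` export of a Worker. A minimal sketch (the empty `Env` is a placeholder; real projects receive bindings such as KV or R2 through it):

```typescript
// A minimal Worker: a stateless handler invoked once per request.
// In a real project, Env carries bindings (KV, R2, Durable Objects, ...).
interface Env {}

const worker = {
  async fetch(request: Request, _env: Env): Promise<Response> {
    // Stateless: everything the handler needs arrives with the request.
    const url = new URL(request.url);
    return new Response(`Hello from the edge, path: ${url.pathname}`);
  },
};

export default worker;
```

Because nothing is held between requests, Cloudflare can run this handler in any data centre interchangeably.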
Durable Objects
Global Convergence
Requests route to the single active instance
A Durable Object is a piece of code that runs as a single instance, tied to an ID. Cloudflare routes every request for that ID to that exact instance, no matter where it comes from.
In short: a Durable Object is a single, globally-routed, stateful instance bound to an ID.
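The routing guarantee can be modelled in a few lines. This is a toy registry, not the real runtime or API — its point is only that every lookup for the same ID converges on one live instance:

```typescript
// Toy model of Durable Object routing: all requests for the same ID
// resolve to the same single live instance (illustrative names only).
class Counter {
  value = 0;
  increment(): number { return ++this.value; }
}

class Registry {
  private instances = new Map<string, Counter>();
  // Every request for a given ID converges on one instance.
  get(id: string): Counter {
    let inst = this.instances.get(id);
    if (!inst) { inst = new Counter(); this.instances.set(id, inst); }
    return inst;
  }
}

const registry = new Registry();
registry.get("room-42").increment(); // a request from one region
registry.get("room-42").increment(); // a request from another region
console.log(registry.get("room-42").value); // 2 — both hit the same instance
```

Because all traffic for an ID lands on one instance, that instance can safely coordinate state without distributed locking.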
Durable Objects
Lifecycle & state
Memory ephemeral. Storage persistent.
A Durable Object instance is created when a request for its ID arrives. Cloudflare starts a fresh instance of your code and gives it access to its stored data.
The instance only exists while it's in use. When it goes idle, Cloudflare shuts it down: in-memory state is cleared, but stored data and identity remain.
Durable Objects
In-memory vs persistent storage
Memory ephemeral. Storage persistent.
Durable Objects have two kinds of state.
In-memory state exists only while the instance is running. It's fast, but disappears when the instance shuts down.
Persistent storage survives restarts and hibernation. It keeps data tied to the object's ID even when no instance is running.
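The split can be sketched with a counter object. A plain Map stands in for the real storage API (`ctx.storage` in Workers); the eviction between instances is simulated by constructing a new object over the same storage:

```typescript
// Sketch of a Durable Object's two kinds of state. The Map is a stand-in
// for persistent storage; the class field is the in-memory state.
type Storage = Map<string, number>;

class CounterObject {
  private cache: number | undefined;       // in-memory: lost on shutdown
  constructor(private storage: Storage) {} // persistent: survives shutdown

  increment(): number {
    const next = (this.cache ?? this.storage.get("count") ?? 0) + 1;
    this.cache = next;
    this.storage.set("count", next); // persisted before returning
    return next;
  }
}

const storage: Storage = new Map();
let instance = new CounterObject(storage);
instance.increment(); // 1
instance.increment(); // 2
// Idle: the runtime evicts the instance; memory is gone, storage is not.
instance = new CounterObject(storage);
console.log(instance.increment()); // 3 — recovered from persistent storage
```

The in-memory cache makes hot reads fast; the storage write is what lets the count outlive the instance.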
R2
Object Storage
Location-pinned storage with globally cached reads
R2 is Cloudflare's object storage for large, persistent files. Objects are stored in a specific storage location but can be read from any data centre.
When a data centre handles a read, it checks its local cache first. If the object isn’t present, it retrieves it from the primary storage location, caches it, and returns the response.
This provides reliable, location-pinned storage with globally cached reads - without any egress fees.
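That read path is a classic read-through cache. A toy sketch with Maps standing in for the edge cache and the primary storage location (not the R2 API itself):

```typescript
// Read-through cache sketch of an R2 read at an edge data centre:
// local cache first, then the primary storage location on a miss.
const primaryStore = new Map<string, string>([["logo.png", "<bytes>"]]);
const edgeCache = new Map<string, string>();
let originReads = 0;

function readObject(key: string): string | undefined {
  const cached = edgeCache.get(key);
  if (cached !== undefined) return cached;        // served locally
  originReads++;
  const obj = primaryStore.get(key);              // round-trip to storage
  if (obj !== undefined) edgeCache.set(key, obj); // cache for later reads
  return obj;
}

readObject("logo.png"); // miss: goes to the primary location
readObject("logo.png"); // hit: served from the edge cache
console.log(originReads); // 1
```

Only the first read in a region pays the round-trip; every later read is local.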
D1
Database
A relational database for the edge
D1 is Cloudflare’s serverless SQL database, built on SQLite.
Each database has a primary location for writes and can optionally use read replicas in other locations to serve reads closer to users.
Deploy a database (bound to your Worker) and let Cloudflare handle storage, consistency, and scaling.
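Binding a database to a Worker is a few lines of wrangler configuration; the names and ID below are placeholders:

```toml
[[d1_databases]]
binding = "DB"                       # exposed to the Worker as env.DB
database_name = "my-app-db"
database_id = "<your-database-id>"
```

The Worker then issues SQL through the binding, e.g. `env.DB.prepare(...)`, and Cloudflare runs the query against the database for you.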
Workers KV
Global Reads
Reads are served locally from the nearest replica
Workers KV is a globally distributed key-value store. When you write data, Cloudflare replicates it across its edge network.
Reads come from the nearest edge location, making it fast and ideal for read-heavy data. The trade-off is that updates take time to fully propagate, so reads aren't always immediately consistent.
Workers KV
Eventual Consistency
Writes are propagated asynchronously to global replicas
Workers KV is eventually consistent.
When you write a new value, it is first stored in a primary location. From there, KV asynchronously propagates the update to other edge locations. During this window, some locations may still serve the previous value while others have the new one.
Reads are therefore extremely fast but may be temporarily stale. KV is a great fit for data that can tolerate this behaviour, but it isn't suitable for real-time data or anything that requires strong consistency.
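The propagation window can be simulated with a primary and two replicas. This is a toy model of the behaviour described above, not KV's internals:

```typescript
// Toy simulation of eventual consistency: a write lands at the primary
// and is pushed to replicas one step at a time.
const primary = new Map<string, string>();
const replicas = [new Map<string, string>(), new Map<string, string>()];

function write(key: string, value: string): void {
  primary.set(key, value); // stored at the primary first
}

function propagateOne(): void {
  // Asynchronous replication, modelled as one replica catching up per step.
  const stale = replicas.find(r => r.get("flag") !== primary.get("flag"));
  if (stale) for (const [k, v] of primary) stale.set(k, v);
}

write("flag", "v1");
propagateOne(); propagateOne(); // fully propagated
write("flag", "v2");
propagateOne();                 // only one replica has caught up so far
const reads = replicas.map(r => r.get("flag"));
console.log(reads); // [ 'v2', 'v1' ] — a stale read during the window
```

Until the last `propagateOne()` for a write has run, which value you read depends on which replica you hit — exactly the window described above.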
Queues
Message Queuing
Queued event delivery for background processing.
Queues is exactly what the name suggests: Cloudflare's message queue for async work. When you enqueue a message, Cloudflare stores it and later invokes your consumer Worker with a batch of messages to process.
You define the message format and processing logic; Cloudflare takes care of delivery, batching, retries, and scaling.
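A consumer is a Worker that exports a `queue()` handler. In this sketch the minimal `Message`/`MessageBatch` types are stand-ins for the runtime's own, and the fake batch at the end shows the calling convention:

```typescript
// Sketch of a Queues consumer Worker.
interface Message<T> { body: T; ack(): void; retry(): void; }
interface MessageBatch<T> { messages: Message<T>[]; }

const sent: string[] = [];

const consumer = {
  async queue(batch: MessageBatch<{ email: string }>): Promise<void> {
    for (const msg of batch.messages) {
      try {
        sent.push(msg.body.email); // ...process the message...
        msg.ack();                 // done: don't redeliver
      } catch {
        msg.retry();               // failed: Cloudflare redelivers later
      }
    }
  },
};

// Invoke with a fake batch (the handler has no awaits, so effects land
// synchronously here):
let acked = 0;
void consumer.queue({
  messages: [{ body: { email: "a@example.com" }, ack: () => acked++, retry: () => {} }],
});
console.log(sent, acked); // [ 'a@example.com' ] 1
```

Acknowledging per message is what lets Cloudflare retry only the failures in a batch rather than the whole batch.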
Hyperdrive
Database Acceleration
Connection pooling and query caching at the edge
Hyperdrive accelerates access to existing databases by managing connection pooling and caching at the edge.
Without Hyperdrive, each Worker request to a database involves multiple round-trips for connection setup, adding latency. Hyperdrive moves this overhead to the edge, maintaining warm connections and caching read queries.
Writes bypass the cache and flow through the connection pool directly to your database, while reads are served from the cache when possible.
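The read/write split can be sketched with stand-in Maps for the origin database and the edge query cache. (This is a simplified model: the explicit invalidation on write is this sketch's shortcut, where Hyperdrive relies on cache expiry.)

```typescript
// Sketch of Hyperdrive's behaviour: reads served from an edge query cache
// when possible; writes always flow through to the origin database.
let originQueries = 0;
const originDb = new Map<string, string>([["user:1", "Ada"]]);
const queryCache = new Map<string, string | undefined>();

function read(key: string): string | undefined {
  if (queryCache.has(key)) return queryCache.get(key); // cached at the edge
  originQueries++;
  const row = originDb.get(key); // round-trip through the pooled connection
  queryCache.set(key, row);
  return row;
}

function write(key: string, value: string): void {
  originQueries++;
  originDb.set(key, value);  // writes bypass the cache entirely
  queryCache.delete(key);    // sketch's shortcut for cache expiry
}

read("user:1"); read("user:1"); // second read is a cache hit
write("user:1", "Grace");
console.log(read("user:1"), originQueries); // Grace 3
```

Without the cache and pool, every one of those reads would have paid connection setup plus a query round-trip to the origin.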
Pipelines
Streaming Data Pipelines
Ingest, transform, and load streaming data.
Cloudflare Pipelines lets you take streams of events from anywhere, clean them up with simple SQL, and automatically save the results into R2 as Iceberg tables or as Parquet/JSON files.
Pipelines handles ingestion, batching, transformation, and writing to storage - you just write the SQL that describes how your data should look.
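What a flush does to a batch can be sketched in a few lines. The filter and projection below stand in for the SQL you would write, and the array stands in for a file landing in R2 (all names here are illustrative):

```typescript
// Sketch of one pipeline flush: filter and project a batch of events,
// then write the result to the sink as one JSON document.
interface Event { level: string; msg: string; ts: number; }

const sink: string[] = []; // stand-in for a Parquet/JSON object in R2

function flush(batch: Event[]): void {
  const rows = batch
    .filter(e => e.level === "error")      // WHERE level = 'error'
    .map(e => ({ msg: e.msg, ts: e.ts })); // SELECT msg, ts
  sink.push(JSON.stringify(rows));         // one object per flushed batch
}

flush([
  { level: "info", msg: "ok", ts: 1 },
  { level: "error", msg: "boom", ts: 2 },
]);
console.log(sink[0]); // [{"msg":"boom","ts":2}]
```

In a real pipeline you only supply the SQL equivalent of that filter and projection; the batching and the write to R2 are Cloudflare's job.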
Workflows
Orchestration
Run multi-step logic that survives failures and restarts
Cloudflare Workflows runs your logic step-by-step and automatically saves state as it goes.
Each workflow executes sequentially through defined steps, with the ability to sleep for extended periods while maintaining state.
Perfect for long-running business processes, scheduled jobs, and complex operations requiring reliability.
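The "saves state as it goes" idea is checkpointed steps: a completed step's result is replayed on restart instead of being recomputed. A toy synchronous model (real Workflows steps are async and checkpoints live in durable storage; names are illustrative):

```typescript
// Toy model of durable execution: each step's result is checkpointed,
// so a restarted run replays finished steps from saved state.
const checkpoints = new Map<string, unknown>();
let sideEffects = 0;

function step<T>(name: string, fn: () => T): T {
  if (checkpoints.has(name)) return checkpoints.get(name) as T; // replay
  const result = fn();
  checkpoints.set(name, result); // persist before moving on
  return result;
}

function run(crashBetweenSteps: boolean): string {
  const order = step("create-order", () => { sideEffects++; return "order-1"; });
  if (crashBetweenSteps) throw new Error("crash between steps");
  return step("charge-card", () => { sideEffects++; return `${order}:paid`; });
}

try { run(true); } catch {}  // first attempt dies mid-workflow
const receipt = run(false);  // retry: "create-order" is not re-executed
console.log(receipt, sideEffects); // order-1:paid 2
```

Each side effect runs exactly once even though the workflow body ran twice — that replay-from-checkpoint behaviour is what makes long sleeps and crash recovery safe.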