Build reliable
software effortlessly

Add durable, observable workflows & queues to your code in minutes.
Make apps resilient to any failure: the fastest path to production-ready.

Build with your stack
SOC 2 Compliant
2024 Gartner® Cool Vendor™

Durable workflows, with endless possibilities

Build reliable AI agents

Use durable workflows to build reliable, fault-tolerant AI agents.

@DBOS.workflow()
def agentic_research_workflow(topic, max_iterations):
    research_results = []

    for i in range(max_iterations):
        # Run a query based on the current topic
        research_result = research_query(topic)
        research_results.append(research_result)

        # Stop if the results suggest no further research is needed
        if not should_continue(research_results):
            break

        # Refine the topic for the next research iteration
        topic = generate_next_topic(topic, research_results)

    # Combine all collected results into a final report
    return synthesize_research_report(research_results)

# Steps checkpoint their results, so completed steps never re-execute on recovery
@DBOS.step()
def research_query(topic):
    ...
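
To run the agent, use the same workflow primitives shown elsewhere on this page; a minimal sketch, where the topic string and iteration count are illustrative:

# Start the agent in the background; DBOS durably tracks its progress
handle = DBOS.start_workflow(agentic_research_workflow, "durable execution", 5)

# Wait for the agent to finish and retrieve its final report
report = handle.get_result()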

Orchestrate durable workflows

Write your business logic in normal code, with branches, loops, subtasks, and retries. DBOS makes it resilient to any failure.

# Define a durable checkout workflow
@DBOS.workflow()
def checkout_workflow(items):
    # Step 1: Create the order
    order = create_order()

    # Step 2: Reserve inventory for the items
    reserve_inventory(order, items)

    # Step 3: Process the payment
    payment_status = process_payment(order, items)

    # Step 4: If paid, fulfill the order
    if payment_status == 'paid':
        fulfill_order(order)
    else:
        # If payment fails, release inventory and cancel the order
        undo_reserve_inventory(order, items)
        cancel_order(order)
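
Individual steps can also retry automatically on transient failures; a minimal sketch, assuming DBOS's step-level retry parameters (retries_allowed, max_attempts):

# Retry this step up to 5 times if it raises an exception
@DBOS.step(retries_allowed=True, max_attempts=5)
def process_payment(order, items):
    ...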

Scale reliably with durable queues

Use durable queues to simplify fault-tolerant orchestration of thousands of concurrent tasks. Control how many tasks can run concurrently or how often tasks can start.

# Create a named queue for indexing tasks
queue = Queue("indexing_queue")

# Define a durable workflow that indexes a list of URLs
@DBOS.workflow()
def indexing_workflow(urls: List[HttpUrl]):
    handles: List[WorkflowHandle] = []

    # Enqueue a document indexing task for each URL
    for url in urls:
        handle = queue.enqueue(index_document, url)
        handles.append(handle)

    indexed_pages = 0

    # Wait for each indexing task to complete and tally results
    for handle in handles:
        indexed_pages += handle.get_result()

    # Log the total indexed pages
    logger.info(f"Indexed {len(urls)} documents totaling {indexed_pages} pages")
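
Flow control is set on the queue itself; a sketch of the concurrency and rate limits mentioned above (the queue name and numbers are illustrative):

from dbos import Queue

# Run at most 10 tasks from this queue at once,
# and start no more than 50 tasks per 30-second window
throttled_queue = Queue(
    "throttled_queue",
    concurrency=10,
    limiter={"limit": 50, "period": 30},
)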

Process your events exactly once

Consume events exactly once, with no need to worry about timeouts or offsets.

# Listen for new messages on the "alerts-topic" Kafka topic,
# triggering a durable workflow for each message
@DBOS.kafka_consumer(config, ["alerts-topic"])
@DBOS.workflow()
def process_kafka_alerts(msg: KafkaMessage):
    # Parse the payload, assumed here to be a JSON array of alerts
    alerts = json.loads(msg.value.decode())

    # Respond to each alert in the message
    for alert in alerts:
        respond_to_alert(alert)
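
The config passed to the consumer is defined elsewhere; a plausible sketch, assuming confluent-kafka-style settings (the broker address and group id are illustrative):

# Kafka consumer settings; broker address and group id are examples
config = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "alerts-consumer-group",
}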

Cron jobs made easy

Schedule your durable workflows to run exactly once per time interval. Record a stock's price once a minute, migrate some data once every hour, or send emails to inactive users once a week.

# Schedule this durable workflow to run at the start of every hour;
# DBOS passes the scheduled and actual execution times
@DBOS.scheduled("0 * * * *")
@DBOS.workflow()
def run_hourly(scheduled_time: datetime, actual_time: datetime):
    # Search Hacker News for the keyword "serverless"
    results = search_hackernews("serverless")
    
    # Post each result (comment and URL) to Slack
    for comment, url in results:
        post_to_slack(comment, url)
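
The once-a-minute stock recorder mentioned above looks much the same; a minimal sketch, where record_stock_price and the ticker are hypothetical:

# Run every minute, recording the latest price of a stock
@DBOS.scheduled("* * * * *")
@DBOS.workflow()
def record_price_every_minute(scheduled_time: datetime, actual_time: datetime):
    record_stock_price("ACME")  # hypothetical step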

Launch durable background tasks

Workflows guarantee that background tasks eventually complete, despite restarts and failures. Durable sleep lets your tasks wait for days or weeks before continuing, and durable notifications let them wait for an event.

@DBOS.workflow()
def schedule_reminder(to_email, days_to_wait):
    # Durably sleep for the requested number of days, then send the reminder
    DBOS.sleep(days_to_seconds(days_to_wait))
    send_reminder_email(to_email, days_to_wait)

@app.post("/email")
def email_endpoint(request):
    # Start the reminder workflow in the background and return immediately
    DBOS.start_workflow(schedule_reminder, request.email, request.days)
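
Tasks can also wait on a notification instead of a timer; a minimal sketch using DBOS.recv and DBOS.send, where the topic name and finalize_request helper are illustrative:

@DBOS.workflow()
def await_approval_workflow():
    # Durably wait up to 7 days for an "approval" notification
    approval = DBOS.recv("approval", timeout_seconds=7 * 24 * 60 * 60)
    if approval is not None:
        finalize_request(approval)  # hypothetical step

# Elsewhere, wake the waiting workflow by its workflow ID:
# DBOS.send(workflow_id, {"approved": True}, "approval")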
Why DBOS

Durable workflows done right

Add durable workflows to your app in just a few lines of code. No additional infrastructure required.

No extra servers

Run anywhere, from your own hardware to any cloud. No new infrastructure required.

No rearchitecting

Add a few annotations to your code to make it durable. Nothing else needed.

No privacy issues

We never access your data. It stays private and under your control.

Customer Stories

A new approach to durable execution

Hear how companies like Dosu, Soria, and Yutori ship durable workflows with DBOS at their core.

Soria Analytics

"DBOS made highly parallelized, long-running AI workflows much easier to scale and observe."

Dosu

"Durable orchestration and observability without having to run additional infrastructure.”

Yutori AI

"DBOS fits our code, unlike Temporal which forces us to rewrite code to fit Temporal."

“I'm amazed at the ease, and amazed at how much I didn't have to think about my queuing and job infrastructure.”

Cameron Spiller

Co-Founder & CTO, Soria

“To switch to Temporal, we would need to rewrite our code to suit...we realized that with DBOS we can just add some decorators to our existing code.”

Yunfan Ye

Technical Staff, Yutori AI

Durable workflows

Make code reliable in minutes

Add a few annotations to your code to make it durable.
If your application crashes or restarts, DBOS automatically resumes your workflows from the last completed step.

Automatic failure recovery
Built-in observability
Durable execution guaranteed
[Animated demo: a three-step data pipeline (Fetch data inputs → Transform data → Store to database) fails at the Transform step, DBOS recovers it, and all three steps complete, with a live system console tracing the failure and recovery.]
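
In code, making the demo's pipeline durable takes only annotations; a minimal sketch, assuming the standard DBOS setup (DBOS() then DBOS.launch()) and hypothetical step bodies:

from dbos import DBOS

DBOS()

@DBOS.step()
def fetch_data_inputs():
    ...

@DBOS.step()
def transform_data(data):
    ...

@DBOS.step()
def store_to_database(data):
    ...

# After a crash or restart, the workflow resumes from the last completed step
@DBOS.workflow()
def data_pipeline():
    data = fetch_data_inputs()
    transformed = transform_data(data)
    store_to_database(transformed)

DBOS.launch()  # recovers pending workflows and starts accepting work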

Build with your favorite language.
Deploy anywhere.

Durable execution

Never lose progress: functions resume from the last successful step, even after crashes.

Built-in observability

Interactively view, search, and manage your workflows from a graphical UI.

Durable queues

Lightweight, durable, distributed queues backed by Postgres.

Autoscale

DBOS automatically scales your application to meet demand and notifies you as limits approach.

Host anywhere

Run your workflows on any infrastructure: cloud, on-prem, or containers.

Schedule cron jobs

Replace brittle cron jobs with reliable and observable workflows.

"We've been impressed by how lightweight and flexible DBOS is, the speed at which their team ships, and the level of support offered. We are excited to scale with DBOS."
Abhishek Das

CEO & Co-Founder, Yutori.ai

“The impact on observability has been huge...so we have a lot better statistics on how things fail, why things fail...It’s simplified our infrastructure.”

Devin Stein

Founder, Dosu.dev

50k

Customers served

10k+

Workflows per hour

Monitor workflows in real time
Spot issues fast
One-click workflow replay