How the Go Runtime Preempts Goroutines for Efficient Concurrency

Understanding cooperative scheduling and forced preemption techniques in the Go runtime to manage goroutines efficiently.

By Dinoja Padmanabhan · May 07, 2025 · Analysis


Go's lightweight concurrency model, built on goroutines and channels, has made it a favorite for building efficient, scalable applications. Behind the scenes, the Go runtime employs sophisticated mechanisms to ensure thousands (or even millions) of goroutines run fairly and efficiently. One such mechanism is goroutine preemption, which is crucial for ensuring fairness and responsiveness.

In this article, we'll dive into how the Go runtime implements goroutine preemption, how it works, and why it's critical for compute-heavy applications. We'll also use clear code examples to demonstrate these concepts.

What Are Goroutines and Why Do We Need Preemption?

A goroutine is Go's abstraction of a lightweight thread. Unlike heavy OS threads, a goroutine is incredibly memory-efficient: it starts with a small stack (typically 2 KB) that grows dynamically as needed. The Go runtime schedules goroutines onto a pool of OS threads using an M:N scheduling model, in which N goroutines are multiplexed over M OS threads.
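The cheapness of goroutines is easy to verify empirically. The sketch below (function names are illustrative) parks a large number of goroutines on a WaitGroup and asks the runtime how many exist, while runtime.GOMAXPROCS reports how many OS threads may execute Go code in parallel:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// spawn launches n goroutines that all park until released,
// demonstrating how cheaply the runtime multiplexes many
// goroutines onto a handful of OS threads. It returns the
// number of live goroutines while all n are parked.
func spawn(n int) int {
	var start, done sync.WaitGroup
	start.Add(1)
	done.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			start.Wait() // park until released below
			done.Done()
		}()
	}
	alive := runtime.NumGoroutine() // main + the n parked goroutines
	start.Done()                    // release them all
	done.Wait()
	return alive
}

func main() {
	fmt.Println("GOMAXPROCS (max parallel OS threads):", runtime.GOMAXPROCS(0))
	fmt.Println("goroutines alive while 10000 are parked:", spawn(10000))
}
```

Ten thousand OS threads would be prohibitively expensive; ten thousand parked goroutines cost only a few megabytes of stack in total.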

While Go's cooperative scheduling usually suffices, there are scenarios where long-running or tight-loop goroutines can hog the CPU, starving other goroutines. Example:

Go
 
package main

func hogCPU() {
    for {
        // Simulating a CPU-intensive computation
        // This loop never yields voluntarily
    }
}

func main() {
    // Start a goroutine that hogs the CPU
    go hogCPU()

    // Start another goroutine that prints periodically
    go func() {
        for {
            println("Running...")
        }
    }()

    // Prevent main from exiting
    select {}
}


In the above code, hogCPU() runs indefinitely without yielding control, potentially starving the goroutine that prints messages (println). In earlier versions of Go (pre-1.14), such a pattern could lead to poor responsiveness, as the scheduler wouldn’t get a chance to interrupt the CPU-hogging goroutine.

How Goroutine Preemption Works in the Go Runtime

1. Cooperative Scheduling

Go's scheduler relies on cooperative scheduling, where goroutines voluntarily yield control at certain execution points:

Blocking operations, such as waiting on a channel:

Go
 
func blockingExample(ch chan int) {
    val := <-ch // Blocks here until data is sent on the channel
    println("Received:", val)
}


Function calls, which naturally serve as preemption points:

Go
 
func foo() {
    bar() // Control can yield here since it's a function call
}


While cooperative scheduling works for most cases, it fails for compute-heavy or tight-loop code that doesn't include any blocking operations or function calls.
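Before looking at forced preemption, note that the traditional remedy was to make such code cooperate explicitly, for example by calling runtime.Gosched() inside the hot loop to yield the processor. A minimal sketch (the function and its parameters are illustrative):

```go
package main

import (
	"fmt"
	"runtime"
)

// busySum simulates a compute-heavy loop that cooperates with the
// scheduler by explicitly yielding every `stride` iterations.
func busySum(n, stride int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i
		if i%stride == 0 {
			runtime.Gosched() // yield: let the scheduler run other goroutines
		}
	}
	return sum
}

func main() {
	// 0 + 1 + ... + 9999 = 49995000
	fmt.Println(busySum(10000, 1000))
}
```

This works, but it forces every author of CPU-bound code to remember to yield, which is exactly the burden forced preemption removes.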

2. Forced Preemption for Tight Loops

Starting with Go 1.14, forced preemption was introduced to handle scenarios where goroutines don’t voluntarily yield — for example, in tight loops. Let’s revisit the hogCPU() loop:

Go
 
func hogCPU() {
    for {
        // Simulating tight loop
    }
}


In Go 1.14+, the runtime preempts such loops asynchronously. A background monitor thread (sysmon) notices when a goroutine has been running for more than roughly 10 ms, sets that goroutine's preempt flag, and sends a signal (SIGURG on Unix) to the OS thread executing it. The signal handler interrupts the goroutine at a safe point, allowing the scheduler to run other goroutines.

3. Code Example: Preemption in Action

Here's a practical example of forced preemption in Go:

Go
 
package main

import (
    "time"
)

func tightLoop() {
    for i := 0; i < 1e10; i++ {
        if i%1e9 == 0 {
            println("Tight loop iteration:", i)
        }
    }
}

func printMessages() {
    for {
        println("Message from goroutine")
        time.Sleep(100 * time.Millisecond)
    }
}

func main() {
    go tightLoop()
    go printMessages()

    // Prevent main from exiting
    select {}
}


What Happens?

  • Without preemption, the tightLoop() goroutine could run indefinitely, starving printMessages().
  • With forced preemption (Go 1.14+), the runtime interrupts tightLoop() periodically via asynchronous preemption, allowing printMessages() to execute concurrently.

4. How the Runtime Manages Preemption

Preemption Flags

Each goroutine has metadata managed by the runtime, including a preempt flag on its g structure. If the runtime detects that a goroutine has exceeded its time slice (roughly 10 ms, e.g., while executing a CPU-heavy computation), it sets the preempt flag for that goroutine. The flag is observed at function prologues, piggybacking on the stack-growth check; for code that makes no function calls at all, the runtime instead interrupts the thread with an asynchronous preemption signal.
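The time slice can be observed directly. In the sketch below (illustrative, and assuming Go 1.19+ for the sync/atomic wrapper types), main hands the only P to a spinning goroutine via time.Sleep; the fact that main ever runs again on Go 1.14+ is evidence that the runtime forcibly preempted the spinner:

```go
package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
	"time"
)

// demo shows forced preemption in action: with a single P, a tight
// loop would starve every other goroutine under cooperative
// scheduling alone. Asynchronous preemption (Go 1.14+) lets main run
// again after its sleep expires, because the runtime interrupts the
// spinner once it exceeds its time slice.
func demo() bool {
	prev := runtime.GOMAXPROCS(1)
	defer runtime.GOMAXPROCS(prev)

	var stop atomic.Bool
	var spins atomic.Int64
	go func() {
		for !stop.Load() {
			spins.Add(1) // atomic intrinsic: no cooperative yield point here
		}
	}()

	time.Sleep(20 * time.Millisecond) // hand the only P to the spinner
	// Reaching this line requires the runtime to have preempted
	// the spinning goroutine.
	stop.Store(true)
	return spins.Load() > 0
}

func main() {
	fmt.Println("spinner was preempted:", demo())
}
```

On a pre-1.14 toolchain (or with GODEBUG=asyncpreemptoff=1), the same program can hang at the sleep, which is precisely the starvation problem forced preemption solves.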

Safepoints

Preemption only takes effect at safepoints: locations where the runtime has precise knowledge of the goroutine's stack and registers. This lets the runtime preserve memory consistency and avoid interrupting sensitive operations, such as runtime-internal code that holds locks.

Preemption Example: Tight Loop Without Function Calls

Let’s look at a micro-optimized tight loop without function calls:

Go
 
func tightLoopWithoutCalls() {
    for i := 0; i < 1e10; i++ {
        // Simulating CPU-heavy operations
    }
}


For this code:

  • The loop body contains no function calls, so cooperative scheduling offers no yield point inside it.
  • In Go 1.14+, the runtime preempts it anyway by sending an asynchronous preemption signal to its thread, ensuring other goroutines still get CPU time.

To see preemption in effect, you could monitor your application’s thread activity using profiling tools like pprof or visualize execution using Go's trace tool (go tool trace).

Garbage Collection and Preemption

Preemption also plays a key role in garbage collection (GC). Go's collector runs mostly concurrently with the program, but each cycle includes brief "stop-the-world" phases:

  1. The runtime requests preemption of all running goroutines.
  2. Each goroutine pauses at a safepoint.
  3. The GC performs its stop-the-world work (such as enabling the write barrier or finishing the mark phase), then resumes all goroutines.

This seamless integration ensures memory safety while maintaining concurrency performance.

Conclusion

Goroutine preemption is one of the innovations that make Go a compelling choice for building concurrent applications. While cooperative scheduling works for most workloads, forced preemption ensures fairness in compute-intensive scenarios. Whether you're writing tight loops, managing long-running computations, or balancing thousands of lightweight goroutines, you can rely on Go's runtime to handle scheduling and preemption seamlessly.

Preemption paired with Go's garbage collection mechanisms results in a robust runtime environment, ideal for responsive and scalable software.
