uring

Published: May 15, 2026 License: MIT Imports: 20 Imported by: 0

Go package for the kernel-facing io_uring boundary on Linux 6.18+.

Language: English | 简体中文 | Español | 日本語 | Français

Overview

uring is the kernel-facing boundary for Linux io_uring. It creates and starts rings, prepares SQEs, decodes CQEs, carries submission identity through user_data, and exposes buffer registration, multishot operations, and listener-setup primitives without turning them into a scheduler.

The package keeps the boundary explicit: kernel mechanics and observable completion facts live here; policy and composition live above it. Caller-side runtime code owns completion correlation, retry/backoff, handler and session routing, connection lifecycle, and terminal resource release.

The primary surfaces are:

  • Uring, the live ring handle and operation set
  • SQEContext, the submission identity carried in user_data
  • CQEView, the borrowed completion view returned by Wait
  • buffer provisioning through registered buffers and multi-size buffer groups

Installation

uring requires Linux kernel 6.18 or later. Check the running kernel first:

uname -r

uring assumes the 6.18+ baseline and carries no fallback branches for older kernels. Boot a supported kernel instead of expecting compatibility shims inside this package.

Install the package:

go get code.hybscloud.com/uring

Debian 13 kernel upgrade

Debian 13 ships kernel 6.12 in its stable track, below the 6.18 requirement. The trixie-backports suite provides a Debian-packaged 6.18+ kernel. See SETUP.md for step-by-step instructions.

Troubleshooting

Ring creation may return ENOMEM, EPERM, or ENOSYS depending on memlock limits, sysctl settings, or kernel support. Some container runtimes, such as Docker with its default seccomp profile, block io_uring syscalls. See SETUP.md for diagnosis and resolution.

Ring lifecycle

New returns an unstarted ring and eagerly constructs the context pools. Call Start before submitting operations; it registers ring resources and enables the ring. The example below submits a read, waits for the matching CQE, and uses iox.Classify so ErrWouldBlock stays a semantic no-progress result rather than a failure.

ring, err := uring.New(func(o *uring.Options) {
    o.Entries = uring.EntriesMedium
})
if err != nil {
    return err
}

if err := ring.Start(); err != nil {
    return err
}
defer ring.Stop()

fd := iofd.NewFD(int(file.Fd()))
buf := make([]byte, 4096)
ctx := uring.PackDirect(uring.IORING_OP_READ, 0, 0, 0).WithFD(fd)
if err := ring.Read(ctx, buf); err != nil {
    return err
}

cqes := make([]uring.CQEView, 64)
var backoff iox.Backoff

for {
    n, err := ring.Wait(cqes)
    switch iox.Classify(err) {
    case iox.OutcomeWouldBlock:
        backoff.Wait()
        continue
    case iox.OutcomeFailure:
        return err
    }
    if n == 0 {
        backoff.Wait()
        continue
    }

    backoff.Reset()
    for i := range n {
        cqe := cqes[i]
        if cqe.Op() != uring.IORING_OP_READ || cqe.FD() != fd {
            continue
        }
        if err := cqe.Err(); err != nil {
            return fmt.Errorf("uring read failed: %w", err)
        }
        handle(buf[:int(cqe.Res)])
        return nil
    }
}

Wait flushes pending submissions, then reaps completions. On single-issuer rings it also issues the kernel enter that keeps deferred task work moving once the SQ drains; the caller must serialize Wait, WaitDirect, and WaitExtended with other submit-state operations. If iox.Classify(err) yields iox.OutcomeWouldBlock, no completion is currently observable at the boundary.

Start and Stop form the ring lifecycle pair. Stop is idempotent and renders the ring permanently unusable; call it only after you have drained all in-flight operations, reaped outstanding CQEs, and quiesced live multishot subscriptions.

Types and operations

Type Role
Uring Ring setup, submission, completion reaping, and operation methods
Options Ring entries, registered-buffer budget, buffer-group scale, and completion visibility
SQEContext Compact submission identity stored in user_data
CQEView Borrowed completion record with decoded context accessors
ListenerOp Handle to a listener creation operation with FD and accept helpers
BundleIterator Iterates over buffers consumed in a bundle receive
IncrementalReceiver Manages incremental buffer-ring receives (IOU_PBUF_RING_INC)
ZCTracker Tracks the two-CQE zero-copy send lifecycle
ContextPools Pools for indirect and extended submission contexts
ZCRXReceiver Zero-copy receive lifecycle over a NIC RX queue
ZCRXConfig Configuration for a ZCRX receive instance
ZCRXHandler Callback interface for ZCRX data, errors, and shutdown
ZCRXBuffer Delivered zero-copy receive view with kernel refill on release

Operations:

Area Methods
Socket TCP4Socket, TCP6Socket, UDP4Socket, UDP6Socket, UDPLITE4Socket, UDPLITE6Socket, SCTP4Socket, SCTP6Socket, UnixSocket, SocketRaw, plus *Direct variants
Connection Bind, Listen, Accept, AcceptDirect, Connect, Shutdown
Socket I/O Receive, Send, RecvMsg, SendMsg, ReceiveBundle, ReceiveZeroCopy, Multicast, MulticastZeroCopy
Multishot AcceptMultishot, ReceiveMultishot, SubmitAcceptMultishot, SubmitAcceptDirectMultishot, SubmitReceiveMultishot, SubmitReceiveBundleMultishot
File I/O Read, Write, ReadV, WriteV, ReadFixed, WriteFixed, ReadvFixed, WritevFixed
File mgmt OpenAt, Close, Sync, Fallocate, FTruncate, Statx, RenameAt, UnlinkAt, MkdirAt, SymlinkAt, LinkAt
Xattr FGetXattr, FSetXattr, GetXattr, SetXattr
Transfer Splice, Tee, Pipe, SyncFileRange, FileAdvise
Timeout Timeout, TimeoutRemove, TimeoutUpdate, LinkTimeout
Cancel AsyncCancel, AsyncCancelFD, AsyncCancelOpcode, AsyncCancelAny, AsyncCancelAll
Poll PollAdd, PollRemove, PollUpdate, PollAddLevel, PollAddMultishot, PollAddMultishotLevel
Async EpollWait, FutexWait, FutexWake, FutexWaitV, Waitid
Ring msg MsgRing, MsgRingFD, FixedFdInstall, FilesUpdate
Cmd UringCmd, UringCmd128, Nop, Nop128

Nop128 and UringCmd128 require a ring created with Options.SQE128 and kernel support for the corresponding opcodes. Without both, they return ErrNotSupported.

Uring.Close submits IORING_OP_CLOSE for a target file descriptor. It is not a ring teardown method.

Context transport

SQEContext is the primary identity token. In direct mode it packs the opcode, SQE flags, buffer-group ID, and file descriptor into a single 64-bit value.

sqeCtx := uring.ForFD(fd).
    WithOp(uring.IORING_OP_RECV).
    WithBufGroup(groupID)

The three context modes are:

Mode Representation Typical use
Direct Inline 64-bit payload Common submit and reap path, zero allocation
Indirect Pointer to IndirectSQE Full SQE payload when 64 bits are not enough
Extended Pointer to ExtSQE Full SQE plus 64 bytes of user data

For the common path, start with ForFD or PackDirect and attach only the bits you need to see again at completion time. WithFlags replaces the entire flag set, so compute unions before calling it.

When you need caller-owned metadata beyond the 64-bit direct layout, borrow an ExtSQE, write into its UserData through Ctx*Of or ViewCtx*, and pack it back into an SQEContext. Prefer scalar payloads. If a raw overlay or typed view stores Go pointers, interfaces, func values, slices, strings, maps, chans, or structs containing them, keep the live roots outside UserData; the GC does not trace those raw bytes.

ext := ring.ExtSQE()
meta := uring.CtxV1Of(ext)
meta.Val1 = requestSeq

sqeCtx := uring.PackExtended(ext)
fmt.Printf("sqe context mode=%d seq=%d\n", sqeCtx.Mode(), meta.Val1)

NewContextPools returns pools that are ready to use. Call Reset only once all borrowed contexts have been returned and you want to reuse the pool set.

Completion dispatch with CQEView

There is no separate completion-context type. All completion dispatch goes through CQEView; call cqe.Context() to recover the original submission token.

cqes := make([]uring.CQEView, 64)

n, err := ring.Wait(cqes)
switch iox.Classify(err) {
case iox.OutcomeWouldBlock:
    return iox.ErrWouldBlock
case iox.OutcomeFailure:
    return err
}
if n == 0 {
    return iox.ErrWouldBlock
}

for i := 0; i < n; i++ {
    cqe := cqes[i]
    if err := cqe.Err(); err != nil {
        return fmt.Errorf("completion failed: op=%d fd=%d: %w", cqe.Op(), cqe.FD(), err)
    }

    switch cqe.Op() {
    case uring.IORING_OP_ACCEPT:
        fmt.Printf("accepted fd=%d\n", cqe.Res)
    case uring.IORING_OP_RECV:
        if cqe.HasBuffer() {
            fmt.Printf("buffer id=%d\n", cqe.BufID())
        }
        if cqe.Extended() {
            seq := uring.CtxV1Of(cqe.ExtSQE()).Val1
            fmt.Printf("request seq=%d\n", seq)
        }
    }
}

CQEView decodes the matching context mode on demand at completion time. CQEView, IndirectSQE, ExtSQE, and borrowed buffers must not outlive their documented lifetimes.

Buffer provisioning

uring has three practical buffer paths. Registered buffers are pinned during ring setup and used by fixed-buffer file I/O. Provided buffer rings let the kernel choose a receive buffer and report the selected buffer ID in the CQE. Bundle receives consume a contiguous logical range of provided buffers and expose that range through BundleIterator.

  • fixed-size provided buffers through ReadBufferSize and ReadBufferNum
  • multi-size buffer groups through MultiSizeBuffer
  • registered fixed buffers through LockedBufferMem, RegisteredBuffer, ReadFixed, and WriteFixed

For most systems the configuration helpers are the easiest entry point:

opts := uring.OptionsForSystem(uring.MachineMemory4GB)
ring, err := uring.New(func(o *uring.Options) {
    *o = opts
})

Use OptionsForBudget to start from an explicit memory budget, or BufferConfigForBudget to inspect the tier layout chosen for a given budget:

cfg, scale := uring.BufferConfigForBudget(256 * uring.MiB)
fmt.Printf("buffer tiers=%+v scale=%d\n", cfg, scale)

Fixed-buffer I/O uses a registered buffer by index. The returned slice is ring-owned memory; keep it live until the fixed operation completes:

buf := ring.RegisteredBuffer(0)
copy(buf, payload)

fd := iofd.NewFD(int(file.Fd()))
ctx := uring.PackDirect(uring.IORING_OP_WRITE_FIXED, 0, 0, 0).WithFD(fd)
if err := ring.WriteFixed(ctx, 0, len(payload)); err != nil {
    return err
}

For socket receive with kernel buffer selection, pass nil as the receive buffer and request the size class you want. The completion reports which buffer was selected:

recvCtx := uring.PackDirect(uring.IORING_OP_RECV, 0, 0, 0)

if err := ring.Receive(recvCtx, &socketFD, nil, uring.WithReadBufferSize(uring.BufferSizeSmall)); err != nil {
    return err
}

// Later, after Wait returns the matching CQE:
if cqe.HasBuffer() {
    fmt.Printf("kernel selected group=%d id=%d\n", cqe.BufGroup(), cqe.BufID())
}

Bundle receives use the same provided-buffer storage but may consume more than one buffer in a single CQE. Process the iterator, then recycle the consumed slots:

if err := ring.ReceiveBundle(recvCtx, &socketFD, uring.WithReadBufferSize(uring.BufferSizeSmall)); err != nil {
    return err
}

if it, ok := ring.BundleIterator(cqe, cqe.BufGroup()); ok {
    for buf := range it.All() {
        handle(buf)
    }
    it.Recycle(ring)
}

Registered buffers require pinned memory. If large buffer registration fails, increase RLIMIT_MEMLOCK or use a smaller memory budget.

Multishot and listener operations

AcceptMultishot, ReceiveMultishot, SubmitAcceptMultishot, SubmitAcceptDirectMultishot, SubmitReceiveMultishot, and SubmitReceiveBundleMultishot each submit a multishot socket operation.

CQE routing policy stays outside the package. Listener setup progresses through DecodeListenerCQE, PrepareListenerBind, PrepareListenerListen, and SetListenerReady; the caller decides how to dispatch completions and when to stop the chain.

Architecture

The implementation sits at this boundary:

  1. New builds a disabled kernel ring, constructs context pools, and selects a buffer strategy.
  2. Start registers buffers and enables the ring for the 6.18+ baseline.
  3. Operation methods express intent by writing SQEs.
  4. Wait flushes submissions and returns borrowed CQE views.
  5. Caller-side runtime code decides scheduling, retries, parking, connection/session routing, and terminal resource policy.

This keeps uring focused on kernel-facing mechanics and preserves completion meaning across the boundary.

Runtime boundary

Runtime layers above uring should use it as the kernel backend, not as a scheduler. The ideal seam is one-way: uring prepares SQEs, reaps CQEs, preserves user_data, exposes CQE res and flags, and reports ownership facts; caller-side runtime code correlates those observations with its own tokens, applies retry/backoff, routes handlers and sessions, batches submissions, and releases terminal resources.

A runtime bridge can consume Extended-mode CQEs when abstract execution needs completion facts. A connection-scoped runtime can also poll raw Extended CQEs directly when it needs the CQE result, flags, buffer ID, and encoded token before reducing the event to handler callbacks.

Context and abstract-execution layers above this boundary do not change uring's kernel-boundary role.

Application-layer patterns

uring exposes kernel mechanics; scheduling, retry, connection tracking, and protocol interpretation belong in the layers above it. The patterns below describe the boundary a caller-side runtime must preserve.

Ring-owning event loop

In single-issuer mode (the default), one goroutine serializes all submit-state operations. A typical loop submits pending work, applies caller-owned iox.Backoff when Wait reports no observable progress, and dispatches completions:

func runLoop(ring *uring.Uring, stop <-chan struct{}) error {
    cqes := make([]uring.CQEView, 64)
    var backoff iox.Backoff
    for {
        select {
        case <-stop:
            return nil
        default:
        }

        n, err := ring.Wait(cqes)
        switch iox.Classify(err) {
        case iox.OutcomeWouldBlock:
            backoff.Wait()
            continue
        case iox.OutcomeFailure:
            return err
        }
        if n == 0 {
            backoff.Wait()
            continue
        }

        backoff.Reset()
        for i := range n {
            dispatch(ring, cqes[i])
        }
    }
}

All ring methods, including Send, Receive, AcceptMultishot, and Wait, run on this goroutine. Work from other goroutines enters the loop through a channel or a lock-free queue, not by calling ring methods directly. iox.Backoff stays caller-owned: call backoff.Wait() on iox.OutcomeWouldBlock or when Wait returns no CQEs, and backoff.Reset() after any batch with n > 0.

Multishot subscription lifecycle

A multishot operation produces a stream of CQEs until the kernel sends a final one (without IORING_CQE_F_MORE). Caller-side code routes each CQE through the returned subscription in the same serialized completion loop before falling back to the rest of the dispatcher:

handler := uring.NewMultishotSubscriber().
    OnStep(func(step uring.MultishotStep) uring.MultishotAction {
        if step.Err != nil {
            return uring.MultishotStop
        }
        connFD := iofd.FD(step.CQE.Res)
        registerConnection(connFD)
        return uring.MultishotContinue
    }).
    OnStop(func(err error, cancelled bool) {
        if !cancelled {
            resubscribeAccept()
        }
    })

sub, err := ring.AcceptMultishot(acceptCtx, handler.Handler())
if err != nil {
    return err
}

// Dispatch in the same serialized completion loop. If caller code stores
// copied CQEs beyond this loop, it must keep its own route state.
for i := range n {
    if sub.HandleCQE(cqes[i]) {
        continue
    }
    dispatch(ring, cqes[i])
}

Each OnStep callback observes one MultishotStep: return MultishotContinue to keep the stream live, or MultishotStop to request cancellation. If callbacks remain enabled until the terminal observation, OnStop runs at most once with the final error and a cancelled flag; on the step itself, step.Cancelled distinguishes the specific -ECANCELED kernel verdict from other failures at the boundary.

Both callbacks are the builder-side projection of the MultishotHandler interface (OnMultishotStep / OnMultishotStop); implement that interface directly when you want an explicit handler.

HandleCQE is for immediate dispatch in the caller's serialized completion loop; if caller code stores copied CQEs beyond that loop, it must keep its own route state and reject observations for retired subscriptions.

On default single-issuer rings, call Cancel / Unsubscribe from the ring owner or otherwise serialize them with submit, Wait, WaitDirect, WaitExtended, Stop, and ResizeRings. On MultiIssuers rings, the shared-submit path serializes their cancel SQEs.

Per-connection state with typed contexts

Extended contexts carry per-connection references through the submit → complete round-trip without a global lookup table:

type ConnState struct {
    Addr    netip.AddrPort
    Created int64
}

ext := ring.ExtSQE()
ctx := uring.Ctx1V1Of[ConnState](ext)
ctx.Ref1 = connState
ctx.Val1 = sequenceNumber

sqeCtx := uring.PackExtended(ext)
if err := ring.Send(sqeCtx, &fd, payload); err != nil {
    ring.PutExtSQE(ext)
    return err
}

At completion time, recover the state through the same typed view:

ext := cqe.ExtSQE()
ctx := uring.Ctx1V1Of[ConnState](ext)
conn := ctx.Ref1
seq := ctx.Val1
ring.PutExtSQE(ext)

Keep live Go pointer roots reachable outside UserData. The GC does not trace those raw bytes. The sidecar root set attached to each ExtSQE slot handles this for internal multishot and listener protocols, but caller-side runtime code that places typed refs must keep them reachable independently.
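The reachability rule above can be sketched with a caller-owned root registry: only a scalar token needs to travel through user_data, while the registry keeps the *ConnState reachable for the GC until completion. The rootSet type and its method names here are illustrative, not part of the package.

```go
package main

import "fmt"

type ConnState struct {
	Addr string
	Seq  uint64
}

// rootSet pins per-connection state by token so the GC always sees a
// live reference, regardless of what bytes end up inside UserData.
type rootSet struct {
	next  uint64
	conns map[uint64]*ConnState
}

func newRootSet() *rootSet { return &rootSet{conns: map[uint64]*ConnState{}} }

// pin stores the state and returns the scalar token to submit.
func (r *rootSet) pin(c *ConnState) uint64 {
	r.next++
	r.conns[r.next] = c
	return r.next
}

// resolve looks the state back up at completion time and unpins it.
func (r *rootSet) resolve(token uint64) *ConnState {
	c := r.conns[token]
	delete(r.conns, token)
	return c
}

func main() {
	roots := newRootSet()
	tok := roots.pin(&ConnState{Addr: "127.0.0.1:9000", Seq: 7})
	// ... tok rides through user_data; at completion:
	c := roots.resolve(tok)
	fmt.Println(c.Addr, c.Seq, len(roots.conns))
}
```

With typed extended contexts the Ref fields can carry the pointer directly, but the registry pattern is what keeps the root reachable when only raw bytes cross the boundary.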

Deadline composition

LinkTimeout attaches a deadline to the preceding SQE through an IOSQE_IO_LINK chain. The operation and the timeout race: exactly one completes, and the other is cancelled.

recvCtx := uring.ForFD(fd).
    WithOp(uring.IORING_OP_RECV).
    WithBufGroup(group)

if err := ring.Receive(recvCtx, &fd, nil, uring.WithFlags(uring.IOSQE_IO_LINK)); err != nil {
    return err
}

timeoutCtx := uring.PackDirect(uring.IORING_OP_LINK_TIMEOUT, 0, 0, 0)
if err := ring.LinkTimeout(timeoutCtx, 5*time.Second); err != nil {
    return err
}

The caller-side runtime handles both outcomes: a successful receive cancels the timeout, and a fired timeout cancels the receive. Both produce CQEs that the dispatch loop must observe.
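The two-CQE outcome can be reduced in caller code roughly as below. This is a model of the dispatch decision, not the package API: the cqe struct, the reduce helper, and the errno constants (-ECANCELED, -ETIME) are stand-ins chosen for the sketch.

```go
package main

import "fmt"

// Linux errno values for the two race outcomes: the loser of an
// IOSQE_IO_LINK chain completes with -ECANCELED, and a fired
// linked timeout completes with -ETIME.
const (
	eCANCELED = -125
	eTIME     = -62
)

// cqe is an illustrative stand-in for the two completions of an
// op + LinkTimeout chain, not the package's CQEView.
type cqe struct {
	op  string // "recv" or "link_timeout"
	res int32
}

// reduce folds both completions into one caller-visible result.
// Exactly one side succeeds; the other reports cancellation.
func reduce(a, b cqe) string {
	for _, c := range []cqe{a, b} {
		if c.op == "recv" {
			if c.res >= 0 {
				return fmt.Sprintf("received %d bytes", c.res)
			}
			if c.res == eCANCELED {
				return "deadline exceeded"
			}
			return fmt.Sprintf("recv failed: errno %d", -c.res)
		}
	}
	return "no recv completion observed"
}

func main() {
	// Success path: recv completes, the linked timeout is cancelled.
	fmt.Println(reduce(cqe{"recv", 512}, cqe{"link_timeout", eCANCELED}))
	// Deadline path: the timeout fires and cancels the recv.
	fmt.Println(reduce(cqe{"recv", eCANCELED}, cqe{"link_timeout", eTIME}))
}
```

Either way both CQEs must pass through the dispatch loop; reducing on only the first observed completion would leak the pending half of the chain.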

TCP usage patterns

These are the shortest flows, meant to be read alongside the tests:

Scenario Main APIs Reference
Echo server ListenerManager, AcceptMultishot, ReceiveMultishot, Send listener_example_test.go, examples/multishot_test.go, examples/echo_test.go
Client TCP4Socket, Connect, Send, Receive socket_integration_linux_test.go
TCP echo server

ListenerManager prepares the socket → bind → listen chain for you. The listener handler's bool-return callbacks are control-flow hooks: true advances to the next setup phase, false aborts before it. Once the listener is live, start multishot accept and multishot receive on the connection FDs.

pool := uring.NewContextPools(32)
manager := uring.NewListenerManager(ring, pool)

listenerOp, err := manager.ListenTCP4(addr, 128, listenerHandler)
if err != nil {
    return err
}

acceptSub, err := listenerOp.AcceptMultishot(acceptHandler)
if err != nil {
    return err
}
defer acceptSub.Cancel()

recvCtx := uring.ForFD(clientFD).WithBufGroup(readGroup)
recvSub, err := ring.ReceiveMultishot(recvCtx, recvHandler)
if err != nil {
    return err
}
defer recvSub.Cancel()

listener_example_test.go covers listener setup with multishot accept, examples/multishot_test.go covers handler-side multishot receive CQEs, and examples/echo_test.go covers the full loopback echo flow.

TCP client

Create a socket, wait for the IORING_OP_SOCKET completion, then wrap the returned FD in an iofd.FD for Connect, Send, and Receive.

clientCtx := uring.PackDirect(uring.IORING_OP_SOCKET, 0, 0, 0)
if err := ring.TCP4Socket(clientCtx); err != nil {
    return err
}

clientFD := iofd.NewFD(int(socketCQE.Res))

connectCtx := uring.PackDirect(uring.IORING_OP_CONNECT, 0, 0, int32(clientFD))
if err := ring.Connect(connectCtx, remoteAddr); err != nil {
    return err
}

sendCtx := uring.PackDirect(uring.IORING_OP_SEND, 0, 0, int32(clientFD))
if err := ring.Send(sendCtx, &clientFD, payload); err != nil {
    return err
}

recvCtx := uring.PackDirect(uring.IORING_OP_RECV, 0, 0, int32(clientFD))
if err := ring.Receive(recvCtx, &clientFD, buf); err != nil {
    return err
}

After each submit, reuse the Wait loop from the ring lifecycle section to observe the matching completion. socket_integration_linux_test.go at the package level covers the connect/send cycle.

Zero-copy receive (ZCRX)

ZCRXReceiver drives zero-copy receive from a NIC hardware RX queue through io_uring.

NewZCRXReceiver is wired for rings with 32-byte CQEs (IORING_SETUP_CQE32). The current Options surface does not expose that setup flag, so rings created through the standard New path cause this constructor to return ErrNotSupported. Until a CQE32 setup path is exposed, this section documents the receiver boundary contract rather than a runnable public setup recipe.

Lifecycle
  1. With a CQE32-enabled ring, create the receiver with NewZCRXReceiver. The constructor registers the ZCRX interface queue, maps the refill area, and prepares the refill ring.
  2. Call Start to submit the extended RECV_ZC operation.
  3. On the CQE dispatch path, ZCRX completions route to the ZCRXHandler:
    • OnData delivers a ZCRXBuffer pointing into the NIC-mapped area. Call Release when done to return the slot to the kernel. Return false to request a best-effort stop.
    • OnError delivers CQE errors. Return false to request a best-effort stop.
    • OnStopped fires once during terminal retirement, before the state reaches Stopped.
  4. Call Stop to submit an async cancel. The receiver transitions through Stopping → Retiring → Stopped.
  5. Poll Stopped until it returns true, stop the owning ring, then call Close to release the mapped area and the refill-ring mapping.
State machine
Idle → Active → Stopping → Retiring → Stopped

Stop reverts to Active if cancel submission fails. Close is idempotent.
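The lifecycle rules above can be condensed into a small state-machine sketch. The receiver type and method names here are illustrative stand-ins for the documented ZCRXReceiver contract, not its actual implementation.

```go
package main

import "fmt"

// States mirror the documented ZCRX lifecycle.
type zcrxState int

const (
	idle zcrxState = iota
	active
	stopping
	retiring
	stopped
)

type receiver struct {
	state  zcrxState
	closed bool
}

func (r *receiver) Start() { r.state = active }

// Stop submits the async cancel; per the contract it reverts to
// Active when cancel submission fails.
func (r *receiver) Stop(cancelSubmitted bool) {
	r.state = stopping
	if !cancelSubmitted {
		r.state = active
	}
}

// onTerminalCQE advances Stopping → Retiring → Stopped.
func (r *receiver) onTerminalCQE() {
	if r.state == stopping {
		r.state = retiring
		// OnStopped would fire here, before Stopped is reached.
		r.state = stopped
	}
}

func (r *receiver) Stopped() bool { return r.state == stopped }

// Close is idempotent: repeated calls are no-ops.
func (r *receiver) Close() { r.closed = true }

func main() {
	r := &receiver{}
	r.Start()
	r.Stop(false) // cancel submission failed: back to Active
	fmt.Println(r.state == active)
	r.Stop(true)
	r.onTerminalCQE()
	fmt.Println(r.Stopped())
	r.Close()
	r.Close() // safe: Close is idempotent
	fmt.Println(r.closed)
}
```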

Handler contract
  • OnData and OnError are called serially from the CQE dispatch goroutine.
  • Release is single-producer; call it only from the dispatch goroutine.
  • Stop must not race with CQE dispatch. The caller is responsible for this serialization.

Examples

The example tests in uring/examples/ show the API in practice.

  • multishot_test.go, multishot accept, multishot receive, and subscription stop behavior
  • file_io_test.go, basic file reads, writes, and batching
  • fixed_buffers_test.go, registered buffers and fixed-buffer I/O
  • vectored_io_test.go, vectored read and write operations
  • splice_tee_test.go, splice and tee zero-copy data transfer
  • zerocopy_test.go, zero-copy send paths and completion tracking
  • poll_test.go, poll-based readiness workflows
  • buffer_ring_test.go, buffer ring provisioning and multi-size buffer groups
  • context_test.go, direct, indirect, and extended SQEContext flows plus CQEView access
  • echo_test.go, TCP echo server and UDP ping-pong flows
  • timeout_linux_test.go, timeout and linked-timeout operations

The package-level listener_example_test.go covers listener creation with multishot accept, and socket_integration_linux_test.go covers the TCP client connect/send flow.

Operational notes

  • Enable NotifySucceed when you need a visible CQE for every successful operation.
  • ring.Features reports actual SQ/CQ entry counts, SQE slot width, and the byte order used to interpret user_data.
  • Leave MultiIssuers unset for the default single-issuer configuration (SINGLE_ISSUER + DEFER_TASKRUN) when a single execution path serializes submit-state operations (submit, Wait, WaitDirect, WaitExtended, Stop, and ResizeRings). Set it only when multiple goroutines need concurrent submission or concurrent calls to Wait, WaitDirect, or WaitExtended; this switches the ring to the shared-submit COOP_TASKRUN configuration.
  • EpollWait requires timeout to remain 0; use LinkTimeout when you need a deadline.
  • Release or discard borrowed completion views and pooled contexts promptly.
  • ListenerOp.Close closes the listener FD immediately. If a setup CQE is still pending, drain it first, then call Close again to return the borrowed ExtSQE to the pool.

Platform support

uring targets Go 1.26+ and Linux 6.18+ on the real kernel-backed path. Most source files and example tests carry a //go:build linux guard. Darwin files provide compile stubs for the shared surface only; Linux-only capabilities remain Linux-only and do not change the Linux runtime baseline.

License

MIT, see LICENSE.

©2026 Hayabusa Cloud Co., Ltd.

Documentation

Overview

Package uring provides the kernel-boundary `io_uring` surface for Linux 6.18+. `uring` assumes the 6.18+ baseline and carries no fallback branches for older kernels. Its core Linux `io_uring` implementation was refactored from `code.hybscloud.com/sox` into this dedicated package. It prepares SQEs, decodes CQEs, transports submission context through `user_data`, and exposes kernel-boundary facts. Dispatch, retry, completion correlation, and connection/session orchestration stay in caller-side runtime code above this boundary.

A typical caller starts the ring, submits an operation, and then treats the completion queue as the source of truth for kernel results. Semantic no-progress conditions such as iox.ErrWouldBlock are classified through iox.Classify instead of being treated as ordinary failures.

ring, err := uring.New(func(opt *uring.Options) {
    opt.Entries = uring.EntriesMedium
})
if err != nil {
    return err
}
if err := ring.Start(); err != nil {
    return err
}
defer ring.Stop()

fd := iofd.NewFD(int(file.Fd()))
buf := make([]byte, 4096)
ctx := uring.PackDirect(uring.IORING_OP_READ, 0, 0, 0).WithFD(fd)
if err := ring.Read(ctx, buf); err != nil {
    return err
}

cqes := make([]uring.CQEView, 64)
var backoff iox.Backoff

for {
    n, err := ring.Wait(cqes)
    switch iox.Classify(err) {
    case iox.OutcomeWouldBlock:
        backoff.Wait()
        continue
    case iox.OutcomeFailure:
        return err
    }
    if n == 0 {
        backoff.Wait()
        continue
    }

    backoff.Reset()
    for i := range n {
        cqe := cqes[i]
        if cqe.Op() != uring.IORING_OP_READ || cqe.FD() != fd {
            continue
        }
        if err := cqe.Err(); err != nil {
            return fmt.Errorf("uring read failed: %w", err)
        }
        handle(buf[:int(cqe.Res)])
        return nil
    }
}

Uring.SubmitAcceptMultishot, Uring.SubmitReceiveMultishot, and Uring.SubmitReceiveBundleMultishot submit raw multishot SQEs and keep the kernel-boundary flow explicit. Uring.AcceptMultishot and Uring.ReceiveMultishot use the same kernel path and return a MultishotSubscription for caller-owned callback dispatch in the same serialized completion loop. If caller code keeps copied CQEs beyond that loop, it must keep its own route state and reject observations for retired subscriptions.

sqeCtx := uring.ForFD(listenerFD)
sub, err := ring.AcceptMultishot(sqeCtx, handler)
if err != nil {
    return err
}

// Process CQEs in the same serialized completion loop.
for i := range n {
    if sub.HandleCQE(cqes[i]) {
        continue
    }
    dispatch(ring, cqes[i])
}

// Cancel when done
// On single-issuer rings, call Cancel from the ring owner or otherwise
// serialize it with submit, Wait, WaitDirect, WaitExtended, Stop, and ResizeRings.
if err := sub.Cancel(); err != nil {
    return err
}

Listener setup advances with DecodeListenerCQE, PrepareListenerBind, PrepareListenerListen, and SetListenerReady. ListenerManager is a thin convenience for the initial SOCKET submission and returns a ListenerOp. If ListenerOp.Close races a pending listener setup CQE, drain that CQE before the final Close that returns the pooled listener context.

pool := uring.NewContextPools(16)
manager := uring.NewListenerManager(ring, pool)

addr := &net.TCPAddr{IP: net.IPv4(127, 0, 0, 1), Port: 8080}
op, err := manager.ListenTCP4(addr, 128, handler)

// Caller decodes CQEs and chains bind→listen via Prepare helpers
// After LISTEN completes, start accepting:
acceptSub, err := op.AcceptMultishot(acceptHandler)

Extended-mode raw `UserData` is caller-beware storage. Prefer scalar payloads there; if raw overlays or typed context views place Go pointers, interfaces, func values, maps, slices, strings, chans, or structs containing them in those bytes, caller code must keep the live roots outside `UserData`.

SQEContext packs submission metadata into `user_data`.

Direct mode layout (inline context, zero allocation):

┌─────────┬─────────┬──────────────┬────────────────────────────┬────┐
│ Op (8b) │Flags(8b)│ BufGrp (16b) │        FD (30b)            │Mode│
└─────────┴─────────┴──────────────┴────────────────────────────┴────┘
  Bits 0-7  Bits 8-15  Bits 16-31     Bits 32-61              Bits 62-63

Mode bits (62-63): 00=Direct, 01=Indirect (64B ptr), 10=Extended (128B ptr)
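The diagram above can be expressed as plain bit arithmetic. The helper below is illustrative only: packDirect and unpack follow the documented field widths and shifts, not the package's actual implementation, and the 0x1b opcode is a hypothetical value.

```go
package main

import "fmt"

// Direct-mode layout per the diagram: op bits 0-7, flags 8-15,
// buffer group 16-31, fd 32-61, mode 62-63 (00 = Direct).
func packDirect(op, flags uint8, bufGroup uint16, fd int32) uint64 {
	return uint64(op) |
		uint64(flags)<<8 |
		uint64(bufGroup)<<16 |
		uint64(uint32(fd)&0x3FFFFFFF)<<32 // FD occupies 30 bits
}

func unpack(v uint64) (op, flags uint8, bufGroup uint16, fd int32, mode uint8) {
	op = uint8(v)
	flags = uint8(v >> 8)
	bufGroup = uint16(v >> 16)
	fd = int32((v >> 32) & 0x3FFFFFFF)
	mode = uint8(v >> 62) // 00 for Direct
	return
}

func main() {
	v := packDirect(0x1b /* hypothetical opcode */, 0, 7, 42)
	op, flags, grp, fd, mode := unpack(v)
	fmt.Println(op, flags, grp, fd, mode) // 27 0 7 42 0
}
```

Because the mode bits stay 00, a Direct context is a pure inline value: no allocation, and the completion side recovers every field with shifts and masks.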

Pack context for submission:

ctx := uring.PackDirect(
    uring.IORING_OP_RECV,   // Op: operation type
    0,                      // Flags: SQE flags
    bufferGroupID,          // BufGroup: for buffer selection
    clientFD,               // FD: target file descriptor
)

If `IOSQE_FIXED_FILE` is set, the FD field stores the registered file index instead of a raw file descriptor.

Or use the fluent builder:

ctx := uring.ForFD(clientFD).WithOp(uring.IORING_OP_RECV).WithBufGroup(groupID)

Handler Patterns

Handler helpers provide convenience step/action adapters. They do not change the underlying CQE facts and are optional.

Subscriber pattern (functional callbacks):

handler := uring.NewMultishotSubscriber().
    OnStep(func(step uring.MultishotStep) uring.MultishotAction {
        if step.Err == nil {
            return uring.MultishotContinue
        }
        return uring.MultishotStop
    }).
    OnStop(func(err error, cancelled bool) {
        log.Println("stopped", err, cancelled)
    })

Noop embedding pattern (override only needed methods):

type myHandler struct {
    uring.NoopMultishotHandler
    connections int
}

func (h *myHandler) OnMultishotStep(step uring.MultishotStep) uring.MultishotAction {
    if step.Err == nil && step.CQE.Res >= 0 {
        h.connections++
        return uring.MultishotContinue
    }
    return h.NoopMultishotHandler.OnMultishotStep(step)
}

Handlers either return `MultishotContinue` to keep a live subscription, or `MultishotStop` to request cancellation after the current step. The request is local until the cancel SQE is successfully enqueued.

Multishot Subscription Lifecycle

A live subscription keeps its submitted ExtSQE until the terminal CQE for that operation is handled. The kernel may emit multiple CQEs for the same submission before that terminal CQE; each non-terminal CQE carries `IORING_CQE_F_MORE`. The package keeps the submitted ExtSQE and encoded `user_data` associated with the subscription, and returns the ExtSQE to its pool only after handling a CQE without `HasMore()`. Caller-side runtime code should treat intermediate CQEs as progress for the live subscription and the final CQE as the point where the subscription can be retired.
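The retirement rule can be sketched with mock types. IORING_CQE_F_MORE is bit 1 in the kernel uapi; the mockCQE and subscription types below are illustrative stand-ins for the package's CQEView and subscription state, showing only when the ExtSQE would be returned to its pool.

```go
package main

import "fmt"

// IORING_CQE_F_MORE marks a non-terminal multishot CQE (uapi: 1 << 1).
const cqeFMore = 1 << 1

type mockCQE struct{ flags uint32 }

func (c mockCQE) HasMore() bool { return c.flags&cqeFMore != 0 }

// subscription models the ExtSQE retirement rule: the slot is held
// while CQEs carry F_MORE and released on the first one without it.
type subscription struct{ live bool }

func (s *subscription) handle(c mockCQE) {
	if !c.HasMore() {
		s.live = false // terminal CQE: return the ExtSQE to its pool
	}
}

func main() {
	sub := &subscription{live: true}
	stream := []mockCQE{{cqeFMore}, {cqeFMore}, {0}}
	for i, c := range stream {
		sub.handle(c)
		fmt.Printf("cqe %d: more=%v live=%v\n", i, c.HasMore(), sub.live)
	}
}
```

Intermediate CQEs are progress for a live subscription; only the CQE without HasMore() retires it, which is why route state must outlive any copied CQEs the caller holds.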

Runtime Boundary

uring is a kernel boundary, not a scheduler. It owns SQE encoding, CQE observation, `user_data` identity, capability exposure, and kernel-facing lifetimes. Runtime policy, connection routing, batching, retries, parking, and terminal resource release belong above this package.

Buffer Groups

Buffer groups enable kernel-side buffer selection for receive operations. The kernel picks an available buffer from the group at completion time; userspace does not select or assign buffers per receive. The package exposes three practical buffer-management paths: registered fixed buffers for fixed-buffer file I/O, provided buffers selected by the kernel, and bundle receives over contiguous ranges of provided buffers.

opts := uring.OptionsForBudget(256 * uring.MiB)
ring, err := uring.New(func(opt *uring.Options) {
    *opt = opts
})
if err != nil {
    return err
}

cfg, scale := uring.BufferConfigForBudget(256 * uring.MiB)
fmt.Printf("buffer tiers=%+v scale=%d\n", cfg, scale)

A registered fixed buffer is ring-owned memory addressed by index. Keep the buffer live until the fixed operation completes.

fd := iofd.NewFD(int(file.Fd()))
buf := ring.RegisteredBuffer(0)
copy(buf, payload)

writeCtx := uring.PackDirect(uring.IORING_OP_WRITE_FIXED, 0, 0, 0).WithFD(fd)
if err := ring.WriteFixed(writeCtx, 0, len(payload)); err != nil {
    return err
}

For socket receive with kernel buffer selection, pass nil as the receive buffer and request the desired read-buffer size class. The matching CQE reports which buffer group and buffer ID were consumed.

recvCtx := uring.PackDirect(uring.IORING_OP_RECV, 0, 0, 0)
if err := ring.Receive(recvCtx, &socketFD, nil, uring.WithReadBufferSize(uring.BufferSizeSmall)); err != nil {
    return err
}

if cqe.HasBuffer() {
    fmt.Printf("kernel selected group=%d id=%d\n", cqe.BufGroup(), cqe.BufID())
}

Bundle receives may consume more than one provided buffer in one CQE. Process the iterator and then recycle the consumed slots.

if err := ring.ReceiveBundle(recvCtx, &socketFD, uring.WithReadBufferSize(uring.BufferSizeSmall)); err != nil {
    return err
}

if it, ok := ring.BundleIterator(cqe, cqe.BufGroup()); ok {
    for buf := range it.All() {
        handle(buf)
    }
    it.Recycle(ring)
}

Supported Operations

Operation families covered: socket creation, socket operations, file operations, control operations, registration, ring management, and capability queries.

Zero-copy receive (ZCRX):

  • Uring.QueryZCRX, Uring.RegisterZCRXIfq
  • NewZCRXReceiver is wired for rings created with 32-byte CQEs. The current Options surface does not expose `IORING_SETUP_CQE32`, so rings created through the standard New path return ErrNotSupported from this constructor. Until a CQE32 setup path is exposed, the receiver docs describe the boundary contract rather than a runnable public setup recipe.

Performance

The hot submit and reap paths are designed to remain zero-allocation. See the benchmark tests for current machine-specific numbers.

Ring Setup

Create and start an io_uring instance:

ring, err := uring.New(func(opt *uring.Options) {
    opt.Entries = uring.EntriesMedium // 2048 entries
})
if err != nil {
    return err
}
if err := ring.Start(); err != nil {
    return err
}

Memory Barriers

The package uses dwcas.BarrierAcquire and dwcas.BarrierRelease for SQ/CQ ring synchronization. On amd64 (TSO), these are compiler barriers. On arm64, they emit DMB ISHLD/ISHST instructions. User code does not manage these barriers.

Dependencies

Index

Constants

View Source
const (
	KiB = 1 << 10
	MiB = 1 << 20
	GiB = 1 << 30
)

Memory size constants for budget specification.

View Source
const (
	MachineMemory512MB = 512 * MiB
	MachineMemory1GB   = 1 * GiB
	MachineMemory2GB   = 2 * GiB
	MachineMemory4GB   = 4 * GiB
	MachineMemory8GB   = 8 * GiB
	MachineMemory16GB  = 16 * GiB
	MachineMemory32GB  = 32 * GiB
	MachineMemory64GB  = 64 * GiB
	MachineMemory96GB  = 96 * GiB
	MachineMemory128GB = 128 * GiB
)

Common machine memory sizes for OptionsForSystem.

View Source
const (
	BufferSizePico   = iobuf.BufferSizePico   // 32 B
	BufferSizeNano   = iobuf.BufferSizeNano   // 128 B
	BufferSizeMicro  = iobuf.BufferSizeMicro  // 512 B
	BufferSizeSmall  = iobuf.BufferSizeSmall  // 2 KiB
	BufferSizeMedium = iobuf.BufferSizeMedium // 8 KiB
	BufferSizeBig    = iobuf.BufferSizeBig    // 32 KiB
	BufferSizeLarge  = iobuf.BufferSizeLarge  // 128 KiB
	BufferSizeGreat  = iobuf.BufferSizeGreat  // 512 KiB
	BufferSizeHuge   = iobuf.BufferSizeHuge   // 2 MiB
	BufferSizeVast   = iobuf.BufferSizeVast   // 8 MiB
	BufferSizeGiant  = iobuf.BufferSizeGiant  // 32 MiB
	BufferSizeTitan  = iobuf.BufferSizeTitan  // 128 MiB
)

Buffer size constants re-exported from iobuf for API compatibility. These follow a power-of-4 progression starting at 32 bytes.

View Source
const (
	EPERM           = uintptr(zcall.EPERM)
	EINTR           = uintptr(zcall.EINTR)
	EAGAIN          = uintptr(zcall.EAGAIN)
	EWOULDBLOCK     = EAGAIN
	ENOMEM          = uintptr(zcall.ENOMEM)
	EACCES          = uintptr(zcall.EACCES)
	EFAULT          = uintptr(zcall.EFAULT)
	EBUSY           = uintptr(zcall.EBUSY)
	EEXIST          = uintptr(zcall.EEXIST)
	ENAMETOOLONG    = uintptr(zcall.ENAMETOOLONG)
	ENODEV          = uintptr(zcall.ENODEV)
	EINVAL          = uintptr(zcall.EINVAL)
	EPIPE           = uintptr(zcall.EPIPE)
	EMFILE          = uintptr(zcall.EMFILE)
	ENFILE          = uintptr(zcall.ENFILE)
	ENOSYS          = uintptr(zcall.ENOSYS)
	ENOTSUP         = uintptr(zcall.ENOTSUP)
	EINPROGRESS     = uintptr(zcall.EINPROGRESS)
	EALREADY        = uintptr(zcall.EALREADY)
	ENOTSOCK        = uintptr(zcall.ENOTSOCK)
	EDESTADDRREQ    = uintptr(zcall.EDESTADDRREQ)
	EMSGSIZE        = uintptr(zcall.EMSGSIZE)
	EPROTOTYPE      = uintptr(zcall.EPROTOTYPE)
	ENOPROTOOPT     = uintptr(zcall.ENOPROTOOPT)
	EPROTONOSUPPORT = uintptr(zcall.EPROTONOSUPPORT)
	EOPNOTSUPP      = uintptr(zcall.EOPNOTSUPP)
	EAFNOSUPPORT    = uintptr(zcall.EAFNOSUPPORT)
	EADDRINUSE      = uintptr(zcall.EADDRINUSE)
	EADDRNOTAVAIL   = uintptr(zcall.EADDRNOTAVAIL)
	ENETDOWN        = uintptr(zcall.ENETDOWN)
	ENETUNREACH     = uintptr(zcall.ENETUNREACH)
	ENETRESET       = uintptr(zcall.ENETRESET)
	ECONNABORTED    = uintptr(zcall.ECONNABORTED)
	ECONNRESET      = uintptr(zcall.ECONNRESET)
	ENOBUFS         = uintptr(zcall.ENOBUFS)
	EISCONN         = uintptr(zcall.EISCONN)
	ENOTCONN        = uintptr(zcall.ENOTCONN)
	ESHUTDOWN       = uintptr(zcall.ESHUTDOWN)
	ETIMEDOUT       = uintptr(zcall.ETIMEDOUT)
	ECONNREFUSED    = uintptr(zcall.ECONNREFUSED)
	EHOSTDOWN       = uintptr(zcall.EHOSTDOWN)
	EHOSTUNREACH    = uintptr(zcall.EHOSTUNREACH)
	ECANCELED       = uintptr(zcall.ECANCELED)
)

Errno constants aliased from zcall for architecture-safe error handling.

View Source
const (
	EntriesPico   = 1 << 3  // 8 entries
	EntriesNano   = 1 << 5  // 32 entries
	EntriesMicro  = 1 << 7  // 128 entries
	EntriesSmall  = 1 << 9  // 512 entries
	EntriesMedium = 1 << 11 // 2048 entries
	EntriesLarge  = 1 << 13 // 8192 entries
	EntriesHuge   = 1 << 15 // 32768 entries
)

Uring entry count constants define submission queue sizes. The values scale by powers of four: 8, 32, 128, 512, 2048, 8192, and 32768.

View Source
const (
	IORING_SETUP_IOPOLL             = zcall.IORING_SETUP_IOPOLL
	IORING_SETUP_SQPOLL             = zcall.IORING_SETUP_SQPOLL
	IORING_SETUP_SQ_AFF             = zcall.IORING_SETUP_SQ_AFF
	IORING_SETUP_CQSIZE             = zcall.IORING_SETUP_CQSIZE
	IORING_SETUP_CLAMP              = zcall.IORING_SETUP_CLAMP
	IORING_SETUP_ATTACH_WQ          = zcall.IORING_SETUP_ATTACH_WQ
	IORING_SETUP_R_DISABLED         = zcall.IORING_SETUP_R_DISABLED
	IORING_SETUP_SUBMIT_ALL         = zcall.IORING_SETUP_SUBMIT_ALL
	IORING_SETUP_COOP_TASKRUN       = zcall.IORING_SETUP_COOP_TASKRUN
	IORING_SETUP_TASKRUN_FLAG       = zcall.IORING_SETUP_TASKRUN_FLAG
	IORING_SETUP_SQE128             = zcall.IORING_SETUP_SQE128
	IORING_SETUP_CQE32              = zcall.IORING_SETUP_CQE32
	IORING_SETUP_SINGLE_ISSUER      = zcall.IORING_SETUP_SINGLE_ISSUER
	IORING_SETUP_DEFER_TASKRUN      = zcall.IORING_SETUP_DEFER_TASKRUN
	IORING_SETUP_NO_MMAP            = zcall.IORING_SETUP_NO_MMAP
	IORING_SETUP_REGISTERED_FD_ONLY = zcall.IORING_SETUP_REGISTERED_FD_ONLY
	IORING_SETUP_NO_SQARRAY         = zcall.IORING_SETUP_NO_SQARRAY
	IORING_SETUP_HYBRID_IOPOLL      = zcall.IORING_SETUP_HYBRID_IOPOLL
	IORING_SETUP_CQE_MIXED          = zcall.IORING_SETUP_CQE_MIXED // Allow both 16b and 32b CQEs
	IORING_SETUP_SQE_MIXED          = zcall.IORING_SETUP_SQE_MIXED // Allow both 64b and 128b SQEs
)
View Source
const (
	IORING_ENTER_GETEVENTS       = zcall.IORING_ENTER_GETEVENTS
	IORING_ENTER_SQ_WAKEUP       = zcall.IORING_ENTER_SQ_WAKEUP
	IORING_ENTER_SQ_WAIT         = zcall.IORING_ENTER_SQ_WAIT
	IORING_ENTER_EXT_ARG         = zcall.IORING_ENTER_EXT_ARG
	IORING_ENTER_REGISTERED_RING = zcall.IORING_ENTER_REGISTERED_RING
	IORING_ENTER_ABS_TIMER       = zcall.IORING_ENTER_ABS_TIMER   // Absolute timeout
	IORING_ENTER_EXT_ARG_REG     = zcall.IORING_ENTER_EXT_ARG_REG // Use registered wait region
	IORING_ENTER_NO_IOWAIT       = zcall.IORING_ENTER_NO_IOWAIT   // Skip I/O wait
)
View Source
const (
	IORING_OFF_SQ_RING    int64 = 0
	IORING_OFF_CQ_RING    int64 = 0x8000000
	IORING_OFF_SQES       int64 = 0x10000000
	IORING_OFF_PBUF_RING        = 0x80000000
	IORING_OFF_PBUF_SHIFT       = 16
	IORING_OFF_MMAP_MASK        = 0xf8000000
)
View Source
const (
	IORING_SQ_NEED_WAKEUP = 1 << iota
	IORING_SQ_CQ_OVERFLOW
	IORING_SQ_TASKRUN
)
View Source
const (
	IOSQE_FIXED_FILE       = zcall.IOSQE_FIXED_FILE
	IOSQE_IO_DRAIN         = zcall.IOSQE_IO_DRAIN
	IOSQE_IO_LINK          = zcall.IOSQE_IO_LINK
	IOSQE_IO_HARDLINK      = zcall.IOSQE_IO_HARDLINK
	IOSQE_ASYNC            = zcall.IOSQE_ASYNC
	IOSQE_BUFFER_SELECT    = zcall.IOSQE_BUFFER_SELECT
	IOSQE_CQE_SKIP_SUCCESS = zcall.IOSQE_CQE_SKIP_SUCCESS
)
View Source
const (
	IORING_POLL_ADD_MULTI = 1 << iota
	IORING_POLL_UPDATE_EVENTS
	IORING_POLL_UPDATE_USER_DATA
	IORING_POLL_ADD_LEVEL
)
View Source
const (
	IORING_ASYNC_CANCEL_ALL = 1 << iota
	IORING_ASYNC_CANCEL_FD
	IORING_ASYNC_CANCEL_ANY
	IORING_ASYNC_CANCEL_FD_FIXED
	IORING_ASYNC_CANCEL_USERDATA
	IORING_ASYNC_CANCEL_OP
)
View Source
const (
	IORING_CQE_F_BUFFER        = 1 << 0
	IORING_CQE_F_MORE          = 1 << 1
	IORING_CQE_F_SOCK_NONEMPTY = 1 << 2
	IORING_CQE_F_NOTIF         = 1 << 3
	IORING_CQE_F_BUF_MORE      = 1 << 4  // Buffer partially consumed (incremental mode)
	IORING_CQE_F_SKIP          = 1 << 5  // Skip CQE (gap filler for ring wrap)
	IORING_CQE_F_32            = 1 << 15 // 32-byte CQE in mixed mode
)
View Source
const (
	IORING_REGISTER_BUFFERS          = zcall.IORING_REGISTER_BUFFERS
	IORING_UNREGISTER_BUFFERS        = zcall.IORING_UNREGISTER_BUFFERS
	IORING_REGISTER_FILES            = zcall.IORING_REGISTER_FILES
	IORING_UNREGISTER_FILES          = zcall.IORING_UNREGISTER_FILES
	IORING_REGISTER_EVENTFD          = zcall.IORING_REGISTER_EVENTFD
	IORING_UNREGISTER_EVENTFD        = zcall.IORING_UNREGISTER_EVENTFD
	IORING_REGISTER_FILES_UPDATE     = zcall.IORING_REGISTER_FILES_UPDATE
	IORING_REGISTER_EVENTFD_ASYNC    = zcall.IORING_REGISTER_EVENTFD_ASYNC
	IORING_REGISTER_PROBE            = zcall.IORING_REGISTER_PROBE
	IORING_REGISTER_PERSONALITY      = zcall.IORING_REGISTER_PERSONALITY
	IORING_UNREGISTER_PERSONALITY    = zcall.IORING_UNREGISTER_PERSONALITY
	IORING_REGISTER_RESTRICTIONS     = zcall.IORING_REGISTER_RESTRICTIONS
	IORING_REGISTER_ENABLE_RINGS     = zcall.IORING_REGISTER_ENABLE_RINGS
	IORING_REGISTER_FILES2           = zcall.IORING_REGISTER_FILES2
	IORING_REGISTER_FILES_UPDATE2    = zcall.IORING_REGISTER_FILES_UPDATE2
	IORING_REGISTER_BUFFERS2         = zcall.IORING_REGISTER_BUFFERS2
	IORING_REGISTER_BUFFERS_UPDATE   = zcall.IORING_REGISTER_BUFFERS_UPDATE
	IORING_REGISTER_IOWQ_AFF         = zcall.IORING_REGISTER_IOWQ_AFF
	IORING_UNREGISTER_IOWQ_AFF       = zcall.IORING_UNREGISTER_IOWQ_AFF
	IORING_REGISTER_IOWQ_MAX_WORKERS = zcall.IORING_REGISTER_IOWQ_MAX_WORKERS
	IORING_REGISTER_RING_FDS         = zcall.IORING_REGISTER_RING_FDS
	IORING_UNREGISTER_RING_FDS       = zcall.IORING_UNREGISTER_RING_FDS
	IORING_REGISTER_PBUF_RING        = zcall.IORING_REGISTER_PBUF_RING
	IORING_UNREGISTER_PBUF_RING      = zcall.IORING_UNREGISTER_PBUF_RING
	IORING_REGISTER_SYNC_CANCEL      = zcall.IORING_REGISTER_SYNC_CANCEL
	IORING_REGISTER_FILE_ALLOC_RANGE = zcall.IORING_REGISTER_FILE_ALLOC_RANGE
	IORING_REGISTER_PBUF_STATUS      = zcall.IORING_REGISTER_PBUF_STATUS
	IORING_REGISTER_NAPI             = zcall.IORING_REGISTER_NAPI
	IORING_UNREGISTER_NAPI           = zcall.IORING_UNREGISTER_NAPI
	IORING_REGISTER_CLOCK            = zcall.IORING_REGISTER_CLOCK         // Register clock source
	IORING_REGISTER_CLONE_BUFFERS    = zcall.IORING_REGISTER_CLONE_BUFFERS // Clone buffers from another ring
	IORING_REGISTER_SEND_MSG_RING    = zcall.IORING_REGISTER_SEND_MSG_RING // Send MSG_RING without ring
	IORING_REGISTER_ZCRX_IFQ         = zcall.IORING_REGISTER_ZCRX_IFQ      // Register ZCRX interface queue
	IORING_REGISTER_RESIZE_RINGS     = zcall.IORING_REGISTER_RESIZE_RINGS  // Resize CQ ring
	IORING_REGISTER_MEM_REGION       = zcall.IORING_REGISTER_MEM_REGION    // Memory region setup (6.19+)
	IORING_REGISTER_QUERY            = zcall.IORING_REGISTER_QUERY         // Query ring state (6.19+)
	IORING_REGISTER_ZCRX_CTRL        = zcall.IORING_REGISTER_ZCRX_CTRL     // ZCRX control operations (6.19+)
)
View Source
const (
	FUTEX2_SIZE_U8  uint32 = 0x00 // 8-bit futex
	FUTEX2_SIZE_U16 uint32 = 0x01 // 16-bit futex
	FUTEX2_SIZE_U32 uint32 = 0x02 // 32-bit futex (most common)
	FUTEX2_SIZE_U64 uint32 = 0x03 // 64-bit futex
	FUTEX2_NUMA     uint32 = 0x04 // NUMA-aware futex
	FUTEX2_PRIVATE  uint32 = 128  // Private futex (process-local, faster)
)

Futex2 flags for FutexWait/FutexWake operations. These follow the futex2(2) interface, not the legacy futex(2) v1 flags.

View Source
const (
	// IORING_MSG_DATA sends data (result + userData) to target ring's CQ.
	IORING_MSG_DATA uint64 = 0

	// IORING_MSG_SEND_FD transfers a fixed file from source to target ring.
	IORING_MSG_SEND_FD uint64 = 1
)

MSG_RING command types for the addr field.

View Source
const (
	// IORING_MSG_RING_CQE_SKIP skips posting CQE to target ring.
	// The source ring still gets a completion.
	IORING_MSG_RING_CQE_SKIP uint32 = 1 << 0

	// IORING_MSG_RING_FLAGS_PASS passes the specified flags to target CQE.
	IORING_MSG_RING_FLAGS_PASS uint32 = 1 << 1
)

MSG_RING flags for MsgRing operations.

View Source
const (
	IOU_PBUF_RING_MMAP = 1 // Kernel allocates memory, app uses mmap
	IOU_PBUF_RING_INC  = 2 // Incremental buffer consumption mode
)

Buffer ring registration flags.

View Source
const (
	IORING_ZCRX_AREA_SHIFT = 48
	IORING_ZCRX_AREA_MASK  = ^((uint64(1) << IORING_ZCRX_AREA_SHIFT) - 1)
)

ZCRX area shift and mask for encoding area ID into offsets.

View Source
const (
	ZCRX_CTRL_FLUSH_RQ = 0 // Flush refill queue
	ZCRX_CTRL_EXPORT   = 1 // Export ZCRX state
)

ZCRX control operations.

View Source
const (
	IO_URING_QUERY_OPCODES = 0 // Query supported opcodes
	IO_URING_QUERY_ZCRX    = 1 // Query ZCRX capabilities
	IO_URING_QUERY_SCQ     = 2 // Query SQ/CQ ring info
)

Query operation types for IORING_REGISTER_QUERY.

View Source
const (
	IO_URING_NAPI_REGISTER_OP   = 0 // Register/unregister (backward compatible)
	IO_URING_NAPI_STATIC_ADD_ID = 1 // Add NAPI ID with static tracking
	IO_URING_NAPI_STATIC_DEL_ID = 2 // Delete NAPI ID with static tracking
)

NAPI operation types.

View Source
const (
	IO_URING_NAPI_TRACKING_DYNAMIC  = 0   // Dynamic tracking (default)
	IO_URING_NAPI_TRACKING_STATIC   = 1   // Static tracking
	IO_URING_NAPI_TRACKING_INACTIVE = 255 // Inactive/disabled
)

NAPI tracking strategies.

View Source
const (
	SOCKET_URING_OP_SIOCINQ      = 0 // Get input queue size
	SOCKET_URING_OP_SIOCOUTQ     = 1 // Get output queue size
	SOCKET_URING_OP_GETSOCKOPT   = 2 // Get socket option
	SOCKET_URING_OP_SETSOCKOPT   = 3 // Set socket option
	SOCKET_URING_OP_TX_TIMESTAMP = 4 // TX timestamp support
	SOCKET_URING_OP_GETSOCKNAME  = 5 // Get socket name
)

Socket uring command operations.

View Source
const (
	IORING_TIMESTAMP_HW_SHIFT   = 16                             // CQE flags bit shift for HW timestamp
	IORING_TIMESTAMP_TYPE_SHIFT = IORING_TIMESTAMP_HW_SHIFT + 1  // CQE flags bit shift for timestamp type
	IORING_CQE_F_TSTAMP_HW      = 1 << IORING_TIMESTAMP_HW_SHIFT // Hardware timestamp flag
)

Timestamp constants for SOCKET_URING_OP_TX_TIMESTAMP.

View Source
const (
	IORING_NOP_INJECT_RESULT = 1 << 0 // Inject result from sqe->result
	IORING_NOP_FILE          = 1 << 1 // NOP with file reference
	IORING_NOP_FIXED_FILE    = 1 << 2 // NOP with fixed file
	IORING_NOP_FIXED_BUFFER  = 1 << 3 // NOP with fixed buffer
	IORING_NOP_TW            = 1 << 4 // NOP via task work
	IORING_NOP_CQE32         = 1 << 5 // NOP produces 32-byte CQE
)

NOP operation flags for IORING_OP_NOP.

View Source
const (
	IORING_REGISTER_SRC_REGISTERED = 1 << 0 // Source ring is registered
	IORING_REGISTER_DST_REPLACE    = 1 << 1 // Replace destination buffers
)

Clone buffers registration flags.

View Source
const (
	IOPrioClassNone = 0
	IOPrioClassRT   = 1 // Real-time
	IOPrioClassBE   = 2 // Best-effort
	IOPrioClassIDLE = 3 // Idle
)

I/O priority class constants for WithIOPrioClass.

View Source
const (
	IORING_OP_NOP uint8 = iota
	IORING_OP_READV
	IORING_OP_WRITEV
	IORING_OP_FSYNC
	IORING_OP_READ_FIXED
	IORING_OP_WRITE_FIXED
	IORING_OP_POLL_ADD
	IORING_OP_POLL_REMOVE
	IORING_OP_SYNC_FILE_RANGE
	IORING_OP_SENDMSG
	IORING_OP_RECVMSG
	IORING_OP_TIMEOUT
	IORING_OP_TIMEOUT_REMOVE
	IORING_OP_ACCEPT
	IORING_OP_ASYNC_CANCEL
	IORING_OP_LINK_TIMEOUT
	IORING_OP_CONNECT
	IORING_OP_FALLOCATE
	IORING_OP_OPENAT
	IORING_OP_CLOSE
	IORING_OP_FILES_UPDATE
	IORING_OP_STATX
	IORING_OP_READ
	IORING_OP_WRITE
	IORING_OP_FADVISE
	IORING_OP_MADVISE
	IORING_OP_SEND
	IORING_OP_RECV
	IORING_OP_OPENAT2
	IORING_OP_EPOLL_CTL
	IORING_OP_SPLICE
	IORING_OP_PROVIDE_BUFFERS
	IORING_OP_REMOVE_BUFFERS
	IORING_OP_TEE
	IORING_OP_SHUTDOWN
	IORING_OP_RENAMEAT
	IORING_OP_UNLINKAT
	IORING_OP_MKDIRAT
	IORING_OP_SYMLINKAT
	IORING_OP_LINKAT
	IORING_OP_MSG_RING
	IORING_OP_FSETXATTR
	IORING_OP_SETXATTR
	IORING_OP_FGETXATTR
	IORING_OP_GETXATTR
	IORING_OP_SOCKET
	IORING_OP_URING_CMD
	IORING_OP_SEND_ZC
	IORING_OP_SENDMSG_ZC
	IORING_OP_READ_MULTISHOT
	IORING_OP_WAITID
	IORING_OP_FUTEX_WAIT
	IORING_OP_FUTEX_WAKE
	IORING_OP_FUTEX_WAITV
	IORING_OP_FIXED_FD_INSTALL
	IORING_OP_FTRUNCATE
	IORING_OP_BIND
	IORING_OP_LISTEN
	IORING_OP_RECV_ZC      // Zero-copy receive
	IORING_OP_EPOLL_WAIT   // Epoll wait
	IORING_OP_READV_FIXED  // Vectored read with fixed buffers
	IORING_OP_WRITEV_FIXED // Vectored write with fixed buffers
	IORING_OP_PIPE         // Create pipe
	IORING_OP_NOP128       // 128-byte NOP opcode
	IORING_OP_URING_CMD128 // 128-byte uring command opcode
)

IORING_OP_* values encode io_uring operation types in SQEs and SQEContext.

View Source
const (
	IORING_TIMEOUT_ABS = 1 << iota
	IORING_TIMEOUT_UPDATE
	IORING_TIMEOUT_BOOTTIME
	IORING_TIMEOUT_REALTIME
	IORING_LINK_TIMEOUT_UPDATE
	IORING_TIMEOUT_ETIME_SUCCESS
	IORING_TIMEOUT_MULTISHOT
	IORING_TIMEOUT_CLOCK_MASK  = IORING_TIMEOUT_BOOTTIME | IORING_TIMEOUT_REALTIME
	IORING_TIMEOUT_UPDATE_MASK = IORING_TIMEOUT_UPDATE | IORING_LINK_TIMEOUT_UPDATE
)

Timeout operation flags.

View Source
const (
	IORING_ACCEPT_MULTISHOT  = 1 << 0 // Multi-shot accept: one SQE, multiple completions
	IORING_ACCEPT_DONTWAIT   = 1 << 1 // Non-blocking accept
	IORING_ACCEPT_POLL_FIRST = 1 << 2 // Poll for connection before accepting
)

Accept operation flags.

View Source
const (
	IORING_RECVSEND_POLL_FIRST  = 1 << iota // Poll before send/recv
	IORING_RECV_MULTISHOT                   // Multi-shot receive
	IORING_RECVSEND_FIXED_BUF               // Use registered buffer
	IORING_SEND_ZC_REPORT_USAGE             // Report zero-copy usage
	IORING_RECVSEND_BUNDLE                  // Bundle mode
	IORING_SEND_VECTORIZED                  // Vectorized send
)

Send/receive operation flags.

View Source
const (
	DefaultBufferNumPico   = 1 << 15 // 32768 × 32 B = 1 MiB
	DefaultBufferNumNano   = 1 << 14 // 16384 × 128 B = 2 MiB
	DefaultBufferNumMicro  = 1 << 13 // 8192 × 512 B = 4 MiB
	DefaultBufferNumSmall  = 1 << 12 // 4096 × 2 KiB = 8 MiB
	DefaultBufferNumMedium = 1 << 11 // 2048 × 8 KiB = 16 MiB
	DefaultBufferNumBig    = 1 << 10 // 1024 × 32 KiB = 32 MiB
	DefaultBufferNumLarge  = 1 << 9  // 512 × 128 KiB = 64 MiB
	DefaultBufferNumGreat  = 1 << 8  // 256 × 512 KiB = 128 MiB
	DefaultBufferNumHuge   = 1 << 7  // 128 × 2 MiB = 256 MiB
	DefaultBufferNumVast   = 1 << 6  // 64 × 8 MiB = 512 MiB
	DefaultBufferNumGiant  = 1 << 5  // 32 × 32 MiB = 1 GiB
	DefaultBufferNumTitan  = 1 << 4  // 16 × 128 MiB = 2 GiB
)

Default buffer counts per tier. Smaller buffers have more instances to handle high-frequency small I/O. Larger buffers have fewer instances due to memory constraints.

View Source
const (
	NetworkUnix = sock.NetworkUnix
	NetworkIPv4 = sock.NetworkIPv4
	NetworkIPv6 = sock.NetworkIPv6
)

Network family aliases.

View Source
const (
	SizeofSockaddrAny   = sock.SizeofSockaddrAny
	SizeofSockaddrInet4 = sock.SizeofSockaddrInet4
	SizeofSockaddrInet6 = sock.SizeofSockaddrInet6
	SizeofSockaddrUnix  = sock.SizeofSockaddrUnix
)

Raw socket address size constants.

View Source
const (
	AF_UNIX  = sock.AF_UNIX
	AF_LOCAL = sock.AF_LOCAL
	AF_INET  = sock.AF_INET
	AF_INET6 = sock.AF_INET6

	SOCK_STREAM    = sock.SOCK_STREAM
	SOCK_DGRAM     = sock.SOCK_DGRAM
	SOCK_RAW       = sock.SOCK_RAW
	SOCK_SEQPACKET = sock.SOCK_SEQPACKET
	SOCK_NONBLOCK  = sock.SOCK_NONBLOCK
	SOCK_CLOEXEC   = sock.SOCK_CLOEXEC

	IPPROTO_IP   = sock.IPPROTO_IP
	IPPROTO_RAW  = sock.IPPROTO_RAW
	IPPROTO_TCP  = sock.IPPROTO_TCP
	IPPROTO_UDP  = sock.IPPROTO_UDP
	IPPROTO_IPV6 = sock.IPPROTO_IPV6
	IPPROTO_SCTP = sock.IPPROTO_SCTP

	MSG_WAITALL  = sock.MSG_WAITALL
	MSG_ZEROCOPY = sock.MSG_ZEROCOPY

	SHUT_RD   = sock.SHUT_RD
	SHUT_WR   = sock.SHUT_WR
	SHUT_RDWR = sock.SHUT_RDWR

	PROT_READ  = zcall.PROT_READ
	PROT_WRITE = zcall.PROT_WRITE

	MAP_SHARED   = zcall.MAP_SHARED
	MAP_POPULATE = zcall.MAP_POPULATE
)

Socket, protocol, message, shutdown, and memory-mapping aliases.

View Source
const (
	EPOLL_CTL_ADD = 1
	EPOLL_CTL_DEL = 2
	EPOLL_CTL_MOD = 3

	EPOLLIN  = 0x1
	EPOLLOUT = 0x4
	EPOLLET  = 0x80000000
)

Epoll constants.

View Source
const AT_FDCWD = -100

AT_FDCWD is the special value for the current working directory.

View Source
const FUTEX_BITSET_MATCH_ANY uint64 = 0xFFFFFFFF

FUTEX_BITSET_MATCH_ANY matches any waker when used as mask in FutexWait.

View Source
const (
	IORING_CQE_BUFFER_SHIFT = 16
)
View Source
const IORING_FILE_INDEX_ALLOC uint32 = 0xFFFFFFFF

IORING_FILE_INDEX_ALLOC is passed as file_index to have io_uring allocate a free direct descriptor slot. The allocated index is returned in cqe->res. Returns -ENFILE if no free slots are available.

View Source
const IORING_FIXED_FD_NO_CLOEXEC uint32 = 1 << 0

IORING_FIXED_FD_NO_CLOEXEC omits O_CLOEXEC when installing a fixed fd. By default, FixedFdInstall sets O_CLOEXEC on the new regular fd.

View Source
const (
	IORING_MEM_REGION_REG_WAIT_ARG = 1 // Expose region as registered wait arguments
)

Memory region registration flags.

View Source
const (
	IORING_MEM_REGION_TYPE_USER = 1 // User-provided memory
)

Memory region types.

View Source
const (
	IORING_NOTIF_USAGE_ZC_COPIED = 1 << 31 // Data was copied instead of zero-copy
)

Notification CQE usage flags for zero-copy operations.

View Source
const IORING_REGISTER_USE_REGISTERED_RING = zcall.IORING_REGISTER_USE_REGISTERED_RING

IORING_REGISTER_USE_REGISTERED_RING is a flag that can be OR'd with register opcodes to use a registered ring fd instead of a regular fd.

View Source
const (
	IORING_REG_WAIT_TS = 1 << 0 // Timestamp in wait region
)

Registered wait flags.

View Source
const (
	IORING_RSRC_REGISTER_SPARSE = 1 << 0 // Sparse registration
)

Resource registration flags.

View Source
const (
	IORING_RW_ATTR_FLAG_PI = 1 << 0 // PI (Protection Information) attribute
)

RW attribute flags for sqe->attr_type_mask.

View Source
const (
	IORING_ZCRX_AREA_DMABUF = 1 // Use DMA buffer
)

ZCRX area registration flags.

View Source
const (
	IO_URING_OP_SUPPORTED = 1 << 0
)
View Source
const IPPROTO_UDPLITE = sock.IPPROTO_UDPLITE

IPPROTO_UDPLITE is the UDP-Lite protocol number.

View Source
const O_LARGEFILE = 0x8000

O_LARGEFILE is the large-file open flag for openat.

View Source
const SizeofOpenHow = 24

SizeofOpenHow is the size of the OpenHow structure.

View Source
const (
	ZCRX_REG_IMPORT = 1 // Import mode
)

ZCRX registration flags.

Variables

View Source
var (
	ErrInvalidParam = iofd.ErrInvalidParam
	ErrInterrupted  = iofd.ErrInterrupted
	ErrNoMemory     = iofd.ErrNoMemory
	ErrPermission   = iofd.ErrPermission
)

Common errors reused from iofd for semantic consistency across the ecosystem.

View Source
var (
	// ErrInProgress indicates the operation is in progress.
	ErrInProgress = errors.New("uring: operation in progress")

	// ErrFaultParams indicates a fault in parameters (bad address).
	ErrFaultParams = errors.New("uring: fault in parameters")

	// ErrProcessFileLimit indicates the process file descriptor limit was reached.
	ErrProcessFileLimit = errors.New("uring: process file descriptor limit")

	// ErrSystemFileLimit indicates the system file descriptor limit was reached.
	ErrSystemFileLimit = errors.New("uring: system file descriptor limit")

	// ErrNoDevice indicates no such device.
	ErrNoDevice = errors.New("uring: no such device")

	// ErrNotSupported indicates the operation is not supported.
	ErrNotSupported = errors.New("uring: operation not supported")

	// ErrBusy indicates the resource is busy.
	ErrBusy = errors.New("uring: resource busy")

	// ErrClosed indicates the ring has already been stopped.
	ErrClosed = errors.New("uring: ring closed")

	// ErrCQOverflow indicates the CQ overflow condition was observed while the CQ appeared empty.
	ErrCQOverflow = errors.New("uring: completion queue overflow")

	// ErrExists indicates the resource already exists.
	ErrExists = errors.New("uring: already exists")

	// ErrNameTooLong indicates a pathname exceeds the kernel limit.
	ErrNameTooLong = errors.New("uring: name too long")

	// ErrNotFound indicates the resource was not found.
	ErrNotFound = errors.New("uring: not found")

	// ErrCanceled indicates the operation was canceled.
	ErrCanceled = errors.New("uring: operation canceled")

	// ErrTimedOut indicates the operation timed out.
	ErrTimedOut = errors.New("uring: operation timed out")

	// ErrConnectionRefused indicates the connection was refused.
	ErrConnectionRefused = errors.New("uring: connection refused")

	// ErrConnectionReset indicates the connection was reset.
	ErrConnectionReset = errors.New("uring: connection reset")

	// ErrNotConnected indicates the socket is not connected.
	ErrNotConnected = errors.New("uring: not connected")

	// ErrAlreadyConnected indicates the socket is already connected.
	ErrAlreadyConnected = errors.New("uring: already connected")

	// ErrAddressInUse indicates the address is already in use.
	ErrAddressInUse = errors.New("uring: address in use")

	// ErrNetworkUnreachable indicates the network is unreachable.
	ErrNetworkUnreachable = errors.New("uring: network unreachable")

	// ErrHostUnreachable indicates the host is unreachable.
	ErrHostUnreachable = errors.New("uring: host unreachable")

	// ErrBrokenPipe indicates the pipe is broken (EPIPE).
	ErrBrokenPipe = errors.New("uring: broken pipe")

	// ErrNoBufferSpace indicates no buffer space available (ENOBUFS).
	ErrNoBufferSpace = errors.New("uring: no buffer space available")
)

Error definitions for uring operations.

View Source
var ErrNotReady = errors.New("uring: listener not ready")

ErrNotReady indicates the listener is not yet ready for accept.

Functions

func AlignedMemBlock

func AlignedMemBlock() []byte

AlignedMemBlock returns a page-aligned memory block.

func CastUserData

func CastUserData[T any](ext *ExtSQE) *T

CastUserData casts `ExtSQE.UserData` to `*T`. The returned pointer is borrowed from `ext` and is valid only until release. `T` must fit within `ExtSQE.UserData`.

`ExtSQE.UserData` is raw caller-beware storage. Prefer scalar payloads here; if a raw overlay stores Go pointers, interfaces, func values, maps, slices, strings, chans, or structs containing them in these bytes, caller code must keep the live roots outside `UserData`.

func CompletionError

func CompletionError(res int32) error

CompletionError decodes a CQE result into the package error model. Non-negative results are successful byte counts or operation values. Negative results are kernel -errno values.

func ContextUserData

func ContextUserData[T any](ctx context.Context) T

ContextUserData extracts a typed value from context. Returns the zero value of T if not found.

func ContextWithUserData

func ContextWithUserData[T any](ctx context.Context, val T) context.Context

ContextWithUserData returns a new context with the typed value stored.

func PrepareListenerBind

func PrepareListenerBind(ext *ExtSQE, fd iofd.FD)

PrepareListenerBind fills ext's SQE for IORING_OP_BIND using the sockaddr stored from PrepareListenerSocket. fd is the socket from SOCKET completion.

func PrepareListenerListen

func PrepareListenerListen(ext *ExtSQE, fd iofd.FD)

PrepareListenerListen fills ext's SQE for IORING_OP_LISTEN. fd is the bound socket, backlog from the stored context.

func PrepareListenerSocket

func PrepareListenerSocket(ext *ExtSQE, domain, sockType, proto int, sa Sockaddr, backlog int, handler ListenerHandler) error

PrepareListenerSocket fills ext's SQE for IORING_OP_SOCKET and stores the sockaddr + backlog for subsequent stages. Small sockaddrs stay inline in ext.UserData; oversized ones stay anchored in the pooled sidecar. After calling this, submit with ring.SubmitExtended(PackExtended(ext)).

ext must be a pool-borrowed slot obtained from ContextPools.Extended. Passing a non-pooled ExtSQE is undefined behavior: the sidecar anchors live past the end of a standalone ExtSQE object.

A nil handler is normalized to NoopListenerHandler.

func SetListenerReady

func SetListenerReady(ext *ExtSQE)

SetListenerReady marks the listener context as ready. Call after LISTEN completes successfully.

Types

type Addr

type Addr = sock.Addr

Addr is the network address interface used by connect and bind helpers.

type AttrPI

type AttrPI struct {
	Flags  uint16 // PI flags
	AppTag uint16 // Application tag
	Len    uint32 // Length
	Addr   uint64 // Address
	Seed   uint64 // Seed value
	// contains filtered or unexported fields
}

AttrPI is the PI attribute information for read/write operations. Matches struct io_uring_attr_pi in Linux.

type BigBuffer

type BigBuffer = iobuf.BigBuffer

Buffer types re-exported from iobuf.

type BufferGroupsConfig

type BufferGroupsConfig struct {
	PicoNum   int // 32 B buffers
	NanoNum   int // 128 B buffers
	MicroNum  int // 512 B buffers
	SmallNum  int // 2 KiB buffers
	MediumNum int // 8 KiB buffers
	BigNum    int // 32 KiB buffers
	LargeNum  int // 128 KiB buffers
	GreatNum  int // 512 KiB buffers
	HugeNum   int // 2 MiB buffers
	VastNum   int // 8 MiB buffers
	GiantNum  int // 32 MiB buffers
	TitanNum  int // 128 MiB buffers
}

BufferGroupsConfig configures buffer counts for each tier.

Each field specifies the number of buffers to allocate for that tier. A zero count disables the tier (no memory allocated).

Memory usage calculation:

Total = Sum(TierSize × TierCount × Scale)

Example with the default config (Scale=1):

Pico:   32768 × 32 B   = 1 MiB
Nano:   16384 × 128 B  = 2 MiB
Micro:  8192 × 512 B   = 4 MiB
Small:  4096 × 2 KiB   = 8 MiB
Medium: 2048 × 8 KiB   = 16 MiB
Big:    1024 × 32 KiB  = 32 MiB
Large:  512 × 128 KiB  = 64 MiB
                Total  ≈ 127 MiB per scale

func BufferConfigForBudget

func BufferConfigForBudget(budget int) (BufferGroupsConfig, int)

BufferConfigForBudget returns the BufferGroupsConfig and scale selected for the given memory budget. Use this when you need to inspect or model the largest tier set that fits in the remaining buffer-group memory.

Budget handling matches OptionsForBudget's memory reservation:

  • registered buffers use 25% of the budget (minimum 8 MiB)
  • ring overhead is reserved from the same budget
  • buffer groups use the remaining memory
  • if the remaining memory cannot fit the minimal tier set, the returned config is empty and scale is 0

OptionsForBudget does not carry BufferGroupsConfig through Options. It sets Options.MultiSizeBuffer from the default buffer-group shape that New actually allocates, so the scale returned here can differ from OptionsForBudget for budgets where a smaller tier set fits but the default tier set does not.

Example:

cfg, scale := BufferConfigForBudget(256 * MiB)
// cfg contains tier configuration, scale is the multiplier
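
The registered-buffer share described above (25% of the budget with an 8 MiB floor) can be sketched as plain arithmetic. This models only that first split; the ring-overhead reservation is not specified here, so it is omitted, and `registeredShare` is an illustrative name, not a package function:

```go
package main

import "fmt"

const MiB = 1 << 20

// registeredShare models the documented split: registered buffers take
// 25% of the budget, with an 8 MiB minimum. Ring overhead is not modeled.
func registeredShare(budget int) int {
	r := budget / 4
	if r < 8*MiB {
		r = 8 * MiB
	}
	return r
}

func main() {
	fmt.Println(registeredShare(256*MiB) / MiB) // 64
	fmt.Println(registeredShare(16*MiB) / MiB)  // 8 (floor applies)
}
```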

func DefaultBufferGroupsConfig

func DefaultBufferGroupsConfig() BufferGroupsConfig

DefaultBufferGroupsConfig returns the default configuration. It enables the first 7 tiers (Pico through Large), totaling ~127 MiB per scale.

func FullBufferGroupsConfig

func FullBufferGroupsConfig() BufferGroupsConfig

FullBufferGroupsConfig returns configuration with all 12 tiers enabled. It uses ~4 GiB per scale. Use it only on high-memory systems.

func MinimalBufferGroupsConfig

func MinimalBufferGroupsConfig() BufferGroupsConfig

MinimalBufferGroupsConfig returns a reduced configuration. It enables the first 5 tiers (Pico through Medium), totaling ~31 MiB per scale.

type BundleIterator

type BundleIterator struct {
	// contains filtered or unexported fields
}

BundleIterator iterates over buffers consumed in a bundle receive operation. Bundle receives allow receiving multiple buffers in a single syscall, with data spanning the logical sequence of buffer IDs starting at the CQE's first ID.

The iterator handles buffer ring wrap-around using the ring mask.

func NewBundleIterator

func NewBundleIterator(cqe CQEView, bufBacking []byte, bufSize int, ringEntries int) (BundleIterator, bool)

NewBundleIterator creates an iterator for the buffers consumed by a bundle CQE.

Parameters:

  • cqe: the CQE from a bundle receive operation
  • bufBacking: backing memory for the full ring, such as the slice returned by AlignedMem
  • bufSize: size of each buffer in the ring
  • ringEntries: number of entries in the buffer ring; must be a power of two

bufBacking must remain alive for the iterator's lifetime and must cover at least bufSize*ringEntries bytes.

Returns a zero BundleIterator and false if the CQE indicates no data was received or if the constructor arguments are invalid.
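
The documented preconditions (power-of-two ring, backing coverage of at least bufSize*ringEntries bytes) can be sketched as a standalone check; `validBundleArgs` is an illustrative helper, not the package's internal validation:

```go
package main

import "fmt"

// validBundleArgs mirrors the documented preconditions for
// NewBundleIterator: ringEntries must be a power of two and the
// backing slice must cover the full ring.
func validBundleArgs(backingLen, bufSize, ringEntries int) bool {
	powerOfTwo := ringEntries > 0 && ringEntries&(ringEntries-1) == 0
	covers := bufSize > 0 && backingLen >= bufSize*ringEntries
	return powerOfTwo && covers
}

func main() {
	fmt.Println(validBundleArgs(2048*64, 2048, 64)) // true
	fmt.Println(validBundleArgs(2048*64, 2048, 63)) // false: not a power of two
	fmt.Println(validBundleArgs(100, 2048, 64))     // false: backing too small
}
```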

func (BundleIterator) All

func (it BundleIterator) All() iter.Seq[[]byte]

All returns an iterator function for use with Go 1.23+ range-over-func. Each iteration yields one buffer from the bundle.

Usage:

for buf := range it.All() {
    process(buf)
}

func (BundleIterator) AllWithSlotID

func (it BundleIterator) AllWithSlotID() iter.Seq2[uint16, []byte]

AllWithSlotID returns an iterator that yields both buffer data and masked ring slot ID. Useful when you need to track which ring slots were consumed.

Usage:

for id, buf := range it.AllWithSlotID() {
    fmt.Printf("Slot ID %d: %d bytes\n", id, len(buf))
}

func (BundleIterator) Buffer

func (it BundleIterator) Buffer(index int) []byte

Buffer returns the buffer at the given index without advancing the iterator. Index must be in range [0, Count()). The last buffer may be partial.

func (BundleIterator) Collect

func (it BundleIterator) Collect() [][]byte

Collect returns all buffers as a slice. This allocates a new slice; for zero-allocation iteration, use All().

func (BundleIterator) CopyTo

func (it BundleIterator) CopyTo(dst []byte) int

CopyTo copies all bundle data to the destination slice. Returns the number of bytes copied.

func (BundleIterator) Count

func (it BundleIterator) Count() int

Count returns the number of buffers consumed in this bundle.

func (BundleIterator) Recycle

func (it BundleIterator) Recycle(ur *Uring)

Recycle returns all consumed buffers to the buffer ring via provide and commits them with advance. This MUST be called after the bundle data has been fully processed to prevent buffer ring entry leaks.

Recycle is single-threaded: do not call it concurrently with another Recycle on the same Uring, or with any other path that can race with buffer ring provide/advance.

The group info (gidOffset, group) is captured at construction time.

func (BundleIterator) SlotID

func (it BundleIterator) SlotID(index int) uint16

SlotID returns the masked ring slot ID at the given index in the bundle. Handles ring wrap-around automatically. Index must be in range [0, Count()).

func (BundleIterator) TotalBytes

func (it BundleIterator) TotalBytes() int

TotalBytes returns the total bytes received in this bundle.

type CQEView

type CQEView struct {
	Res   int32  // Completion result (directly accessible)
	Flags uint32 // CQE flags (directly accessible)
	// contains filtered or unexported fields
}

CQEView provides a view into a completion queue entry. It exposes kernel completion facts directly and lets caller-side runtime code decide how to route or interpret them. When available, it also exposes the submission context that produced those facts. A copied CQEView is a completion observation, not durable route state. If caller code stores it beyond the current dispatch turn, caller code must keep its own route state.

Property Patterns

| FullSQE() | Extended() | Mode     | Available Data                              |
|-----------|------------|----------|---------------------------------------------|
| false     | false      | Direct   | Op, SQE flags, BufGroup, FD, Res, CQE flags |
| true      | false      | Indirect | + full ioUringSqe copy                      |
| true      | true       | Extended | + borrowed `ExtSQE` escape hatch            |

Usage

n, err := ring.Wait(cqes)
for i := range n {
    cqe := cqes[i]
    // Observe the kernel facts first.
    if err := cqe.Err(); err != nil {
        return fmt.Errorf("completion failed: op=%d fd=%d: %w", cqe.Op(), cqe.FD(), err)
    }
    fmt.Printf("completed op=%d on fd=%d with res=%d\n", cqe.Op(), cqe.FD(), cqe.Res)
    if cqe.HasMore() {
        // Caller-side runtime code decides whether to keep routing this live stream.
    }
    if cqe.FullSQE() {
        // Indirect and Extended modes also expose the submitted SQE.
        fmt.Printf("submitted opcode=%d\n", cqe.SQE().Opcode())
    }
}

func (*CQEView) BufGroup

func (c *CQEView) BufGroup() uint16

BufGroup returns the observed submission buffer group index. It is non-zero only when buffer selection was part of the submission.

func (*CQEView) BufID

func (c *CQEView) BufID() uint16

BufID returns the buffer ID from CQE flags. Only valid when IORING_CQE_F_BUFFER flag is set.

func (*CQEView) BundleBuffers

func (c *CQEView) BundleBuffers(bufferSize int) (startID uint16, count int)

BundleBuffers returns the logical range of buffer IDs consumed. The returned startID is the first buffer ID; count is the number of buffers. The range [startID, startID+count) is logical and may wrap around the ring. Callers must apply (id & ringMask) to obtain physical buffer IDs, or use BundleIterator which handles wrap-around automatically.

func (*CQEView) BundleCount

func (c *CQEView) BundleCount(bufferSize int) int

BundleCount returns the number of buffers consumed in a bundle operation. For receive bundles, the count is derived from the result (bytes received) divided by the buffer size, so an accurate count requires the actual buffer size used by the group.
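
The logical-to-physical mapping and the count derivation can be sketched together. The ceiling division reflects that the last buffer may be partial, and masking works only because ring entries are a power of two; `bundleRange` is an illustrative helper, not a package function:

```go
package main

import "fmt"

// bundleRange derives the buffer count from bytes received (ceiling
// division, since the last buffer may be partial) and maps each logical
// ID to its physical slot with the ring mask.
func bundleRange(startID uint16, res, bufSize, ringEntries int) []uint16 {
	count := (res + bufSize - 1) / bufSize
	mask := uint16(ringEntries - 1) // valid: ringEntries is a power of two
	ids := make([]uint16, 0, count)
	for i := 0; i < count; i++ {
		ids = append(ids, (startID+uint16(i))&mask)
	}
	return ids
}

func main() {
	// 5000 bytes in 2 KiB buffers starting at logical ID 62 on a
	// 64-entry ring: three buffers, wrapping from slot 63 back to 0.
	fmt.Println(bundleRange(62, 5000, 2048, 64)) // [62 63 0]
}
```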

func (*CQEView) BundleStartID

func (c *CQEView) BundleStartID() uint16

BundleStartID returns the starting buffer ID for a bundle operation. For bundle receives, buffers are consumed contiguously from this ID. Only valid when IORING_CQE_F_BUFFER flag is set.

func (*CQEView) Context

func (c *CQEView) Context() SQEContext

Context returns the underlying SQEContext. Use this for advanced mode-specific inspection beyond the CQEView helpers.

func (*CQEView) Err

func (c *CQEView) Err() error

Err decodes c.Res as a completion error.

func (*CQEView) ExtSQE

func (c *CQEView) ExtSQE() *ExtSQE

ExtSQE returns the borrowed ExtSQE backing Extended mode contexts. Caller should check Extended() first.

func (*CQEView) Extended

func (c *CQEView) Extended() bool

Extended reports whether extended user data is available. Returns true only for Extended mode.

func (*CQEView) FD

func (c *CQEView) FD() iofd.FD

FD returns the file descriptor associated with the operation. Always available.

func (*CQEView) FullSQE

func (c *CQEView) FullSQE() bool

FullSQE reports whether full SQE information is available. Returns true for Indirect and Extended modes.

func (*CQEView) HasBuffer

func (c *CQEView) HasBuffer() bool

HasBuffer reports whether a buffer ID is available in the flags.

func (*CQEView) HasBufferMore

func (c *CQEView) HasBufferMore() bool

HasBufferMore reports whether the buffer was partially consumed (incremental mode). When set, the same buffer ID remains valid for additional data.

func (*CQEView) HasMore

func (c *CQEView) HasMore() bool

HasMore reports whether more completions are coming (multishot).

func (*CQEView) IsNotification

func (c *CQEView) IsNotification() bool

IsNotification reports whether this is a zero-copy notification CQE. Zero-copy sends generate two CQEs: one for completion, one for notification when the buffer can be reused.

func (*CQEView) Op

func (c *CQEView) Op() uint8

Op returns the IORING_OP_* opcode. Always available (extracted from Direct mode context or from SQE in other modes).

func (*CQEView) SQE

func (c *CQEView) SQE() SQEView

SQE returns a view of the submitted SQE when the context retains one. Caller should check FullSQE() first; Direct mode returns an invalid view because it keeps only compact completion-context facts.

func (*CQEView) SocketNonEmpty

func (c *CQEView) SocketNonEmpty() bool

SocketNonEmpty reports whether the socket has more data available. This is set when a short read/recv occurred but more data remains.
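
The flag helpers above each decode one bit of the CQE flags word. The bit assignments below are the upstream io_uring ABI constants (include/uapi/linux/io_uring.h); the decoding sketch is self-contained and illustrates what such helpers observe, not this package's internals:

```go
package main

import "fmt"

// Upstream io_uring CQE flag bits.
const (
	cqeFBuffer       = 1 << 0 // buffer ID stored in the upper flag bits
	cqeFMore         = 1 << 1 // more completions coming (multishot)
	cqeFSockNonempty = 1 << 2 // socket has more data available
	cqeFNotif        = 1 << 3 // zero-copy notification CQE
	cqeBufferShift   = 16     // buffer ID occupies flags >> 16
)

func main() {
	// A multishot receive completion that selected buffer 7.
	flags := uint32(7<<cqeBufferShift) | cqeFBuffer | cqeFMore
	fmt.Println(flags&cqeFBuffer != 0)           // true:  HasBuffer
	fmt.Println(uint16(flags >> cqeBufferShift)) // 7:     BufID
	fmt.Println(flags&cqeFMore != 0)             // true:  HasMore
	fmt.Println(flags&cqeFNotif != 0)            // false: IsNotification
}
```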

type CloneBuffers

type CloneBuffers struct {
	SrcFD  uint32 // Source ring file descriptor
	Flags  uint32 // IORING_REGISTER_SRC_* flags
	SrcOff uint32 // Source buffer offset
	DstOff uint32 // Destination buffer offset
	Nr     uint32 // Number of buffers to clone
	// contains filtered or unexported fields
}

CloneBuffers describes a buffer clone operation. Matches struct io_uring_clone_buffers in Linux.

type ContextPools

type ContextPools struct {
	// contains filtered or unexported fields
}

ContextPools holds pooled IndirectSQE and ExtSQE contexts. IndirectSQE slots use explicit aligned backing; extended slots pair each ExtSQE with adjacent GC-visible sidecar anchors.

func NewContextPools

func NewContextPools(capacity int) *ContextPools

NewContextPools creates pooled IndirectSQE and ExtSQE contexts with the given per-pool capacity. New pools are ready for immediate use.

func (*ContextPools) Capacity

func (p *ContextPools) Capacity() int

Capacity returns the per-pool slot count.

func (*ContextPools) Extended

func (p *ContextPools) Extended() *ExtSQE

Extended borrows an ExtSQE from the pool. Returns nil if exhausted.

func (*ContextPools) ExtendedAvailable

func (p *ContextPools) ExtendedAvailable() int

ExtendedAvailable returns the number of ExtSQE slots available.

func (*ContextPools) Indirect

func (p *ContextPools) Indirect() *IndirectSQE

Indirect borrows an IndirectSQE from the pool. Returns nil if exhausted.

func (*ContextPools) IndirectAvailable

func (p *ContextPools) IndirectAvailable() int

IndirectAvailable returns the number of IndirectSQE slots available.

func (*ContextPools) PutExtended

func (p *ContextPools) PutExtended(ext *ExtSQE)

PutExtended returns an ExtSQE to the pool and clears its sidecar anchors.

func (*ContextPools) PutIndirect

func (p *ContextPools) PutIndirect(indirect *IndirectSQE)

PutIndirect returns an IndirectSQE to the pool.

func (*ContextPools) Reset

func (p *ContextPools) Reset()

Reset scrubs pooled slot state and reinitializes both pool queues, making all slots available again.
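
The borrow/return discipline described above can be illustrated with a minimal fixed-capacity free list; this is a sketch of the pattern, not the package's pooling code:

```go
package main

import "fmt"

// pool is a minimal fixed-capacity free list: Get returns nil when
// exhausted (mirroring Indirect()/Extended()), Put makes a slot
// available again.
type pool struct {
	free []*int
}

func newPool(capacity int) *pool {
	p := &pool{free: make([]*int, 0, capacity)}
	for i := 0; i < capacity; i++ {
		p.free = append(p.free, new(int))
	}
	return p
}

func (p *pool) Get() *int {
	if len(p.free) == 0 {
		return nil // exhausted
	}
	s := p.free[len(p.free)-1]
	p.free = p.free[:len(p.free)-1]
	return s
}

func (p *pool) Put(s *int) { p.free = append(p.free, s) }

func main() {
	p := newPool(1)
	a := p.Get()
	fmt.Println(a != nil, p.Get() == nil) // true true
	p.Put(a)
	fmt.Println(p.Get() != nil) // true: the slot is reusable after Put
}
```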

type Ctx0

type Ctx0 struct {
	Fn   Handler  // 8 bytes
	Data [56]byte // 56 bytes
}

Ctx0 has 0 refs, 0 vals, and 56 bytes of data.

func Ctx0Of

func Ctx0Of(sqe *ExtSQE) *Ctx0

Ctx0Of is a shorthand for ViewCtx(sqe).Vals0(). Use when you need just a handler with max data space (56B).

func CtxOf

func CtxOf(sqe *ExtSQE) *Ctx0

CtxOf is a shorthand for ViewCtx(sqe).Vals0().

type Ctx0V1

type Ctx0V1 struct {
	Fn   Handler  // 8 bytes
	Val1 int64    // 8 bytes
	Data [48]byte // 48 bytes
}

Ctx0V1 has 0 refs, 1 val, and 48 bytes of data.

func Ctx0V1Of

func Ctx0V1Of(sqe *ExtSQE) *Ctx0V1

Ctx0V1Of is a shorthand for ViewCtx(sqe).Vals1(). Use when you need 0 refs and 1 val (e.g., handler + timestamp).

func CtxV1Of

func CtxV1Of(sqe *ExtSQE) *Ctx0V1

CtxV1Of is a shorthand for ViewCtx(sqe).Vals1().

type Ctx0V2

type Ctx0V2 struct {
	Fn   Handler  // 8 bytes
	Val1 int64    // 8 bytes
	Val2 int64    // 8 bytes
	Data [40]byte // 40 bytes
}

Ctx0V2 has 0 refs, 2 vals, and 40 bytes of data.

func Ctx0V2Of

func Ctx0V2Of(sqe *ExtSQE) *Ctx0V2

Ctx0V2Of is a shorthand for ViewCtx(sqe).Vals2(). Use when you need 0 refs and 2 vals (e.g., handler + offset + length).

func CtxV2Of

func CtxV2Of(sqe *ExtSQE) *Ctx0V2

CtxV2Of is a shorthand for ViewCtx(sqe).Vals2().

type Ctx0V3

type Ctx0V3 struct {
	Fn   Handler  // 8 bytes
	Val1 int64    // 8 bytes
	Val2 int64    // 8 bytes
	Val3 int64    // 8 bytes
	Data [32]byte // 32 bytes
}

Ctx0V3 has 0 refs, 3 vals, and 32 bytes of data.

func Ctx0V3Of

func Ctx0V3Of(sqe *ExtSQE) *Ctx0V3

Ctx0V3Of is a shorthand for ViewCtx(sqe).Vals3(). Use when you need 0 refs and 3 vals.

func CtxV3Of

func CtxV3Of(sqe *ExtSQE) *Ctx0V3

CtxV3Of is a shorthand for ViewCtx(sqe).Vals3().

type Ctx0V4

type Ctx0V4 struct {
	Fn   Handler  // 8 bytes
	Val1 int64    // 8 bytes
	Val2 int64    // 8 bytes
	Val3 int64    // 8 bytes
	Val4 int64    // 8 bytes
	Data [24]byte // 24 bytes
}

Ctx0V4 has 0 refs, 4 vals, and 24 bytes of data.

func Ctx0V4Of

func Ctx0V4Of(sqe *ExtSQE) *Ctx0V4

Ctx0V4Of is a shorthand for ViewCtx(sqe).Vals4(). Use when you need 0 refs and 4 vals.

func CtxV4Of

func CtxV4Of(sqe *ExtSQE) *Ctx0V4

CtxV4Of is a shorthand for ViewCtx(sqe).Vals4().

type Ctx0V5

type Ctx0V5 struct {
	Fn   Handler  // 8 bytes
	Val1 int64    // 8 bytes
	Val2 int64    // 8 bytes
	Val3 int64    // 8 bytes
	Val4 int64    // 8 bytes
	Val5 int64    // 8 bytes
	Data [16]byte // 16 bytes
}

Ctx0V5 has 0 refs, 5 vals, and 16 bytes of data.

func Ctx0V5Of

func Ctx0V5Of(sqe *ExtSQE) *Ctx0V5

Ctx0V5Of is a shorthand for ViewCtx(sqe).Vals5(). Use when you need 0 refs and 5 vals.

func CtxV5Of

func CtxV5Of(sqe *ExtSQE) *Ctx0V5

CtxV5Of is a shorthand for ViewCtx(sqe).Vals5().

type Ctx0V6

type Ctx0V6 struct {
	Fn   Handler // 8 bytes
	Val1 int64   // 8 bytes
	Val2 int64   // 8 bytes
	Val3 int64   // 8 bytes
	Val4 int64   // 8 bytes
	Val5 int64   // 8 bytes
	Val6 int64   // 8 bytes
	Data [8]byte // 8 bytes
}

Ctx0V6 has 0 refs, 6 vals, and 8 bytes of data.

func Ctx0V6Of

func Ctx0V6Of(sqe *ExtSQE) *Ctx0V6

Ctx0V6Of is a shorthand for ViewCtx(sqe).Vals6(). Use when you need 0 refs and 6 vals.

func CtxV6Of

func CtxV6Of(sqe *ExtSQE) *Ctx0V6

CtxV6Of is a shorthand for ViewCtx(sqe).Vals6().

type Ctx0V7

type Ctx0V7 struct {
	Fn   Handler // 8 bytes
	Val1 int64   // 8 bytes
	Val2 int64   // 8 bytes
	Val3 int64   // 8 bytes
	Val4 int64   // 8 bytes
	Val5 int64   // 8 bytes
	Val6 int64   // 8 bytes
	Val7 int64   // 8 bytes
}

Ctx0V7 has 0 refs, 7 vals, and 0 bytes of data.

func Ctx0V7Of

func Ctx0V7Of(sqe *ExtSQE) *Ctx0V7

Ctx0V7Of is a shorthand for ViewCtx(sqe).Vals7(). Use when you need 0 refs and 7 vals.

func CtxV7Of

func CtxV7Of(sqe *ExtSQE) *Ctx0V7

CtxV7Of is a shorthand for ViewCtx(sqe).Vals7().

type Ctx1

type Ctx1[T1 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Data [48]byte // 48 bytes
}

Ctx1 has 1 ref, 0 vals, 48 bytes data.

func Ctx1Of

func Ctx1Of[T any](sqe *ExtSQE) *Ctx1[T]

Ctx1Of is a shorthand for ViewCtx1[T](sqe).Vals0(). Use when you need 1 ref and 0 vals (e.g., handler + connection ref).

type Ctx1V1

type Ctx1V1[T1 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Val1 int64    // 8 bytes
	Data [40]byte // 40 bytes
}

Ctx1V1 has 1 ref, 1 val, 40 bytes data.

func Ctx1V1Of

func Ctx1V1Of[T any](sqe *ExtSQE) *Ctx1V1[T]

Ctx1V1Of is a shorthand for ViewCtx1[T](sqe).Vals1(), the most common case. Use when you need 1 ref and 1 val (e.g., connection + timestamp).

type Ctx1V2

type Ctx1V2[T1 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Val1 int64    // 8 bytes
	Val2 int64    // 8 bytes
	Data [32]byte // 32 bytes
}

Ctx1V2 has 1 ref, 2 vals, 32 bytes data.

func Ctx1V2Of

func Ctx1V2Of[T any](sqe *ExtSQE) *Ctx1V2[T]

Ctx1V2Of is a shorthand for ViewCtx1[T](sqe).Vals2(). Use when you need 1 ref and 2 vals (e.g., connection + offset + length).

type Ctx1V3

type Ctx1V3[T1 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Val1 int64    // 8 bytes
	Val2 int64    // 8 bytes
	Val3 int64    // 8 bytes
	Data [24]byte // 24 bytes
}

Ctx1V3 has 1 ref, 3 vals, 24 bytes data.

func Ctx1V3Of

func Ctx1V3Of[T any](sqe *ExtSQE) *Ctx1V3[T]

Ctx1V3Of is a shorthand for ViewCtx1[T](sqe).Vals3(). Use when you need 1 ref and 3 vals.

type Ctx1V4

type Ctx1V4[T1 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Val1 int64    // 8 bytes
	Val2 int64    // 8 bytes
	Val3 int64    // 8 bytes
	Val4 int64    // 8 bytes
	Data [16]byte // 16 bytes
}

Ctx1V4 has 1 ref, 4 vals, 16 bytes data.

func Ctx1V4Of

func Ctx1V4Of[T any](sqe *ExtSQE) *Ctx1V4[T]

Ctx1V4Of is a shorthand for ViewCtx1[T](sqe).Vals4(). Use when you need 1 ref and 4 vals.

type Ctx1V5

type Ctx1V5[T1 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Val1 int64   // 8 bytes
	Val2 int64   // 8 bytes
	Val3 int64   // 8 bytes
	Val4 int64   // 8 bytes
	Val5 int64   // 8 bytes
	Data [8]byte // 8 bytes
}

Ctx1V5 has 1 ref, 5 vals, 8 bytes data.

func Ctx1V5Of

func Ctx1V5Of[T any](sqe *ExtSQE) *Ctx1V5[T]

Ctx1V5Of is a shorthand for ViewCtx1[T](sqe).Vals5(). Use when you need 1 ref and 5 vals.

type Ctx1V6

type Ctx1V6[T1 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Val1 int64   // 8 bytes
	Val2 int64   // 8 bytes
	Val3 int64   // 8 bytes
	Val4 int64   // 8 bytes
	Val5 int64   // 8 bytes
	Val6 int64   // 8 bytes
}

Ctx1V6 has 1 ref, 6 vals, 0 bytes data.

func Ctx1V6Of

func Ctx1V6Of[T any](sqe *ExtSQE) *Ctx1V6[T]

Ctx1V6Of is a shorthand for ViewCtx1[T](sqe).Vals6(). Use when you need 1 ref and 6 vals.

type Ctx2

type Ctx2[T1, T2 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Ref2 *T2      // 8 bytes
	Data [40]byte // 40 bytes
}

Ctx2 has 2 refs, 0 vals, 40 bytes data.

func Ctx2Of

func Ctx2Of[T1, T2 any](sqe *ExtSQE) *Ctx2[T1, T2]

Ctx2Of is a shorthand for ViewCtx2[T1,T2](sqe).Vals0(). Use when you need 2 refs and 0 vals (e.g., conn + buffer).

type Ctx2V1

type Ctx2V1[T1, T2 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Ref2 *T2      // 8 bytes
	Val1 int64    // 8 bytes
	Data [32]byte // 32 bytes
}

Ctx2V1 has 2 refs, 1 val, 32 bytes data.

func Ctx2V1Of

func Ctx2V1Of[T1, T2 any](sqe *ExtSQE) *Ctx2V1[T1, T2]

Ctx2V1Of is a shorthand for ViewCtx2[T1,T2](sqe).Vals1(). Use when you need 2 refs and 1 val (e.g., conn + buf + offset).

type Ctx2V2

type Ctx2V2[T1, T2 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Ref2 *T2      // 8 bytes
	Val1 int64    // 8 bytes
	Val2 int64    // 8 bytes
	Data [24]byte // 24 bytes
}

Ctx2V2 has 2 refs, 2 vals, 24 bytes data.

func Ctx2V2Of

func Ctx2V2Of[T1, T2 any](sqe *ExtSQE) *Ctx2V2[T1, T2]

Ctx2V2Of is a shorthand for ViewCtx2[T1,T2](sqe).Vals2(). Use when you need 2 refs and 2 vals (e.g., conn + buf + offset + length).

type Ctx2V3

type Ctx2V3[T1, T2 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Ref2 *T2      // 8 bytes
	Val1 int64    // 8 bytes
	Val2 int64    // 8 bytes
	Val3 int64    // 8 bytes
	Data [16]byte // 16 bytes
}

Ctx2V3 has 2 refs, 3 vals, 16 bytes data.

func Ctx2V3Of

func Ctx2V3Of[T1, T2 any](sqe *ExtSQE) *Ctx2V3[T1, T2]

Ctx2V3Of is a shorthand for ViewCtx2[T1,T2](sqe).Vals3(). Use when you need 2 refs and 3 vals.

type Ctx2V4

type Ctx2V4[T1, T2 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Ref2 *T2     // 8 bytes
	Val1 int64   // 8 bytes
	Val2 int64   // 8 bytes
	Val3 int64   // 8 bytes
	Val4 int64   // 8 bytes
	Data [8]byte // 8 bytes
}

Ctx2V4 has 2 refs, 4 vals, 8 bytes data.

func Ctx2V4Of

func Ctx2V4Of[T1, T2 any](sqe *ExtSQE) *Ctx2V4[T1, T2]

Ctx2V4Of is a shorthand for ViewCtx2[T1,T2](sqe).Vals4(). Use when you need 2 refs and 4 vals.

type Ctx2V5

type Ctx2V5[T1, T2 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Ref2 *T2     // 8 bytes
	Val1 int64   // 8 bytes
	Val2 int64   // 8 bytes
	Val3 int64   // 8 bytes
	Val4 int64   // 8 bytes
	Val5 int64   // 8 bytes
}

Ctx2V5 has 2 refs, 5 vals, 0 bytes data.

func Ctx2V5Of

func Ctx2V5Of[T1, T2 any](sqe *ExtSQE) *Ctx2V5[T1, T2]

Ctx2V5Of is a shorthand for ViewCtx2[T1,T2](sqe).Vals5(). Use when you need 2 refs and 5 vals.

type Ctx3

type Ctx3[T1, T2, T3 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Ref2 *T2      // 8 bytes
	Ref3 *T3      // 8 bytes
	Data [32]byte // 32 bytes
}

Ctx3 has 3 refs, 0 vals, 32 bytes data.

func Ctx3Of

func Ctx3Of[T1, T2, T3 any](sqe *ExtSQE) *Ctx3[T1, T2, T3]

Ctx3Of is a shorthand for ViewCtx3[T1,T2,T3](sqe).Vals0(). Use when you need 3 refs and 0 vals.

type Ctx3V1

type Ctx3V1[T1, T2, T3 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Ref2 *T2      // 8 bytes
	Ref3 *T3      // 8 bytes
	Val1 int64    // 8 bytes
	Data [24]byte // 24 bytes
}

Ctx3V1 has 3 refs, 1 val, 24 bytes data.

func Ctx3V1Of

func Ctx3V1Of[T1, T2, T3 any](sqe *ExtSQE) *Ctx3V1[T1, T2, T3]

Ctx3V1Of is a shorthand for ViewCtx3[T1,T2,T3](sqe).Vals1(). Use when you need 3 refs and 1 val.

type Ctx3V2

type Ctx3V2[T1, T2, T3 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Ref2 *T2      // 8 bytes
	Ref3 *T3      // 8 bytes
	Val1 int64    // 8 bytes
	Val2 int64    // 8 bytes
	Data [16]byte // 16 bytes
}

Ctx3V2 has 3 refs, 2 vals, 16 bytes data.

func Ctx3V2Of

func Ctx3V2Of[T1, T2, T3 any](sqe *ExtSQE) *Ctx3V2[T1, T2, T3]

Ctx3V2Of is a shorthand for ViewCtx3[T1,T2,T3](sqe).Vals2(). Use when you need 3 refs and 2 vals.

type Ctx3V3

type Ctx3V3[T1, T2, T3 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Ref2 *T2     // 8 bytes
	Ref3 *T3     // 8 bytes
	Val1 int64   // 8 bytes
	Val2 int64   // 8 bytes
	Val3 int64   // 8 bytes
	Data [8]byte // 8 bytes
}

Ctx3V3 has 3 refs, 3 vals, 8 bytes data.

func Ctx3V3Of

func Ctx3V3Of[T1, T2, T3 any](sqe *ExtSQE) *Ctx3V3[T1, T2, T3]

Ctx3V3Of is a shorthand for ViewCtx3[T1,T2,T3](sqe).Vals3(). Use when you need 3 refs and 3 vals.

type Ctx3V4

type Ctx3V4[T1, T2, T3 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Ref2 *T2     // 8 bytes
	Ref3 *T3     // 8 bytes
	Val1 int64   // 8 bytes
	Val2 int64   // 8 bytes
	Val3 int64   // 8 bytes
	Val4 int64   // 8 bytes
}

Ctx3V4 has 3 refs, 4 vals, 0 bytes data.

func Ctx3V4Of

func Ctx3V4Of[T1, T2, T3 any](sqe *ExtSQE) *Ctx3V4[T1, T2, T3]

Ctx3V4Of is a shorthand for ViewCtx3[T1,T2,T3](sqe).Vals4(). Use when you need 3 refs and 4 vals.

type Ctx4

type Ctx4[T1, T2, T3, T4 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Ref2 *T2      // 8 bytes
	Ref3 *T3      // 8 bytes
	Ref4 *T4      // 8 bytes
	Data [24]byte // 24 bytes
}

Ctx4 has 4 refs, 0 vals, 24 bytes data.

func Ctx4Of

func Ctx4Of[T1, T2, T3, T4 any](sqe *ExtSQE) *Ctx4[T1, T2, T3, T4]

Ctx4Of is a shorthand for ViewCtx4[T1,T2,T3,T4](sqe).Vals0(). Use when you need 4 refs and 0 vals.

type Ctx4V1

type Ctx4V1[T1, T2, T3, T4 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Ref2 *T2      // 8 bytes
	Ref3 *T3      // 8 bytes
	Ref4 *T4      // 8 bytes
	Val1 int64    // 8 bytes
	Data [16]byte // 16 bytes
}

Ctx4V1 has 4 refs, 1 val, 16 bytes data.

func Ctx4V1Of

func Ctx4V1Of[T1, T2, T3, T4 any](sqe *ExtSQE) *Ctx4V1[T1, T2, T3, T4]

Ctx4V1Of is a shorthand for ViewCtx4[T1,T2,T3,T4](sqe).Vals1(). Use when you need 4 refs and 1 val.

type Ctx4V2

type Ctx4V2[T1, T2, T3, T4 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Ref2 *T2     // 8 bytes
	Ref3 *T3     // 8 bytes
	Ref4 *T4     // 8 bytes
	Val1 int64   // 8 bytes
	Val2 int64   // 8 bytes
	Data [8]byte // 8 bytes
}

Ctx4V2 has 4 refs, 2 vals, 8 bytes data.

func Ctx4V2Of

func Ctx4V2Of[T1, T2, T3, T4 any](sqe *ExtSQE) *Ctx4V2[T1, T2, T3, T4]

Ctx4V2Of is a shorthand for ViewCtx4[T1,T2,T3,T4](sqe).Vals2(). Use when you need 4 refs and 2 vals.

type Ctx4V3

type Ctx4V3[T1, T2, T3, T4 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Ref2 *T2     // 8 bytes
	Ref3 *T3     // 8 bytes
	Ref4 *T4     // 8 bytes
	Val1 int64   // 8 bytes
	Val2 int64   // 8 bytes
	Val3 int64   // 8 bytes
}

Ctx4V3 has 4 refs, 3 vals, 0 bytes data.

func Ctx4V3Of

func Ctx4V3Of[T1, T2, T3, T4 any](sqe *ExtSQE) *Ctx4V3[T1, T2, T3, T4]

Ctx4V3Of is a shorthand for ViewCtx4[T1,T2,T3,T4](sqe).Vals3(). Use when you need 4 refs and 3 vals.

type Ctx5

type Ctx5[T1, T2, T3, T4, T5 any] struct {
	Fn   Handler  // 8 bytes
	Ref1 *T1      // 8 bytes
	Ref2 *T2      // 8 bytes
	Ref3 *T3      // 8 bytes
	Ref4 *T4      // 8 bytes
	Ref5 *T5      // 8 bytes
	Data [16]byte // 16 bytes
}

Ctx5 has 5 refs, 0 vals, 16 bytes data.

func Ctx5Of

func Ctx5Of[T1, T2, T3, T4, T5 any](sqe *ExtSQE) *Ctx5[T1, T2, T3, T4, T5]

Ctx5Of is a shorthand for ViewCtx5[T1,T2,T3,T4,T5](sqe).Vals0(). Use when you need 5 refs and 0 vals.

type Ctx5V1

type Ctx5V1[T1, T2, T3, T4, T5 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Ref2 *T2     // 8 bytes
	Ref3 *T3     // 8 bytes
	Ref4 *T4     // 8 bytes
	Ref5 *T5     // 8 bytes
	Val1 int64   // 8 bytes
	Data [8]byte // 8 bytes
}

Ctx5V1 has 5 refs, 1 val, 8 bytes data.

func Ctx5V1Of

func Ctx5V1Of[T1, T2, T3, T4, T5 any](sqe *ExtSQE) *Ctx5V1[T1, T2, T3, T4, T5]

Ctx5V1Of is a shorthand for ViewCtx5[T1,T2,T3,T4,T5](sqe).Vals1(). Use when you need 5 refs and 1 val.

type Ctx5V2

type Ctx5V2[T1, T2, T3, T4, T5 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Ref2 *T2     // 8 bytes
	Ref3 *T3     // 8 bytes
	Ref4 *T4     // 8 bytes
	Ref5 *T5     // 8 bytes
	Val1 int64   // 8 bytes
	Val2 int64   // 8 bytes
}

Ctx5V2 has 5 refs, 2 vals, 0 bytes data.

func Ctx5V2Of

func Ctx5V2Of[T1, T2, T3, T4, T5 any](sqe *ExtSQE) *Ctx5V2[T1, T2, T3, T4, T5]

Ctx5V2Of is a shorthand for ViewCtx5[T1,T2,T3,T4,T5](sqe).Vals2(). Use when you need 5 refs and 2 vals.

type Ctx6

type Ctx6[T1, T2, T3, T4, T5, T6 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Ref2 *T2     // 8 bytes
	Ref3 *T3     // 8 bytes
	Ref4 *T4     // 8 bytes
	Ref5 *T5     // 8 bytes
	Ref6 *T6     // 8 bytes
	Data [8]byte // 8 bytes
}

Ctx6 has 6 refs, 0 vals, 8 bytes data.

func Ctx6Of

func Ctx6Of[T1, T2, T3, T4, T5, T6 any](sqe *ExtSQE) *Ctx6[T1, T2, T3, T4, T5, T6]

Ctx6Of is a shorthand for ViewCtx6[T1,T2,T3,T4,T5,T6](sqe).Vals0(). Use when you need 6 refs and 0 vals.

type Ctx6V1

type Ctx6V1[T1, T2, T3, T4, T5, T6 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Ref2 *T2     // 8 bytes
	Ref3 *T3     // 8 bytes
	Ref4 *T4     // 8 bytes
	Ref5 *T5     // 8 bytes
	Ref6 *T6     // 8 bytes
	Val1 int64   // 8 bytes
}

Ctx6V1 has 6 refs, 1 val, 0 bytes data.

func Ctx6V1Of

func Ctx6V1Of[T1, T2, T3, T4, T5, T6 any](sqe *ExtSQE) *Ctx6V1[T1, T2, T3, T4, T5, T6]

Ctx6V1Of is a shorthand for ViewCtx6[T1,T2,T3,T4,T5,T6](sqe).Vals1(). Use when you need 6 refs and 1 val.

type Ctx7

type Ctx7[T1, T2, T3, T4, T5, T6, T7 any] struct {
	Fn   Handler // 8 bytes
	Ref1 *T1     // 8 bytes
	Ref2 *T2     // 8 bytes
	Ref3 *T3     // 8 bytes
	Ref4 *T4     // 8 bytes
	Ref5 *T5     // 8 bytes
	Ref6 *T6     // 8 bytes
	Ref7 *T7     // 8 bytes
}

Ctx7 has 7 refs, 0 vals, 0 bytes data.

func Ctx7Of

func Ctx7Of[T1, T2, T3, T4, T5, T6, T7 any](sqe *ExtSQE) *Ctx7[T1, T2, T3, T4, T5, T6, T7]

Ctx7Of is a shorthand for ViewCtx7[T1,T2,T3,T4,T5,T6,T7](sqe).Vals0(). Use when you need 7 refs and 0 vals.

type CtxRefs0

type CtxRefs0 struct {
	// contains filtered or unexported fields
}

CtxRefs0 is a view into ExtSQE with 0 refs. Use its methods to select the number of vals (0-7).

func ViewCtx

func ViewCtx(sqe *ExtSQE) CtxRefs0

ViewCtx creates a CtxRefs0 for accessing the UserData with 0 refs. Raw overlay; caller must ensure pointer-free refs or keep roots externally.

Example:

c := uring.ViewCtx(sqe).Vals3()  // 0 refs, 3 vals
c.Val1 = timestamp
c.Val2 = flags
c.Val3 = seqNum

func (CtxRefs0) Vals0

func (v CtxRefs0) Vals0() *Ctx0

Vals0 returns a Ctx0 pointer (0 refs, 0 vals, 56B data).

func (CtxRefs0) Vals1

func (v CtxRefs0) Vals1() *Ctx0V1

Vals1 returns a Ctx0V1 pointer (0 refs, 1 val, 48B data).

func (CtxRefs0) Vals2

func (v CtxRefs0) Vals2() *Ctx0V2

Vals2 returns a Ctx0V2 pointer (0 refs, 2 vals, 40B data).

func (CtxRefs0) Vals3

func (v CtxRefs0) Vals3() *Ctx0V3

Vals3 returns a Ctx0V3 pointer (0 refs, 3 vals, 32B data).

func (CtxRefs0) Vals4

func (v CtxRefs0) Vals4() *Ctx0V4

Vals4 returns a Ctx0V4 pointer (0 refs, 4 vals, 24B data).

func (CtxRefs0) Vals5

func (v CtxRefs0) Vals5() *Ctx0V5

Vals5 returns a Ctx0V5 pointer (0 refs, 5 vals, 16B data).

func (CtxRefs0) Vals6

func (v CtxRefs0) Vals6() *Ctx0V6

Vals6 returns a Ctx0V6 pointer (0 refs, 6 vals, 8B data).

func (CtxRefs0) Vals7

func (v CtxRefs0) Vals7() *Ctx0V7

Vals7 returns a Ctx0V7 pointer (0 refs, 7 vals, 0B data).

type CtxRefs1

type CtxRefs1[T1 any] struct {
	// contains filtered or unexported fields
}

CtxRefs1 is a view into ExtSQE with 1 ref. Use its methods to select the number of vals (0-6).

func ViewCtx1

func ViewCtx1[T1 any](sqe *ExtSQE) CtxRefs1[T1]

ViewCtx1 creates a CtxRefs1 for accessing the UserData with 1 typed ref. Raw overlay; caller must ensure pointer-free refs or keep roots externally.

Example:

c := uring.ViewCtx1[Connection](sqe).Vals1()  // 1 ref, 1 val
c.Val1 = time.Now().UnixNano()

func (CtxRefs1[T1]) Vals0

func (v CtxRefs1[T1]) Vals0() *Ctx1[T1]

Vals0 returns a Ctx1 pointer (1 ref, 0 vals, 48B data).

func (CtxRefs1[T1]) Vals1

func (v CtxRefs1[T1]) Vals1() *Ctx1V1[T1]

Vals1 returns a Ctx1V1 pointer (1 ref, 1 val, 40B data).

func (CtxRefs1[T1]) Vals2

func (v CtxRefs1[T1]) Vals2() *Ctx1V2[T1]

Vals2 returns a Ctx1V2 pointer (1 ref, 2 vals, 32B data).

func (CtxRefs1[T1]) Vals3

func (v CtxRefs1[T1]) Vals3() *Ctx1V3[T1]

Vals3 returns a Ctx1V3 pointer (1 ref, 3 vals, 24B data).

func (CtxRefs1[T1]) Vals4

func (v CtxRefs1[T1]) Vals4() *Ctx1V4[T1]

Vals4 returns a Ctx1V4 pointer (1 ref, 4 vals, 16B data).

func (CtxRefs1[T1]) Vals5

func (v CtxRefs1[T1]) Vals5() *Ctx1V5[T1]

Vals5 returns a Ctx1V5 pointer (1 ref, 5 vals, 8B data).

func (CtxRefs1[T1]) Vals6

func (v CtxRefs1[T1]) Vals6() *Ctx1V6[T1]

Vals6 returns a Ctx1V6 pointer (1 ref, 6 vals, 0B data).

type CtxRefs2

type CtxRefs2[T1, T2 any] struct {
	// contains filtered or unexported fields
}

CtxRefs2 is a view into ExtSQE with 2 refs. Use its methods to select the number of vals (0-5).

func ViewCtx2

func ViewCtx2[T1, T2 any](sqe *ExtSQE) CtxRefs2[T1, T2]

ViewCtx2 creates a CtxRefs2 for accessing the UserData with 2 typed refs. Raw overlay; caller must ensure pointer-free refs or keep roots externally.

Example:

c := uring.ViewCtx2[Connection, Buffer](sqe).Vals2()  // 2 refs, 2 vals
c.Val1 = offset
c.Val2 = length

func (CtxRefs2[T1, T2]) Vals0

func (v CtxRefs2[T1, T2]) Vals0() *Ctx2[T1, T2]

Vals0 returns a Ctx2 pointer (2 refs, 0 vals, 40B data).

func (CtxRefs2[T1, T2]) Vals1

func (v CtxRefs2[T1, T2]) Vals1() *Ctx2V1[T1, T2]

Vals1 returns a Ctx2V1 pointer (2 refs, 1 val, 32B data).

func (CtxRefs2[T1, T2]) Vals2

func (v CtxRefs2[T1, T2]) Vals2() *Ctx2V2[T1, T2]

Vals2 returns a Ctx2V2 pointer (2 refs, 2 vals, 24B data).

func (CtxRefs2[T1, T2]) Vals3

func (v CtxRefs2[T1, T2]) Vals3() *Ctx2V3[T1, T2]

Vals3 returns a Ctx2V3 pointer (2 refs, 3 vals, 16B data).

func (CtxRefs2[T1, T2]) Vals4

func (v CtxRefs2[T1, T2]) Vals4() *Ctx2V4[T1, T2]

Vals4 returns a Ctx2V4 pointer (2 refs, 4 vals, 8B data).

func (CtxRefs2[T1, T2]) Vals5

func (v CtxRefs2[T1, T2]) Vals5() *Ctx2V5[T1, T2]

Vals5 returns a Ctx2V5 pointer (2 refs, 5 vals, 0B data).

type CtxRefs3

type CtxRefs3[T1, T2, T3 any] struct {
	// contains filtered or unexported fields
}

CtxRefs3 is a view into ExtSQE with 3 refs. Use its methods to select the number of vals (0-4).

func ViewCtx3

func ViewCtx3[T1, T2, T3 any](sqe *ExtSQE) CtxRefs3[T1, T2, T3]

ViewCtx3 creates a CtxRefs3 for accessing the UserData with 3 typed refs. Raw overlay; caller must ensure pointer-free refs or keep roots externally.

func (CtxRefs3[T1, T2, T3]) Vals0

func (v CtxRefs3[T1, T2, T3]) Vals0() *Ctx3[T1, T2, T3]

Vals0 returns a Ctx3 pointer (3 refs, 0 vals, 32B data).

func (CtxRefs3[T1, T2, T3]) Vals1

func (v CtxRefs3[T1, T2, T3]) Vals1() *Ctx3V1[T1, T2, T3]

Vals1 returns a Ctx3V1 pointer (3 refs, 1 val, 24B data).

func (CtxRefs3[T1, T2, T3]) Vals2

func (v CtxRefs3[T1, T2, T3]) Vals2() *Ctx3V2[T1, T2, T3]

Vals2 returns a Ctx3V2 pointer (3 refs, 2 vals, 16B data).

func (CtxRefs3[T1, T2, T3]) Vals3

func (v CtxRefs3[T1, T2, T3]) Vals3() *Ctx3V3[T1, T2, T3]

Vals3 returns a Ctx3V3 pointer (3 refs, 3 vals, 8B data).

func (CtxRefs3[T1, T2, T3]) Vals4

func (v CtxRefs3[T1, T2, T3]) Vals4() *Ctx3V4[T1, T2, T3]

Vals4 returns a Ctx3V4 pointer (3 refs, 4 vals, 0B data).

type CtxRefs4

type CtxRefs4[T1, T2, T3, T4 any] struct {
	// contains filtered or unexported fields
}

CtxRefs4 is a view into ExtSQE with 4 refs. Use its methods to select the number of vals (0-3).

func ViewCtx4

func ViewCtx4[T1, T2, T3, T4 any](sqe *ExtSQE) CtxRefs4[T1, T2, T3, T4]

ViewCtx4 creates a CtxRefs4 for accessing the UserData with 4 typed refs. Raw overlay; caller must ensure pointer-free refs or keep roots externally.

func (CtxRefs4[T1, T2, T3, T4]) Vals0

func (v CtxRefs4[T1, T2, T3, T4]) Vals0() *Ctx4[T1, T2, T3, T4]

Vals0 returns a Ctx4 pointer (4 refs, 0 vals, 24B data).

func (CtxRefs4[T1, T2, T3, T4]) Vals1

func (v CtxRefs4[T1, T2, T3, T4]) Vals1() *Ctx4V1[T1, T2, T3, T4]

Vals1 returns a Ctx4V1 pointer (4 refs, 1 val, 16B data).

func (CtxRefs4[T1, T2, T3, T4]) Vals2

func (v CtxRefs4[T1, T2, T3, T4]) Vals2() *Ctx4V2[T1, T2, T3, T4]

Vals2 returns a Ctx4V2 pointer (4 refs, 2 vals, 8B data).

func (CtxRefs4[T1, T2, T3, T4]) Vals3

func (v CtxRefs4[T1, T2, T3, T4]) Vals3() *Ctx4V3[T1, T2, T3, T4]

Vals3 returns a Ctx4V3 pointer (4 refs, 3 vals, 0B data).

type CtxRefs5

type CtxRefs5[T1, T2, T3, T4, T5 any] struct {
	// contains filtered or unexported fields
}

CtxRefs5 is a view into ExtSQE with 5 refs. Use its methods to select the number of vals (0-2).

func ViewCtx5

func ViewCtx5[T1, T2, T3, T4, T5 any](sqe *ExtSQE) CtxRefs5[T1, T2, T3, T4, T5]

ViewCtx5 creates a CtxRefs5 for accessing the UserData with 5 typed refs. Raw overlay; caller must ensure pointer-free refs or keep roots externally.

func (CtxRefs5[T1, T2, T3, T4, T5]) Vals0

func (v CtxRefs5[T1, T2, T3, T4, T5]) Vals0() *Ctx5[T1, T2, T3, T4, T5]

Vals0 returns a Ctx5 pointer (5 refs, 0 vals, 16B data).

func (CtxRefs5[T1, T2, T3, T4, T5]) Vals1

func (v CtxRefs5[T1, T2, T3, T4, T5]) Vals1() *Ctx5V1[T1, T2, T3, T4, T5]

Vals1 returns a Ctx5V1 pointer (5 refs, 1 val, 8B data).

func (CtxRefs5[T1, T2, T3, T4, T5]) Vals2

func (v CtxRefs5[T1, T2, T3, T4, T5]) Vals2() *Ctx5V2[T1, T2, T3, T4, T5]

Vals2 returns a Ctx5V2 pointer (5 refs, 2 vals, 0B data).

type CtxRefs6

type CtxRefs6[T1, T2, T3, T4, T5, T6 any] struct {
	// contains filtered or unexported fields
}

CtxRefs6 is a view into ExtSQE with 6 refs. Use its methods to select the number of vals (0-1).

func ViewCtx6

func ViewCtx6[T1, T2, T3, T4, T5, T6 any](sqe *ExtSQE) CtxRefs6[T1, T2, T3, T4, T5, T6]

ViewCtx6 creates a CtxRefs6 for accessing the UserData with 6 typed refs. Raw overlay; caller must ensure pointer-free refs or keep roots externally.

func (CtxRefs6[T1, T2, T3, T4, T5, T6]) Vals0

func (v CtxRefs6[T1, T2, T3, T4, T5, T6]) Vals0() *Ctx6[T1, T2, T3, T4, T5, T6]

Vals0 returns a Ctx6 pointer (6 refs, 0 vals, 8B data).

func (CtxRefs6[T1, T2, T3, T4, T5, T6]) Vals1

func (v CtxRefs6[T1, T2, T3, T4, T5, T6]) Vals1() *Ctx6V1[T1, T2, T3, T4, T5, T6]

Vals1 returns a Ctx6V1 pointer (6 refs, 1 val, 0B data).

type CtxRefs7

type CtxRefs7[T1, T2, T3, T4, T5, T6, T7 any] struct {
	// contains filtered or unexported fields
}

CtxRefs7 is a view into ExtSQE with 7 refs. The only option is Vals0 (no vals available).

func ViewCtx7

func ViewCtx7[T1, T2, T3, T4, T5, T6, T7 any](sqe *ExtSQE) CtxRefs7[T1, T2, T3, T4, T5, T6, T7]

ViewCtx7 creates a CtxRefs7 for accessing the UserData with 7 typed refs. Raw overlay; caller must ensure pointer-free refs or keep roots externally.

func (CtxRefs7[T1, T2, T3, T4, T5, T6, T7]) Vals0

func (v CtxRefs7[T1, T2, T3, T4, T5, T6, T7]) Vals0() *Ctx7[T1, T2, T3, T4, T5, T6, T7]

Vals0 returns a Ctx7 pointer (7 refs, 0 vals, 0B data).
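The byte counts quoted across the CtxRefs1 through CtxRefs7 families follow a single fixed budget: each ref and each val costs 8 bytes, and whatever remains is raw data. A small self-contained sketch reproduces the documented table (the 56-byte region total is inferred from the documented sizes, not stated explicitly by the package):

```go
package main

import "fmt"

func main() {
	// Inferred from the documented sizes: refs*8 + vals*8 + dataBytes = 56.
	// Note the zero-val variants are named CtxN in the package, not CtxNV0.
	const region = 56
	for refs := 1; refs <= 7; refs++ {
		for vals := 0; vals <= 7-refs; vals++ {
			data := region - 8*refs - 8*vals
			fmt.Printf("Ctx%dV%d: %d refs, %d vals, %dB data\n",
				refs, vals, refs, vals, data)
		}
	}
}
```

Picking a ValsN variant is therefore a trade within this budget: more typed refs or vals leave fewer free bytes of data, down to 0B when refs+vals reaches 7.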

type DirectCQE

type DirectCQE struct {
	Res      int32  // Completion result (bytes transferred or negative errno)
	Flags    uint32 // CQE flags (IORING_CQE_F_*)
	Op       uint8  // IORING_OP_* opcode
	SQEFlags uint8  // SQE flags (IOSQE_*)
	BufGroup uint16 // Buffer group index
	FD       iofd.FD
}

DirectCQE is a compact copied CQE for Direct mode operations. It stores the completion result and unpacked context fields without mode checking or pointer indirection.

Use WaitDirect when every submitted operation uses Direct mode (PackDirect). That path skips the generic Wait/CQEView mode dispatch per CQE.

Layout: 16 bytes on supported platforms.

func (*DirectCQE) BufID

func (c *DirectCQE) BufID() uint16

BufID returns the buffer ID from CQE flags. Only valid when HasBuffer() returns true.

func (*DirectCQE) Err

func (c *DirectCQE) Err() error

Err decodes c.Res as a completion error.

func (*DirectCQE) HasBuffer

func (c *DirectCQE) HasBuffer() bool

HasBuffer reports whether a buffer ID is available.

func (*DirectCQE) HasMore

func (c *DirectCQE) HasMore() bool

HasMore reports whether more completions are coming (multishot).

func (*DirectCQE) IsNotification

func (c *DirectCQE) IsNotification() bool

IsNotification reports whether this is a zero-copy notification CQE.

func (*DirectCQE) IsSuccess

func (c *DirectCQE) IsSuccess() bool

IsSuccess reports whether the operation completed successfully.
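HasBuffer, HasMore, and BufID read standard IORING_CQE_F_* bits out of Flags. A self-contained sketch of that bit layout, assuming the upstream io_uring uapi values (IORING_CQE_F_BUFFER = 1<<0, IORING_CQE_F_MORE = 1<<1, buffer ID in the top 16 bits via IORING_CQE_BUFFER_SHIFT = 16):

```go
package main

import "fmt"

// CQE flag constants from the io_uring uapi headers; assumed to match
// what the package's accessors check.
const (
	cqeFBuffer  = 1 << 0 // IORING_CQE_F_BUFFER: buffer ID present
	cqeFMore    = 1 << 1 // IORING_CQE_F_MORE: multishot, more CQEs coming
	bufferShift = 16     // IORING_CQE_BUFFER_SHIFT
)

func main() {
	// A flags word carrying buffer ID 7 with both bits set.
	flags := uint32(7<<bufferShift) | cqeFBuffer | cqeFMore

	fmt.Println(flags&cqeFBuffer != 0) // buffer ID valid, like HasBuffer
	fmt.Println(flags&cqeFMore != 0)   // multishot continues, like HasMore
	fmt.Println(flags >> bufferShift)  // buffer ID, like BufID
}
```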

type EpollEvent

type EpollEvent struct {
	Events uint32

	Fd  int32
	Pad int32
	// contains filtered or unexported fields
}

EpollEvent represents an epoll event. Layout matches struct epoll_event in Linux.

type ExtCQE