Databend

Deliver log data to the Databend database

status: beta | delivery: at-least-once | acknowledgements: yes | egress: batch | state: stateless
input: logs

Requirements

Databend version >= 1.2.216 is required.

Configuration

Example configurations

Common (JSON):

{
  "sinks": {
    "my_sink_id": {
      "type": "databend",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "endpoint": "databend://localhost:8000/default?sslmode=disable",
      "table": "mytable"
    }
  }
}
Common (TOML):

[sinks.my_sink_id]
type = "databend"
inputs = [ "my-source-or-transform-id" ]
endpoint = "databend://localhost:8000/default?sslmode=disable"
table = "mytable"
Common (YAML):

sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    endpoint: databend://localhost:8000/default?sslmode=disable
    table: mytable
Advanced (JSON):

{
  "sinks": {
    "my_sink_id": {
      "type": "databend",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "compression": "none",
      "database": "mydatabase",
      "endpoint": "databend://localhost:8000/default?sslmode=disable",
      "missing_field_as": "NULL",
      "table": "mytable"
    }
  }
}
Advanced (TOML):

[sinks.my_sink_id]
type = "databend"
inputs = [ "my-source-or-transform-id" ]
compression = "none"
database = "mydatabase"
endpoint = "databend://localhost:8000/default?sslmode=disable"
missing_field_as = "NULL"
table = "mytable"
Advanced (YAML):

sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    compression: none
    database: mydatabase
    endpoint: databend://localhost:8000/default?sslmode=disable
    missing_field_as: "NULL"
    table: mytable

acknowledgements

optional object

Controls how acknowledgements are handled for this sink.

See End-to-end Acknowledgements for more information on how event acknowledgement is handled.

acknowledgements.enabled

optional bool
Controls whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source that supports end-to-end acknowledgements that is connected to that sink waits for events to be acknowledged by all connected sinks before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
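
As a sketch, enabling end-to-end acknowledgements for this sink builds on the common example above:

```yaml
sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    endpoint: databend://localhost:8000/default?sslmode=disable
    table: mytable
    acknowledgements:
      enabled: true
```

Because the sink-level setting takes precedence, this enables acknowledgements for this sink even if the global acknowledgements configuration disables them.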

auth

optional object
The username and password to authenticate with. Overrides any username and password set in the DSN.

auth.auth

required object
The AWS authentication configuration.
Relevant when: strategy = "aws"
auth.auth.access_key_id
required string literal
The AWS access key ID.
Examples
"AKIAIOSFODNN7EXAMPLE"
auth.auth.assume_role
required string literal
The ARN of an IAM role to assume.
Examples
"arn:aws:iam::123456789098:role/my_role"
auth.auth.credentials_file
required string literal
Path to the credentials file.
Examples
"/my/aws/credentials"
auth.auth.external_id
optional string literal
An optional unique external ID to use in conjunction with the role to assume.
Examples
"randomEXAMPLEidString"
auth.auth.imds
optional object
Configuration for authenticating with AWS through IMDS.
Connect timeout for IMDS.
default: 1 (seconds)
Number of IMDS retries for fetching tokens and metadata.
default: 4
Read timeout for IMDS.
default: 1 (seconds)

Timeout for successfully loading any credentials, in seconds.

Relevant when the default credentials chain or assume_role is used.

Examples
30
auth.auth.profile
optional string literal

The credentials profile to use.

Used to select AWS credentials from a provided credentials file.

Examples
"develop"
default: default
auth.auth.region
optional string literal

The AWS region to send STS requests to.

If not set, this defaults to the configured region for the service itself.

Examples
"us-west-2"
auth.auth.secret_access_key
required string literal
The AWS secret access key.
Examples
"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
auth.auth.session_name
optional string literal

The optional RoleSessionName is a unique session identifier for your assumed role.

Should be unique per principal or reason. If not set, the session name is autogenerated, for example assume-role-provider-1736428351340.

Examples
"vector-indexer-role"
auth.auth.session_token
optional string literal
The AWS session token. See AWS temporary credentials
Examples
"AQoDYXdz...AQoDYXdz..."

auth.password

required string literal
The basic authentication password.
Relevant when: strategy = "basic"
Examples
"${PASSWORD}"
"password"

auth.service

required string literal
The AWS service name to use for signing.
Relevant when: strategy = "aws"

auth.strategy

required string literal enum
The authentication strategy to use.
Enum options
aws: AWS authentication.
basic: Basic authentication. The username and password are concatenated and encoded using base64.
bearer: Bearer authentication. The bearer token value (OAuth2, JWT, etc.) is passed as-is.
custom: A custom Authorization header value, inserted into the headers as Authorization: <value>.
Examples
"aws"
"basic"
"bearer"
"custom"

auth.token

required string literal
The bearer authentication token.
Relevant when: strategy = "bearer"

auth.user

required string literal
The basic authentication username.
Relevant when: strategy = "basic"
Examples
"${USERNAME}"
"username"

auth.value

required string literal
The custom string value for the Authorization header.
Relevant when: strategy = "custom"
Examples
"${AUTH_HEADER_VALUE}"
"CUSTOM_PREFIX ${TOKEN}"
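
Putting the options above together, a basic-auth configuration might look like the following sketch; the environment variables are placeholders:

```yaml
sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    endpoint: databend://localhost:8000/default?sslmode=disable
    table: mytable
    auth:
      strategy: basic
      user: "${USERNAME}"
      password: "${PASSWORD}"
```

These credentials override any username and password embedded in the endpoint DSN.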

batch

optional object
Event batching behavior.

batch.max_bytes

optional uint

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized or compressed.

default: 1e+07 (bytes)

batch.max_events

optional uint
The maximum size of a batch before it is flushed.

batch.timeout_secs

optional float
The maximum age of a batch before it is flushed.
default: 1 (seconds)
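
To illustrate, a batch tuned to flush after 1000 events or 5 seconds, whichever comes first (the values are illustrative, not recommendations):

```yaml
sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    endpoint: databend://localhost:8000/default?sslmode=disable
    table: mytable
    batch:
      max_events: 1000
      timeout_secs: 5
```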

buffer

optional object

Configures the buffering behavior for this sink.

More information about the individual buffer types, and buffer behavior, can be found in the Buffering Model section.

buffer.max_events

optional uint
The maximum number of events allowed in the buffer.
Relevant when: type = "memory"
default: 500

buffer.max_size

required uint

The maximum allowed amount of allocated memory the buffer can hold.

If type = "disk", this must be at least ~256 megabytes (268435488 bytes).

buffer.type

optional string literal enum
The type of buffer to use.
Enum options
disk: Events are buffered on disk. This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes. Data is synchronized to disk every 500ms.

memory: Events are buffered in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes.

default: memory

buffer.when_full

optional string literal enum
Event handling behavior when a buffer is full.
Enum options
block: Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge.

drop_newest: Drop the event instead of waiting for free space in the buffer. The event is intentionally dropped. This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events.

default: block
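
A sketch of a durable disk buffer that blocks when full; the 1 GiB size is illustrative (disk buffers must be at least 268435488 bytes):

```yaml
sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    endpoint: databend://localhost:8000/default?sslmode=disable
    table: mytable
    buffer:
      type: disk
      max_size: 1073741824
      when_full: block
```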

compression

optional string literal enum
Compression configuration.
Enum options
gzip: Gzip compression.
none: No compression.
default: none

database

optional string literal
The database that contains the table that data is inserted into. Overrides any database set in the DSN.
Examples
"mydatabase"

encoding

optional object
Configures how events are encoded into raw bytes.

encoding.codec

optional string literal enum
The codec to use for encoding events.
Enum options
csv: Encodes an event as a CSV message. This codec must be configured with fields to encode.
json: Encodes an event as JSON.
default: json

encoding.csv

required object
Options for the CSV serializer.
Relevant when: codec = "csv"
Sets the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to 8192 bytes (8 KiB).
default: 8192
encoding.csv.delimiter
optional ascii_char
The field delimiter to use when writing CSV.
default: ,

encoding.csv.double_quote
optional bool

Enables double quote escapes.

This is enabled by default, but you can disable it. When disabled, quotes in field data are escaped instead of doubled.

default: true
encoding.csv.escape
optional ascii_char

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, double_quote needs to be disabled as well; otherwise, this setting is ignored.

default: "
encoding.csv.fields
required [string]

Configures the fields that are encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output for that field is an empty string.

Values of type Array, Object, and Regex are not supported, and the output for any of these types is an empty string.

encoding.csv.quote
optional ascii_char
The quote character to use when writing CSV.
default: "
encoding.csv.quote_style
optional string literal enum
The quoting style to use when writing CSV data.
Enum options
always: Always puts quotes around every field.
necessary: Puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter, or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
never: Never writes quotes, even if it produces invalid CSV data.
non_numeric: Puts quotes around all fields that are non-numeric. This means that when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
default: necessary
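
Putting the CSV options together, an illustrative encoding block; the field names timestamp, host, and message are hypothetical and must exist in your events:

```yaml
sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    endpoint: databend://localhost:8000/default?sslmode=disable
    table: mytable
    encoding:
      codec: csv
      csv:
        fields:
          - timestamp
          - host
          - message
        quote_style: necessary
```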

encoding.except_fields

optional [string]
List of fields that are excluded from the encoded event.

encoding.json

optional object
Options for the JSON serializer.
Relevant when: codec = "json"
Whether to use pretty JSON formatting.
default: false

encoding.metric_tag_values

optional string literal enum

Controls how metric tag values are encoded.

When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.

Relevant when: codec = "json"
Enum options
full: All tags are exposed as arrays of either string or null values.
single: Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
default: single

encoding.only_fields

optional [string]
List of fields that are included in the encoded event.

encoding.timestamp_format

optional string literal enum
Format used for timestamp fields.
Enum options
rfc3339: Represent the timestamp as an RFC 3339 timestamp.
unix: Represent the timestamp as a Unix timestamp.
unix_float: Represent the timestamp as a Unix timestamp in floating point.
unix_ms: Represent the timestamp as a Unix timestamp in milliseconds.
unix_ns: Represent the timestamp as a Unix timestamp in nanoseconds.
unix_us: Represent the timestamp as a Unix timestamp in microseconds.
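
As a sketch, a JSON encoding that keeps only two hypothetical fields and writes timestamps as Unix milliseconds:

```yaml
sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    endpoint: databend://localhost:8000/default?sslmode=disable
    table: mytable
    encoding:
      codec: json
      only_fields:
        - timestamp
        - message
      timestamp_format: unix_ms
```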

endpoint

required string literal
The DSN of the Databend server.
Examples
"databend://localhost:8000/default?sslmode=disable"

healthcheck

optional object
Healthcheck configuration.

healthcheck.enabled

optional bool
Whether or not to check the health of the sink when Vector starts up.
default: true
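
For instance, to skip the startup health check (e.g. when the Databend server may come up after Vector does):

```yaml
sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    endpoint: databend://localhost:8000/default?sslmode=disable
    table: mytable
    healthcheck:
      enabled: false
```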

inputs

required [string]

A list of upstream source or transform IDs.

Wildcards (*) are supported.

See configuration for more info.

Array string literal
Examples
[
  "my-source-or-transform-id",
  "prefix-*"
]

missing_field_as

optional string literal enum
Defines how missing fields are handled for NDJSON. Refer to https://docs.databend.com/sql/sql-reference/file-format-options#null_field_as for details.
Enum options
ERROR: Generates an error if a missing field is encountered.
FIELD_DEFAULT: Uses the default value of the field for missing fields.
NULL: Interprets missing fields as NULL values. An error is generated for non-nullable fields.
TYPE_DEFAULT: Uses the default value of the field's data type for missing fields.
default: NULL
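
For example, to fail loudly when an event lacks a column defined in the target table instead of writing NULL:

```yaml
sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    endpoint: databend://localhost:8000/default?sslmode=disable
    table: mytable
    missing_field_as: ERROR
```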

proxy

optional object

Proxy configuration.

Configure this to proxy traffic through an HTTP(S) proxy when making external requests.

In line with common proxy configuration conventions, you can set different proxies based on the type of traffic being proxied. You can also set specific hosts that should not be proxied.

proxy.enabled

optional bool
Enables proxying support.
default: true

proxy.http

optional string literal

Proxy endpoint to use when proxying HTTP traffic.

Must be a valid URI string.

Examples
"http://foo.bar:3128"

proxy.https

optional string literal

Proxy endpoint to use when proxying HTTPS traffic.

Must be a valid URI string.

Examples
"http://foo.bar:3128"

proxy.no_proxy

optional [string]

A list of hosts to avoid proxying.

Multiple patterns are allowed:

Domain names: example.com matches requests to example.com
Wildcard domains: .example.com matches requests to example.com and its subdomains
IP addresses: 127.0.0.1 matches requests to 127.0.0.1
CIDR blocks: 192.168.0.0/16 matches requests to any IP address in this range
Splat: * matches all hosts
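
An illustrative proxy block; the proxy host and the no_proxy entries are placeholders:

```yaml
sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    endpoint: databend://localhost:8000/default?sslmode=disable
    table: mytable
    proxy:
      enabled: true
      https: http://foo.bar:3128
      no_proxy:
        - localhost
        - .internal.example.com
```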

request

optional object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, and retry behavior.

Note that the retry backoff policy follows the Fibonacci sequence.

request.adaptive_concurrency

optional object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note: The new limit is rounded down after applying this ratio.

default: 0.9

The weighting of new measurements compared to older measurements.

Valid values are greater than 0 and less than 1.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.

default: 0.4

The initial concurrency limit to use. If not specified, the initial limit is 1 (no concurrency).

Datadog recommends setting this value to your service’s average limit if you’re seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.

default: 1

The maximum concurrency limit.

The adaptive request concurrency limit does not go above this bound. This is put in place as a safeguard.

default: 200

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to 0, and reasonable values range from 1.0 to 3.0.

When calculating the past RTT average, a secondary “deviation” value is also computed that indicates how variable those values are. That deviation is used when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.

default: 2.5

request.concurrency

optional string literal enum uint

Configuration for outbound request concurrency.

This can be set either to one of the below enum values or to a positive integer, which denotes a fixed concurrency limit.

Enum options
adaptive: Concurrency is managed by Vector's Adaptive Request Concurrency feature.
none: A fixed concurrency of 1. Only one request can be outstanding at any given time.
default: adaptive
request.rate_limit_duration_secs

optional uint
The time window used for the rate_limit_num option.
default: 1 (seconds)

request.rate_limit_num

optional uint
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9.223372036854776e+18 (requests)
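
As a sketch, the request options above can cap outbound traffic at a fixed concurrency of 10 and 100 requests per second (the values are illustrative):

```yaml
sinks:
  my_sink_id:
    type: databend
    inputs:
      - my-source-or-transform-id
    endpoint: databend://localhost:8000/default?sslmode=disable
    table: mytable
    request:
      concurrency: 10
      rate_limit_duration_secs: 1
      rate_limit_num: 100
```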