bencode
A robust, standards-compliant implementation of Bencode. It makes efficient use of the new std.Io reader and writer interfaces ("Writergate").
Add to your project with Zig package manager
Add it to your build.zig.zon using:
$ zig fetch --save https://codeberg.org/cancername/bencode
Make it available to your module in build.zig:
mod.addImport("bencode", b.dependency("bencode", .{
    .target = target,
    .optimize = optimize,
}).module("bencode"));
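In context, the wiring in build.zig might look like the following sketch; the names exe_mod, target, and optimize are illustrative and depend on how your build script is structured:

```zig
// build.zig (sketch): fetch the dependency once, then expose its module
// to your own module under the import name "bencode".
const bencode_dep = b.dependency("bencode", .{
    .target = target,
    .optimize = optimize,
});
exe_mod.addImport("bencode", bencode_dep.module("bencode"));
```

After this, @import("bencode") resolves inside the module the import was added to.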
Examples
Dump first file as JSON
The fundamental data type of this library is Value. To parse a whole value in one go, use Value.decode:
const std = @import("std");
const bencode = @import("bencode");

pub fn main() !void {
    var gpa_state = std.heap.DebugAllocator(.{}){};
    defer _ = gpa_state.deinit();
    const gpa = gpa_state.allocator();

    // To decode all bencode streams supported by this library, the buffer must be at least 21 bytes.
    // You can also use std.Io.Reader.fixed(slice) to decode right from memory. In this case, you
    // can also set `allocate_strings` to `false`, as long as you keep the memory alive.
    var buf: [8 << 10]u8 = undefined;
    var reader = std.fs.File.stdin().reader(&buf);

    var value = try bencode.Value.decode(&reader.interface, .{ .gpa = gpa });
    defer value.deinit(gpa); // Unlike with std.json, granular memory management is a thing.

    // Convenient accessor functions returning errors:
    {
        const info = try value.get("info");
        const files = try info.get("files");
        const first_file = try files.at(0);
        std.debug.print("{f}\n", .{std.json.fmt(first_file, .{ .whitespace = .indent_2 })});
    }
}
Build the examples:
$ zig build examples
Run it:
$ ./zig-out/bin/dump_first_file <steamboat-willie-mickey_archive.torrent
{
"crc32": "5a4fc499",
"length": 1882130429,
"md5": "6455314dd4c0878ecaba47eaf996b5c2",
"mtime": "1672612537",
"path": [
"01 - Steamboat Willie.mkv"
],
"sha1": "96ce88bac65ac5fe7aa5239fb508ff79cd24141b"
}
Configuration
This library can be configured in various ways.
The default limits are small to avoid running out of memory; adjust them in decoder.Config if you want to parse large files:
- max_nesting_level
- max_string_len
- max_list_elements
- max_dict_elements
Alternatively, you can set them to the maximum and use a limiting allocator.
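Raising the limits might look like the sketch below. This assumes the fields listed above can be set on the options struct passed to Value.decode; check decoder.Config for the exact field names and where they actually live:

```zig
// Hypothetical sketch: the limit values and their placement in the
// decode options are assumptions, not the library's documented API.
var value = try bencode.Value.decode(&reader.interface, .{
    .gpa = gpa,
    .max_string_len = 64 << 20, // e.g. allow strings up to 64 MiB
    .max_list_elements = 1 << 20,
});
```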
Options that control spec compliance:
- unsorted_dict_behavior, the default is to sort and ignore
- duplicate_key_behavior, the default is to error
- noncanonical_integer_behavior, the default is to allow
The encoder always gives canonical, compliant output.
Options that control logging:
- log_unsorted_dict
- log_duplicate_key
- log_noncanonical_integer
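A strict-mode sketch, assuming these options are fields of the same config passed to Value.decode; the option names come from the lists above, but the enum tag (.@"error") and the struct layout are assumptions to verify against decoder.Config:

```zig
// Hypothetical strict configuration: reject non-canonical input
// instead of repairing or tolerating it.
var value = try bencode.Value.decode(&reader.interface, .{
    .gpa = gpa,
    .unsorted_dict_behavior = .@"error",
    .noncanonical_integer_behavior = .@"error",
});
```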
Future work
- A remaining canonicalization issue: some non-canonical encodings are still erroneously accepted.
- A builder-style API.