Honestly, I've been running a little low on passion for side-projects and true deep dives lately. Lots of reasons, most of which you can probably guess if you also live in the US or work in tech.
But I'm still pretty obsessed with graph layout. Basically: I love d3-force and force-directed layout, but it's used everywhere and isn't the right choice for everything. And graph layout is catnip to computer scientists, so there's a ton of cool research being written about alternative algorithms.

I think this graph could be better. That's a real load-bearing could, though. Mike's work on d3 is big for a reason: he focused on and solved a lot of really hard problems that others were scared of.

But on the New York subway, I see its maps and I am transfixed. Hand-drawn charts look different from what a computer can generate. Old charts are just amazing.
There are a bunch of things about beautiful charts that I appreciate:
So I've been reading some papers.
I liked A Walk On The Wild Side: a Shape-First Methodology for Orthogonal Drawings. Some of the takeaways:
But so far my favorite is HOLA: Human-Like Orthogonal Network Layout. The results look spectacular. The gist is:
Sidenote: when reading through the ogdf documentation I saw that they use earcut, made by Mapbox! Cool to see that foundational work like that is so widely adopted, and liberal open source licensing makes it possible.
HOLA was written in 2015, so I went looking for more recent work, and found Praline which has a Java implementation by the authors.
And then A Simple Pipeline for Orthogonal Graph Drawing, which cites Praline and HOLA as examples and has really nice output. I was hopeful that the 'simple' in that paper meant that it was simple to implement, which… not sure. There's a Scala implementation.
It's amazing to me that some of this really cutting-edge work sits in repositories on GitHub with 6 stars (one of them mine) when it represents so much real thought and effort. Of course the real product is the paper, and the real reward is PhDs, tenure, respect of their peers, and citations. But still!
Then the same authors as "A Simple Pipeline" - Tim Hegemann and Alexander Wolff - wrote Storylines with a Protagonist, which has an online demo that implements a lot of the nice parts of subway-map drawing!
I'm having fun following along with fancy graph-drawing algorithms! Some questions that I am looking to answer next:
Postgres notes:

- TEXT for all text stuff, and citext for case-insensitive text. There is no advantage to char or varchar; avoid them.
- bytea is good.
- A created_at column that defaults to NOW(). You'll need it eventually.
- servicename_base58-check-encoded-random-bytes is good.

Today I've been implementing a new feature, and in keeping with the general trends, I've been trying to implement new code in Effect. This is mostly low-drama, but I needed to start connecting Effect to tRPC.
A theme is that Effect's documentation is pretty weak on details about integrations, partly because Effect has its own systems that you could use instead. In this case, @effect/rpc, but that module is still on a 0.x release and has no documentation at all on the Effect website, so for now, people are going to use tRPC instead.
Platform layers like tRPC and React Router are a big part of why we adopted Result types in the first place. We've been using neverthrow for a long time and are now switching to Effect. The common ingredient that matters is that with Result types, you get a type like Result<ErrorType, SuccessType>, so you know what errors a function could produce. In TypeScript with thrown exceptions, you don't know what kinds of errors can happen. Then, ideally, we're able to map those errors into platform-specific errors: tRPC wants instances of TRPCError, React Router wants Response objects, Fastify wants its own error type, etc.
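To make that concrete, here's a minimal sketch of the idea - not our actual code; the error classes and the HTTP mapping below are hypothetical:

```typescript
// Minimal sketch of the Result idea: the error channel is in the type,
// so a platform adapter can exhaustively map domain errors to its own format.
// Hypothetical error classes, not our real ones.
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

class NotFoundError {
  readonly code = "NOT_FOUND";
}
class ForbiddenError {
  readonly code = "FORBIDDEN";
}
type AppError = NotFoundError | ForbiddenError;

// Because AppError is a closed union, TypeScript checks this switch
// is exhaustive - something thrown exceptions can't give you.
function toHttpStatus(error: AppError): number {
  switch (error.code) {
    case "NOT_FOUND":
      return 404;
    case "FORBIDDEN":
      return 403;
  }
}

function findUser(id: string): Result<{ id: string }, AppError> {
  return id === "u1"
    ? { ok: true, value: { id } }
    : { ok: false, error: new NotFoundError() };
}
```

The point is that a caller of findUser can see every possible error in the signature and translate each one, which is exactly what the tRPC and React Router adapters below do with real error types.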
So previously we had this code to interoperate between neverthrow and tRPC:
export function mapErrorToTRPC(error: MappableErrors) {
  if (error instanceof UnexpectedError) {
    Sentry.captureException(error);
  }
  if (error instanceof RedirectError) {
    // This should never happen, tRPC does not support redirects.
    return new TRPCError({ code: "INTERNAL_SERVER_ERROR", cause: error });
  }
  if (error instanceof MethodNotAllowedError) {
    // tRPC uses a nonstandard phrase for this error
    return new TRPCError({ code: "METHOD_NOT_SUPPORTED", cause: error });
  }
  return new TRPCError({ code: error.code, cause: error });
}

/**
 * Unwrap a result, throwing a tRPC-specific error
 */
export function unwrapForTrpc<T>(
  result: Result<T, MappableErrors> | ResultAsync<T, MappableErrors>
) {
  return result.match(
    (t) => t,
    (error) => {
      throw mapErrorToTRPC(error);
    }
  );
}

So: we take a result; if there's an error, throw it; if it's successful, just return the value. This makes tRPC happy. This function is then injected with tRPC middleware so that query and mutation files get it as part of the context object instead of having to import it directly.
So today was solving that puzzle for Effect, which looks like this:
export async function effectTrpc<A, E>(effect: Effect.Effect<A, E>) {
  const exit = await Effect.runPromiseExit(effect);
  return Exit.match(exit, {
    onFailure(cause) {
      return Option.match(Cause.failureOption(cause), {
        onNone() {
          throw new TRPCError({
            code: "INTERNAL_SERVER_ERROR",
            message: "Unexpected error type",
          });
        },
        onSome(f) {
          throw mapErrorToTRPC(f as MappableErrors);
        },
      });
    },
    onSuccess(v) {
      return v;
    },
  });
}

It's a bit more involved than the neverthrow example because of how Exit works in Effect: when an Effect fails here, we get an Exit which is a Failure, which has a Cause inside of it, which is what we want. Some failures don't have causes, so we have to handle that case too.
Is it beautiful? No. But I suspect that:
The main alternative path here is some kind of higher level function that would let us define tRPC routes with Effects directly. This would be nifty but I fear that level of abstraction - one can definitely get too fancy with this kind of thing, and we're trying to be mostly incremental.
Chris Krycho's Friendly Little Wrapper Types article reminded me of my own One Way To Represent Things article from 2021. That idea has been bouncing around my head ever since.
Basically "rich types." There are lots of places where an email or a color is represented as a string, and a duration is represented as an integer, but what if they weren't? It would help with type-safety and open up the opportunity for more convenient methods: color.toRGB() or toRGB(color) without having to re-parse everything every time. How do we get to this reality?
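A tiny sketch of what a rich type could look like - a hypothetical API, just to illustrate the parse-once idea:

```typescript
// Sketch of a "rich" type: parse the string once, keep the structure around.
// The _brand field and these helper names are hypothetical.
type Color = { readonly _brand: "Color"; r: number; g: number; b: number };

function parseColor(input: string): Color {
  const m = /^#([0-9a-f]{2})([0-9a-f]{2})([0-9a-f]{2})$/i.exec(input);
  if (!m) throw new Error(`not a hex color: ${input}`);
  return {
    _brand: "Color",
    r: parseInt(m[1], 16),
    g: parseInt(m[2], 16),
    b: parseInt(m[3], 16),
  };
}

// Because a Color is already parsed, helpers never re-parse the string -
// and you can't accidentally pass a plain string in here.
function toRGB(c: Color): string {
  return `rgb(${c.r}, ${c.g}, ${c.b})`;
}
```

Compare that with passing `"#ff00ff"` around as a string, where every helper has to re-validate and re-parse it, and nothing stops you from passing an email where a color belongs.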
Reflecting on both pieces makes me wonder about parsing as the lever for adoption. Chris's piece has an example of an email type that you'd construct and that would give you some structured errors if it wasn't correctly formatted.
The rise of Zod and other parsers that give you both typesafety and runtime assurances is one of the biggest changes in JavaScript/TypeScript in years. I remember the bad old days when we'd have to manually parse inputs, use jsonschema for everything, and when a lot of applications were just naïve about the shape of their input data. Now, basically everything uses Zod or something like it, and it's a huge lever: Zod is what makes tRPC possible with so little boilerplate.
So the idea is: what if parser libraries produced branded types by default? Zod already supports basic typesystem-only branded types, and so does Effect. Maybe there's an opportunity for a higher-level library built on one of these that brands types like UUIDs, emails, colors, IP addresses, passwords, usernames, and more by default?
This could enable something like
import p from "parser";

const type = p.object({
  color: p.color(),
  uuid: p.uuid(),
  email: p.email(),
});

const result = type.parse({
  color: '#ff00ff',
  uuid: '5426a36a-bfd4-46ff-9066-c7b5fac0bf9c',
  email: '[email protected]'
});

// Color methods
result.color.hsl();
// Or in functional style, but checking that the brand is right
toHSL(result.color);

// UUID methods
result.uuid.version();
uuidVersion(result.uuid);

Wouldn't that be nice?
It does dawn on me now that what I'm talking about are codecs. Effect's Schema is already a codec-based system, but it's a new idea to Zod. Maybe we just start using that a lot more?
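A codec, in this sense, pairs a decoder with an encoder, so a rich type can round-trip back to its wire format. A minimal sketch (hypothetical names, not Effect's Schema API):

```typescript
// Minimal codec sketch: decode pairs with encode, so the rich type can
// round-trip back to its serialized form. Hypothetical interface.
interface Codec<A, I> {
  decode(input: I): A;
  encode(value: A): I;
}

// A rich Duration type, stored on the wire as a number of seconds.
type Duration = { millis: number };

const DurationFromSeconds: Codec<Duration, number> = {
  decode: (seconds) => ({ millis: seconds * 1000 }),
  encode: (d) => d.millis / 1000,
};
```

The encode half is what plain validators historically lacked: once you've parsed a value into a rich type, you also need a blessed way to write it back out.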
Forgive me for ranting a bit here: today I burned many hours debugging something stupid. This is a light follow-up to my last rant about how OpenTelemetry feels bad to use.
Also, this is for a very niche audience: people who use OpenTelemetry, Sentry, and Honeycomb.
In short:
Val Town is a TypeScript application: there's a backend, which is based on Fastify, and a frontend, based on React Router 7. They could be combined, but that's beside the point here. The React Router 7 part also uses tRPC to make queries and mutations nice to do on the frontend.
When I looked into our Honeycomb queries for anything related to tRPC, I saw chaos. There'd be some part of a useful trace, like the tRPC method's name and the database queries in it, and then the GET request above that and then a missing parent span. And since the parent span is missing, I'd get tons of traces all bunched into one. This is a common-enough problem that Honeycomb has debugging documentation for it.
Unfortunately, it's none of the things Honeycomb mentions. I tried using Claude Code for this too, and it led me through 7 promising solutions, none of which worked.
The problem is really dumb. We use Sentry, and Sentry likes to really get involved in tracing, even though I've tried hard to turn off all of its tracing features. In particular, it loves to muck up OpenTelemetry configuration on the backend as well as tracing on the frontend.
The thing is: Sentry also tries to do distributed tracing, and it's on by default! That sucks!
Which means that every time there was a fetch() call in our frontend application, Sentry would go in there, instrument the function, and inject a Baggage header that contained a trace ID. And then, on the backend, it would take that header and attach it to OpenTelemetry traces.
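For reference, the Baggage header is the W3C format of comma-separated key=value pairs, and Sentry puts its sentry-* entries (including a trace ID) in there. A tiny parser sketches the shape; the header values below are made up:

```typescript
// Sketch: parse a W3C Baggage header (comma-separated key=value pairs).
// Simplified - ignores properties after ";" and other edge cases.
function parseBaggage(header: string): Map<string, string> {
  const entries = new Map<string, string>();
  for (const part of header.split(",")) {
    const eq = part.indexOf("=");
    if (eq === -1) continue;
    entries.set(
      part.slice(0, eq).trim(),
      decodeURIComponent(part.slice(eq + 1).trim())
    );
  }
  return entries;
}
```

When the backend's propagator sees those sentry-* entries, it adopts the frontend's trace ID as the parent, which is exactly how the frontend span ends up as a missing parent in Honeycomb.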
In theory, this lets you connect frontend sessions with backend bugs and see through your whole stack. In practice, it means that if you are using Sentry on the frontend and OpenTelemetry with Honeycomb on the backend, then Honeycomb is broken by default for any backend routes that are on the same domain as the frontend and are requested via fetch().
This is because Sentry creates a trace on the frontend and sends its trace ID to the backend along with the fetch() request, intending to create a trace that spans the frontend and backend. But since the frontend part never goes to Honeycomb, it's missing from Honeycomb's perspective, which messes up all your queries.
The fix is disabling distributed tracing:
Sentry.init({
  dsn: "…",
  // Overwrite the defaults to ensure no trace headers are sent
  tracePropagationTargets: [],
});

And then make sure you don't have SentryPropagator in your OpenTelemetry configuration on the backend, because that's what receives the Baggage header and connects it to traces.
So: I sincerely wish that you find this blog post before you lose a day to this problem.
We shipped another bit of Effect-powered code today, for the Val Town user-facing logs pipeline. The initial issue was that we've been seeing high usage on the ClickHouse database used to power logs, and I suspect that part of this is that we're spamming the database with tons of single-row writes. ClickHouse really likes batch writes and doesn't like high-frequency writes, so I wanted to do some basic batching.
Effect has a lot of nice tools for this kind of problem: in particular, Queue and Stream are really nice, expansive APIs for this kind of thing.
export const globalLogQueue = Effect.runSync(Queue.unbounded<LogLine>());

runtime.runFork(
  Stream.fromQueue(globalLogQueue).pipe(
    Stream.groupedWithin(20, "1 seconds"),
    Stream.runForEach((batch) =>
      Effect.tryPromise(() =>
        clickhouseClient.insert<LogLine>({
          table: logLinesTableName,
          values: Chunk.toArray(batch)
        })
      ).pipe(
        Effect.catchAll((e) => Effect.sync(() => Sentry.captureException(e)))
      )
    )
  )
);

This is pretty nice stuff: you have a global queue, and can add stuff to it like this:

Queue.offer(globalLogQueue, newLogLines);

Pretty nifty APIs here, I think. And I like that there are bounded, dropping, and sliding queues right off the shelf. Effect is definitely built for a world with unexpected scale points, in which any potential buffer or array could fill up unexpectedly, and it comes ready with solutions.
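For intuition, the groupedWithin behavior - flush when a batch reaches a size limit or a time limit, whichever comes first - can be sketched in plain TypeScript. This is a hypothetical helper, not Effect's implementation:

```typescript
// Sketch of size-or-time batching, the behavior Stream.groupedWithin gives you:
// flush when `size` items accumulate, or when `ms` elapses, whichever is first.
class Batcher<T> {
  private buffer: T[] = [];
  private timer: ReturnType<typeof setTimeout> | undefined;

  constructor(
    private size: number,
    private ms: number,
    private flushFn: (batch: T[]) => void
  ) {}

  add(item: T): void {
    this.buffer.push(item);
    if (this.buffer.length >= this.size) {
      // Size limit reached: flush immediately.
      this.flush();
    } else if (this.timer === undefined) {
      // First item of a new batch: start the time limit.
      this.timer = setTimeout(() => this.flush(), this.ms);
    }
  }

  flush(): void {
    if (this.timer !== undefined) clearTimeout(this.timer);
    this.timer = undefined;
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.flushFn(batch);
  }
}
```

The ClickHouse writer above is this same shape with size 20 and a 1-second window, plus Effect's queue handling the backpressure side.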
For all of this timing-related code, I've found the Effect Duration data type to be really, really nifty, and also found the first instance where the pipe() function came in handy. For example, I wanted to update a rate limit counter but set it to decrement in 7 days, at the end of the day. This used to involve a lot of magic numbers and date math to calculate seconds in a day and do multiplication, but after the Effect refactor it looks like:
const expireAt = pipe(
  DateTime.unsafeNow(),
  DateTime.endOf("day"),
  DateTime.add({ days: 7 }),
  DateTime.toEpochMillis
);

Pretty slick, and I think this is fairly readable: take the current time, shift it to the end of the day, add 7 days, and convert to milliseconds. This is part of the really simple API surface of Effect, but it's nice.
The Streams documentation is okay, but it's oddly short on interoperability: how do you convert existing kinds of streams from other ecosystems, like Node.js and the web, into Effect streams? This is a common theme in the documentation. See: Stream documentation doesn't mention interop · Issue #1254.
Anyway, the short story is that you can convert a Node.js Stream into an Effect Stream by treating it as an async iterable, because it is one of those. And you can convert streams into web streams using Stream.toReadableStreamEffect.
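As a minimal illustration of the async-iterable half of that (plain Node, no Effect): a Readable can be consumed directly with for await, which is the property the interop builds on.

```typescript
import { Readable } from "node:stream";

// A Node.js Readable implements Symbol.asyncIterator, so it can be consumed
// with for await - and, by the same token, handed to async-iterable adapters.
async function collect(stream: Readable): Promise<string> {
  let out = "";
  for await (const chunk of stream) {
    out += chunk.toString();
  }
  return out;
}
```

The Effect conversion is the same move: treat the Readable as an async iterable and wrap it into a Stream from there.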
I started reading some of effect-solutions and learned about Effect.fn. This method helps with one of the core weirdnesses of Effect, that so much of the documentation doesn't really show you functions with arguments, which are a big part of programming. So a classic Effect pattern that I've been using is like this:
export function downloadFile(
  key: string,
  location: string
): Effect.Effect<
  void,
  NotFoundError | PlatformError | UnknownException,
  FileSystem.FileSystem
> {
  return Effect.gen(function* () {
    const fs = yield* FileSystem.FileSystem;
    let dest = localPath(key);
    yield* downloadAndFileToTemp(key);
    yield* fs.copyFile(dest, location);
  }).pipe(Effect.withSpan("downloadFile"));
}

That is, a function that takes arguments and then returns an Effect that uses those arguments from the function's scope. The Effect itself doesn't take arguments. This feels a little kludgy to me, and Effect.fn makes it simpler:
export const downloadFile = Effect.fn("downloadFile")(function* (
  key: string,
  location: string
) {
  const fs = yield* FileSystem.FileSystem;
  let dest = localPath(key);
  yield* downloadAndFileToTemp(key);
  yield* fs.copyFile(dest, location);
});
This is an improvement! Instead of manually tracing, it automatically traces, and I have one less function to declare.
There are downsides, though:
- Typing the return value is harder. With the Effect.gen approach, you can write out the full return type (void, NotFoundError…). There are no examples I can find of doing the same for Effect.fn, which has a totally different call signature. I opened an issue for this one.
- To pipe the resulting Effect, you skip .pipe() on Effect.fn and use flow. Which is fine, but is another learning curve relative to the way that piping is taught everywhere else, using .pipe().

Unfortunately, this also led me into another battle with OpenTelemetry.
Boy, I don't like the OpenTelemetry SDK. Let me count the ways:
The scattered opentelemetry packages are the worst case of dependency hell. In our project, we have 14 OpenTelemetry dependencies (api, core, exporter-trace-otlp-proto, instrumentation-aws-sdk, instrumentation-express, instrumentation-http, instrumentation-ioredis, instrumentation-net, instrumentation-pg, resources, sdk-node, sdk-trace-base, sdk-trace-web, semantic-conventions). Great. The node_modules/@opentelemetry directory weighs 146MB. Not a great start, and some of this is because some modules distribute triple builds (esm, esnext, and src): three copies of everything.
Then you've got the peer dependencies. Sentry relies heavily on OpenTelemetry now and wants to integrate tightly with it, so it has @sentry/opentelemetry, which has peerDependencies on four OpenTelemetry modules. And then @effect/opentelemetry does the same - it has peer dependencies on eight OpenTelemetry modules.
Then you've got the versioning scheme: OpenTelemetry's experimental modules are versioned 0.x, so they get stricter matching behavior: specifying @opentelemetry/sdk-logs: "^0.203.0" in your package.json does not permit upgrades to 0.208.0, because it's a 0.x package. This makes the peerDependency situation even worse, because it makes it harder for two packages with peerDependencies on sdk-logs to resolve to the same version.
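That stricter 0.x matching can be sketched in code. Here's a simplified version of the caret rule - an illustration, not npm's actual implementation; it ignores prereleases and other range syntax:

```typescript
// Sketch of npm's caret-range rule: "^" allows changes that don't modify
// the left-most non-zero version component. For 1.x that means the minor and
// patch can float; for 0.x, only the patch can. Simplified - no prereleases.
function caretAllows(range: string, version: string): boolean {
  const base = range.slice(1).split(".").map(Number);
  const v = version.split(".").map(Number);
  // Find the left-most non-zero component of the base version.
  let idx = base.findIndex((n) => n !== 0);
  if (idx === -1) idx = base.length - 1;
  // Everything up to and including that component must match exactly.
  for (let i = 0; i <= idx; i++) {
    if (v[i] !== base[i]) return false;
  }
  // The remaining components must be >= the base version.
  for (let i = idx + 1; i < base.length; i++) {
    if (v[i] > base[i]) return true;
    if (v[i] < base[i]) return false;
  }
  return true;
}
```

So two packages pinning `^0.203.0` and `^0.208.0` can never share one install, while `^1.2.0` and `^1.9.0` happily would.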
And then, to be clear, the consequences of duplicate OpenTelemetry modules are not minor! If conflicting peerDependency and root requirements cause more than one copy of the otel modules to be installed, you'll get separate tracer instances and disconnected traces, mucking up your whole observability situation.
Now: look at me, a big critic. Why don't I fix this stuff? How could this be better? I don't know exactly, but I wish that:
- There weren't 0.x versions for so many OpenTelemetry APIs.

And to be clear - I'm just talking about the SDK. The protocol itself is another matter, which David Cramer has written an extensive piece about.
Another day, another batch of refactoring Val Town internals to Effect. Today has been pretty good: haven't hit many bugs, and the docs have been helpful.
flow

I had a few methods in this codebase for which, in TypeScript before the refactor, we tolerated failure: if the Promise rejected, we'd send it to Sentry with Sentry.captureException and keep going. I found a way to do this by combining Effect.tapError (to run a side-effect on errors) and Effect.ignore (to swallow the error and avoid short-circuiting when it happens), but I wanted to combine them. My naive solution was to use pipe(), but pipe() expects a value as its first argument, so a pipe would look like:
const add = x => x + 1;
const subtract = x => x - 1;
const piped = pipe(2, add, subtract);
// Same as
subtract(add(2))

But instead I wanted something that yielded a function. So flow does that:
const add = x => x + 1;
const subtract = x => x - 1;
const flowed = flow(add, subtract);
// Same as
x => subtract(add(x))

Works great! Here's how it looked in this specific case:
/**
 * If we have an operation that should be able to fail without short-circuiting,
 * but we want to log the error to Sentry, we use this.
 */
function tapAndIgnore(message: string) {
  // NOTE: flow instead of pipe here because pipe expects the first Effect
  // given to be something that produces a value.
  return flow(
    Effect.tapError((error) =>
      Effect.sync(() =>
        Sentry.captureException(new Error(message, { cause: error }))
      )
    ),
    Effect.ignore
  );
}

Then I can pipe an effect into this:
// Some Effect
.pipe(tapAndIgnore('Lockfile generation failed'))

There are a bunch of driving factors for my dive into Effect. For reference, we've been using neverthrow, and while it's been pretty great for business logic, it hasn't been able to help much with the messy internals of Val Town, which is where a lot of the complexity lies. Some of the stuff that drew me to Effect:
The last one is pretty interesting: basically if you have a function in JavaScript like:
async function someTask() {
  const a = await getA();
  const b = await getB(a);
  const c = await getC(b);
  return c;
}

And you want to limit the amount of time this function has to run, there is no good way. p-timeout was my old solution, but that only throws an error and ignores the value of the function: the function still runs. AbortSignal is great and gets us a lot further, but an implementation with AbortSignal that allows this method to be fully interruptible would look like:
async function someTask(signal) {
  signal.throwIfAborted();
  const a = await getA();
  signal.throwIfAborted();
  const b = await getB(a);
  signal.throwIfAborted();
  const c = await getC(b);
  return c;
}
function someTask() {
  return Effect.gen(function* () {
    const a = yield* getA();
    const b = yield* getB(a);
    const c = yield* getC(b);
    return c;
  });
}

Pretty nifty. One of the big realizations I've had recently is that all async work should be bound both in parallelism and in time. In other words: using Promise.all to run an arbitrary number of async tasks at the same time is bad by default, because you'll eventually run out of some resource like ports or file descriptors, and most async operations, like database queries or network requests, should have a timeout by default. Otherwise you always eventually get bad behavior.
Effect plays into this pretty nicely: it makes logic cancellable and easy to bound with timeouts.
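Even without Effect, those two bounds can be sketched with plain Promises and timers. These helpers are hypothetical illustrations, not a real library:

```typescript
// Bound parallelism: run `fn` over `items` with at most `limit` in flight.
// A simple worker-pool sketch; hypothetical helper names.
async function mapWithConcurrency<T, R>(
  items: readonly T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker() {
    // Single-threaded JS: reading and incrementing `next` with no await
    // in between is safe from races.
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker)
  );
  return results;
}

// Bound time: race the work against a timer. Note this only rejects -
// like p-timeout, the underlying work keeps running, which is exactly
// the limitation Effect's interruption solves.
function withTimeout<R>(promise: Promise<R>, ms: number): Promise<R> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer!));
}
```

With Effect you get both bounds as one-liners (concurrency options on forEach/all, and Effect.timeout), plus actual interruption of the losing side.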
Thanks for reading! Previously in Effect devlogs:
So basically we had a method that ended up with the call signature:
getDenoLockfile(key: string, location: string):
  Effect<
    void,
    NotFoundError | PlatformError | UnknownException,
    FileSystem
  >

Which means that it uses the FileSystem platform for Effect. FileSystem is technically unstable, but it's pretty nice and I think a lot of code uses it. But this presents a challenge in tests: where my tests previously said something like
await getDenoLockfile("key0", dest);

Now they need to look like
await Effect.runPromise(getDenoLockfile("key0", dest).pipe(Effect.provide(NodeContext.layer)));

This is inelegant. (Sidenote: no, I'm not using @effect/vitest yet because it's missing automatic fixtures and we use them heavily.)
So, how do you make this a little more idiomatic? A custom Runtime, which lets me define a layer in one place and then run Effects with that layer defined:
// At the top, before tests
const runtime = ManagedRuntime.make(NodeContext.layer);
// Now you can run with that runtime and provide NodeContext
await runtime.runPromise(getDenoLockfile("key0", dest));

This took a little while to figure out, but there is kind of an example in the docs - I just wish the docs didn't always start with abstract definitions and paid a little more attention to common use cases.
There's also the mental leap of merging layers here: I want both NodeContext and our NodeSdkLive (OpenTelemetry) layers, so this becomes:
export const runtime = ManagedRuntime.make(
Layer.mergeAll(NodeContext.layer, NodeSdkLive)
);

One thing I really like about Effect is that it does context really well. For example, I wrote a subtle bug and wanted to add some debugging to a function. Usually this means writing debugging code and then ripping it out, because even if you use a nice logger (we've been using pino) and set the log level to debug, turning on debug logs shows debug logs everywhere. But with Effect, you can do something like this:
Effect.gen(function* () {
  yield* Effect.logDebug(`Lockfile existed`);
}).pipe(Logger.withMinimumLogLevel(LogLevel.Debug))

And then when you're done debugging, keep the logDebug statements but drop the .pipe(Logger.withMinimumLogLevel(LogLevel.Debug)), and that turns the logs off. It's pretty nifty: no longer do I feel like writing nice debug log messages is purely experimental effort; now I can keep them around and turn them on & off via configuration on a per-function basis.
Issues:
Previously:
I got an hour or two of focus to garden Placemark, and got this done:
I've been thinking about Placemark in the following ways, priority-wise:
My previous Placemark updates: switching to Vite, cleanup.