Tom MacWright

[email protected]

Graph layout

Honestly, I've been running a little low on passion for side-projects and true deep dives lately. Lots of reasons, most of which you can probably guess if you also live in the US or work in tech.

But I'm still pretty obsessed with graph layout. Basically, I love d3-force and force-directed layout, but it's used everywhere and it isn't the right choice for everything. And graph layout is catnip to computer scientists, so there's a ton of cool research being written about alternative algorithms.

I think this kind of graph could be better. That's a real load-bearing "could," though. Mike's work on d3 is big for a reason: he focused on and solved a lot of really hard problems that others were scared of.

But when I ride the New York subway, I see the subway maps and I am transfixed. Hand-drawn charts look different from what a computer can generate. Old charts are just amazing.

There are a bunch of things about beautiful charts that I appreciate:

  • Orthogonal or semi-orthogonal layouts: all of the edges are horizontal, vertical, or in some cases, diagonal. Subway maps have diagonals and I think it looks amazing. As far as I can tell so far, though, most chart implementations are either freeform or orthogonal.
  • Edge bundling. See circuit diagrams. If there are a bunch of edges following similar routes, they should line up and potentially be combined into one. Here's an example of edge bundling with a circular layout.
  • Good nodes. For example, subway charts don't have all of the subway lines pinching together for each station, but instead they show the station as kind of a horizontal joining line.
  • Symmetry. It's human, but it's pretty nice to see charts that prioritize making things symmetrical, even if that's not optimal in another direction.

So I've been reading some papers.

I liked A Walk On The Wild Side: a Shape-First Methodology for Orthogonal Drawings. Some of the takeaways:

  • Most orthogonal drawing is based on the Topology-Shape-Metrics paradigm, which tries hard to avoid links crossing over each other.
  • Instead, they care more about minimizing bends in the links, which they think looks better and matters more than crossings.
  • They do so with a SAT solver (oooh!) so they can defer some of the really hard algorithmic work to that.

But so far my favorite is HOLA: Human-Like Orthogonal Network Layout. The results look spectacular. The gist is:

  • Instead of optimizing for something they think is nice, they tested what people would do if they laid out graphs themselves.
  • They came up with five criteria for good graphs (trees placed outside, aesthetic bend points, compactness, grid-like node placement, symmetry - though I recommend just looking at the paper because there are good illustrations for these).
  • I haven't read through the implementation but I'm guessing it's pretty complicated to achieve all of these goals. I think the main implementation is in libdialect. There's a JavaScript port of sorts called cola.js, which has a gridified example, but I don't think that's actually HOLA. Also, there's a Python implementation of HOLA, but that relies on the C++ Adaptagrams implementation, so I don't know if it would be any easier to port.
  • Bracketing the feelings involved, potentially a TypeScript port of libdialect is possible via LLMs. I suspect that the lack of real integers and the different performance profile might be tricky, though.

Sidenote: when reading through the OGDF documentation, I saw that they use earcut, made by Mapbox! Cool to see that foundational work like that is so widely adopted, and liberal open source licensing makes it possible.

HOLA was written in 2015, so I went looking for more recent work, and found Praline which has a Java implementation by the authors.

And then A Simple Pipeline for Orthogonal Graph Drawing, which cites PRALINE and HOLA as examples and has really nice output. I was hopeful that the 'simple' in that paper meant that it was simple to implement, which… not sure. There's a Scala implementation.

It's amazing to me that some of this really cutting-edge work sits in repositories on GitHub with 6 stars (one of them mine) when they represent so much real thought and effort. Of course the real product is the paper, and the real reward is PhDs, tenure, respect of their peers, and citations. But still!

Then the same authors as "A Simple Pipeline" - Tim Hegemann and Alexander Wolff - wrote Storylines with a Protagonist, which has an online demo that implements a lot of the nice parts of subway-map drawing!


I'm having fun following along with fancy graph-drawing algorithms! Some questions that I am looking to answer next:

  • If I use an LLM to translate Scala to TypeScript and popularize one of these, am I the baddie? It feels kind of bad to think about it.
  • Are force-directed graphs successful because they're fast and general and that puts a ceiling on these nicer but potentially slower and less general alternatives?
  • Do graph drawing algorithms with eight-directional links (diagonals) exist? I can't figure out what to search for and I haven't found any evidence that this is supported yet.

Misc engineering truisms

  • Data structures are the foundation of programming
    • IDs are identifiers. Names are not identifiers. Do not use names as identifiers.
    • The fewer databases you use the better. Consistency between datasets is hard, and it's painful to make requests against multiple datasources and handle multiple kinds of failure. Postgres goes a long way.
    • Either both compute and databases are in the same region, or both are geographically distributed. Having a big hop between a server and its database is a recipe for bad performance.
    • Network locality is really important. Put things close to each other, in the same region or datacenter if you can.
    • It is much more common for applications to be slow because of slow database queries than it is for them to be slow because of inefficient algorithms in your code. If you're good at knowing how indexes and queries work, it'll help you a lot in your career.
    • Similarly, it is more common for applications to be slow because of bad algorithms than bad programming languages. 'Slow' languages like Ruby are fast enough for most things if code is written intelligently.
  • Scale is hard to anticipate
    • Everything everywhere should have a limit from day one. Any unlimited text input is an arbitrary storage mechanism. Any async task without a timeout will run forever eventually.
  • The internet is an adversarial medium
    • All websites with user-generated content are vectors for SEO farming, phishing, and other malevolent behavior. Every company that hosts one will have to do content moderation.
    • All websites with a not-at-cost computing component will be used for crypto mining. Every company that provides this has to fight it. See: GitHub Actions, even that was used for crypto-mining.
  • Postgres stuff
    • Use TEXT for all text stuff, and citext for case-insensitive text. There is no advantage to char or varchar, avoid them.
    • Don't store binary data as base64'ed TEXT or hex TEXT or whatever, store it as binary data. bytea is good.
    • Basically every table should have a created_at column that defaults to NOW(). You'll need it eventually.
  • Misc lessons learned
    • API tokens should be prefixed or otherwise identifiable so security scanners can recognize them. Don't use UUIDs as API tokens. Something like servicename_base58-check-encoded-random-bytes is good.
  • Speed of iteration is really important
    • Deploys, CI, and release should all be automated as much as possible, but no more than that.
  • Interfaces
    • Most of the time, power and simplicity are a direct tradeoff: powerful interfaces are complex, simple interfaces are restrictive. Aiming to create something powerful and simple without a plan for how you'll achieve that is going to fail. Getting more power without complexity is the hardest and most worthwhile activity.
    • Most "intuition" is really "familiarity." There are popular interfaces that are hard to learn and look weird, but are so commonplace that people are used to them and consider them friendly. There are friendly interfaces that are so rare that people consider them intimidating.
  • Tests
    • What tests are testing for can be wrong, and you'll end up enforcing incorrect logic for the long term. Making tests readable and then re-reading them from time to time is a good counterweight.
    • Test coverage is wonderful if it's possible, but there are many applications where you really can't get full test coverage with good ROI.
  • Abstractions that are usually worth it.
    • Result/Either types are worth their weight most of the time, if you're in JavaScript. It makes more sense to build with them from the start rather than putting them in later.
    • An 'integrations/' directory where you instantiate SDKs for external services is usually good in the long run.
    • Validating all your environment variables at startup is essential - in JavaScript, envsafe, Effect, and zod are all good options for this. It is painful to crash after deployment because some function relied on an env var that was missing.
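
As a sketch of that last point, startup validation with zod can look something like this (the variable names here are just examples, not a real config):

import { z } from "zod";

// Validate process.env once, at startup. If something required is missing,
// the process crashes here rather than deep inside a request handler later.
const EnvSchema = z.object({
  DATABASE_URL: z.string().url(),
  SENTRY_DSN: z.string().optional(),
  PORT: z.coerce.number().default(3000),
});

export const env = EnvSchema.parse(process.env);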

Effect notes: tRPC

Today I've been implementing a new feature, and in keeping with the general trends, I've been trying to implement new code in Effect. This is mostly low-drama, but I needed to start connecting Effect to tRPC.

A theme is that Effect's documentation is pretty weak on details about integrations, partly because Effect has its own systems that you could use instead. In this case, @effect/rpc, but that module is still on a 0.x release and has no documentation at all on the Effect website, so for now, people are going to use tRPC instead.

Platform layers like tRPC and React Router are a big part of why we adopted Result types in the first place. We've been using neverthrow for a long time and are now switching to Effect. The common ingredient that matters is that with Result types, you get a type like Result<ErrorType, SuccessType>, so you know what errors a function could produce. In TypeScript with thrown exceptions, you don't know what kinds of errors can happen. Then, ideally, we're able to map those errors into platform-specific errors: tRPC wants instances of TRPCError, React Router wants Response objects, Fastify wants its own error type, etc.

So previously we had this code to interoperate between neverthrow and tRPC:

export function mapErrorToTRPC(error: MappableErrors) {
  if (error instanceof UnexpectedError) {
    Sentry.captureException(error);
  }
  if (error instanceof RedirectError) {
    // This should never happen, tRPC does not support redirects.
    return new TRPCError({ code: "INTERNAL_SERVER_ERROR", cause: error });
  }
  if (error instanceof MethodNotAllowedError) {
    // tRPC uses a nonstandard phrase for this error
    return new TRPCError({ code: "METHOD_NOT_SUPPORTED", cause: error });
  }
  // console.error(error);
  return new TRPCError({ code: error.code, cause: error });
}

/**
 * Unwrap a result, throwing a tRPC-specific error
 */
export function unwrapForTrpc<T>(
  result: Result<T, MappableErrors> | ResultAsync<T, MappableErrors>
) {
  return result.match(
    (t) => t,
    (error) => {
      throw mapErrorToTRPC(error);
    }
  );
}

So: we take a result; if there's an error, we throw it; if it's successful, we just return the value. This makes tRPC happy. This function is then injected via tRPC middleware so that query and mutation files get it as part of the context object, instead of having to import the function directly.
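
Roughly, that middleware wiring looks something like this - a sketch, not our exact code, and the import path and context shape are made up:

import { initTRPC } from "@trpc/server";
import { unwrapForTrpc } from "./trpc-errors"; // hypothetical path

const t = initTRPC.context<object>().create();

// Add unwrapForTrpc to ctx, so procedures can call ctx.unwrap(...)
// instead of importing the helper everywhere.
const withUnwrap = t.middleware(({ ctx, next }) =>
  next({ ctx: { ...ctx, unwrap: unwrapForTrpc } })
);

export const procedure = t.procedure.use(withUnwrap);

// In a query or mutation file:
// procedure.query(({ ctx }) => ctx.unwrap(getSomething()));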

So today was solving that puzzle for Effect, which looks like this:

export async function effectTrpc<A, E>(effect: Effect.Effect<A, E>) {
  const exit = await Effect.runPromiseExit(effect);
  return Exit.match(exit, {
    onFailure(cause) {
      return Option.match(Cause.failureOption(cause), {
        onNone() {
          throw new TRPCError({
            code: "INTERNAL_SERVER_ERROR",
            message: "Unexpected error type",
          });
        },
        onSome(f) {
          throw mapErrorToTRPC(f as MappableErrors);
        },
      });
    },
    onSuccess(v) {
      return v;
    },
  });
}

It's a bit more intensive than the neverthrow example because there's the matter of how Exit works in Effect - when an Effect fails here, we get an Exit which is a Failure, which has a Cause inside of it which we want. Some failures don't have causes, so we have to handle that case too.

Is it beautiful? No. But I suspect that:

  • Most applications are not beautiful greenfield environments and have this kind of adapter code.
  • The adapter code is always a little gross.

The main alternative path here is some kind of higher level function that would let us define tRPC routes with Effects directly. This would be nifty but I fear that level of abstraction - one can definitely get too fancy with this kind of thing, and we're trying to be mostly incremental.
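
To make that concrete, the kind of thing I mean is a wrapper like the following - purely hypothetical, not something we've built:

// A hypothetical helper that turns an Effect-returning handler into a plain
// async resolver that tRPC understands, using the effectTrpc adapter above.
function effectResolver<Input, A, E>(
  handler: (input: Input) => Effect.Effect<A, E>
) {
  return ({ input }: { input: Input }) => effectTrpc(handler(input));
}

// Usage sketch:
// procedure.input(schema).query(effectResolver((input) => doThing(input)));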


Issues

  • I haven't filed any new issues, but unfortunately most of the issues I have filed are stalled. There's a PR for reverse cron iteration that sat unmerged for a while - update: that PR got merged!
  • Following along with Effect development, it's nice that there's a lot of effort on Effect 4, but I hope that it doesn't cause a lot of churn in the ecosystem. And I hope it's the last big rewrite for a long time.

Maybe rich parsers are the way to introduce rich types

Chris Krycho's Friendly Little Wrapper Types article reminded me of my own One Way To Represent Things article from 2021. That idea has been bouncing around my head ever since.

Basically "rich types." There are lots of places where an email or a color is represented as a string, and a duration is represented as an integer, but what if they weren't? It would help with type-safety and open up the opportunity for more convenient methods: color.toRGB() or toRGB(color) without having to re-parse everything every time. How do we get to this reality?

Reflecting on both pieces makes me wonder about parsing as the lever for adoption. Chris's piece has an example of an email type that you'd construct and that would give you some structured errors if it wasn't correctly formatted.

The rise of Zod and other parsers that give you both typesafety and runtime assurances is one of the biggest changes in JavaScript/TypeScript in years. I remember the bad old days when we'd have to manually parse inputs, use jsonschema for everything, and when a lot of applications were just naïve about the shape of their input data. Now, basically everything uses Zod or something like it, and it's a huge lever: Zod is what makes tRPC possible with so little boilerplate.

So the idea is: what if parser libraries produced branded types by default? Zod already supports basic typesystem-only branded types, and so does Effect. Maybe there's an opportunity for a higher-level library built on one of these that brands types like UUIDs, emails, colors, IP addresses, passwords, usernames, and more by default?
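
For reference, here's roughly what Zod's built-in branding looks like today - it's compile-time only, so the runtime value is still a plain string:

import { z } from "zod";

const Email = z.string().email().brand<"Email">();
type Email = z.infer<typeof Email>; // string & z.BRAND<"Email">

// Parsing produces the branded type; a plain string is no longer assignable
// where an Email is expected.
const email = Email.parse("user@example.com");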

This could enable something like

import p from "parser";

const type = p.object({ color: p.color(), uuid: p.uuid() });

const result = type.parse({
	color: '#ff00ff',
	uuid: '5426a36a-bfd4-46ff-9066-c7b5fac0bf9c',
	email: '[email protected]'
});

// Color methods
result.color.hsl();
// Or in functional style, but checking that the brand is right
toHSL(result.color);

// UUID methods
result.uuid.version();
uuidVersion(result.uuid);

Wouldn't that be nice?

It does dawn on me now that what I'm talking about is codecs. Effect's Schema is already a codec-based system, but it's a new idea to Zod. Maybe we just start using that a lot more?
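
As a tiny illustration of the codec idea with Effect's Schema (assuming a recent version where Schema lives in the effect package):

import { Schema } from "effect";

// DateFromString is a codec: decoding parses the string into a Date,
// and encoding goes back to the string representation.
const decoded = Schema.decodeUnknownSync(Schema.DateFromString)("2024-01-01");
const encoded = Schema.encodeSync(Schema.DateFromString)(decoded);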

Sentry's distributed tracing causes missing parent spans in Honeycomb

Forgive me for ranting a bit here: today I burned many hours debugging something stupid. This is a light follow-up to my last rant about how OpenTelemetry feels bad to use.

Also, this is for a very niche audience: people who use OpenTelemetry, Sentry, and Honeycomb.

In short:

Val Town is a TypeScript application: there's a backend, which is based on Fastify, and a frontend, based on React Router 7. They could be combined, but that's beside the point here. The React Router 7 part also uses tRPC to make queries and mutations nice to do on the frontend.

When I looked into our Honeycomb queries for anything related to tRPC, I saw chaos. There'd be some part of a useful trace, like the tRPC method's name and the database queries in it, and then the GET request above that and then a missing parent span. And since the parent span is missing, I'd get tons of traces all bunched into one. This is a common-enough problem that Honeycomb has debugging documentation for it.

Unfortunately, it's none of the things Honeycomb mentions. I tried using Claude Code for this too, and it led me through 7 promising solutions, none of which worked.

The problem is really dumb. We use Sentry, and Sentry likes to really get involved in tracing, even though I've tried hard to turn off all of its tracing features. In particular, it loves to muck up OpenTelemetry configuration on the backend as well as tracing on the frontend.

The thing is: by default Sentry also tries to do distributed tracing: it's the default! That sucks!

Which means that every time there was a fetch() call in our frontend application, Sentry would go in there, instrument the function, and inject a Baggage header that contained a trace ID. And then, on the backend, Sentry would take that header and put it on OpenTelemetry traces.

In theory, this lets you connect frontend sessions with backend bugs and see through your whole stack. In practice, it means that if you are:

  1. Using Honeycomb for backend tracing (but not frontend)
  2. Using Sentry on both your backend and frontend

Then Honeycomb is broken by default for any backend routes that are on the same domain as the frontend and are requested via fetch().

This is because Sentry creates a trace on the frontend and sends its trace ID to the backend along with the fetch() request, intending to create a trace that spans the frontend and backend. But since the frontend part never goes to Honeycomb, it's missing, from Honeycomb's perspective. Which messes up all your queries.

The fix is disabling distributed tracing:

Sentry.init({
  dsn: "…",
  // Overwrite the defaults to ensure no trace headers are sent
  tracePropagationTargets: [],
});

And then make sure you don't have SentryPropagator in your OpenTelemetry configuration on the backend, because that's what receives the Baggage header and connects it to traces.
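
If you want to be extra sure, you can pin the propagators explicitly when setting up the SDK - a hedged sketch, since the exact wiring depends on how your OpenTelemetry setup is configured:

import { NodeSDK } from "@opentelemetry/sdk-node";
import {
  CompositePropagator,
  W3CBaggagePropagator,
  W3CTraceContextPropagator,
} from "@opentelemetry/core";

const sdk = new NodeSDK({
  // ...exporter, instrumentations, etc.
  // Only the standard W3C propagators, no SentryPropagator.
  textMapPropagator: new CompositePropagator({
    propagators: [new W3CTraceContextPropagator(), new W3CBaggagePropagator()],
  }),
});

sdk.start();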

So: I sincerely wish that you find this blog post before you lose a day to this problem.

Effect notes: streams and such

We shipped another bit of Effect-powered code today, for the Val Town user-facing logs pipeline. The initial issue was that we've been seeing high usage on the ClickHouse database used to power logs, and I suspect that part of this is that we're spamming the database with tons of single-row writes. ClickHouse really likes batch writes and doesn't like high-frequency writes, so I wanted to do some basic batching.

Queue

Effect has a lot of nice tools for this kind of problem: in particular, Queue and Stream, which are really nice, expansive API surfaces for this kind of thing.

export const globalLogQueue = Effect.runSync(Queue.unbounded<LogLine>());

runtime.runFork(
  Stream.fromQueue(globalLogQueue).pipe(
    Stream.groupedWithin(20, "1 seconds"),
    Stream.runForEach((batch) =>
      Effect.tryPromise(() =>
        clickhouseClient.insert<LogLine>({
          table: logLinesTableName,
          values: Chunk.toArray(batch)
        })
      ).pipe(
        Effect.catchAll((e) => Effect.sync(() => Sentry.captureException(e)))
      )
    )
  )
);

This is pretty nice stuff: you have a global queue, and can add stuff to it like this:

Queue.offer(globalLogQueue, newLogLines);

Pretty nifty APIs here, I think. And I like that there are bounded, dropping, and sliding queues right off the shelf. Effect is definitely built for a world of unexpected scale, in which any potential buffer or array could fill up unexpectedly, and it comes ready with solutions.
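
For reference, the other constructors look about the same (the capacities here are made up, and LogLine is from the snippet above):

// offer() suspends when the queue is full
const bounded = Effect.runSync(Queue.bounded<LogLine>(10_000));
// new items are dropped when the queue is full
const dropping = Effect.runSync(Queue.dropping<LogLine>(10_000));
// oldest items are evicted to make room when the queue is full
const sliding = Effect.runSync(Queue.sliding<LogLine>(10_000));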


Duration

For all of this timing-related code, I've found the Effect Duration data type to be really, really nifty, and also found the first instance where the pipe() function came in handy. For example, I wanted to update a rate limit counter but set it to decrement in 7 days, at the end of the day. This used to involve a lot of magic numbers and date math to calculate seconds in a day and do multiplication, but after the Effect refactor it looks like:

const expireAt = pipe(
  DateTime.unsafeNow(),
  DateTime.endOf("day"),
  DateTime.add({ days: 7 }),
  DateTime.toEpochMillis
);

Pretty slick, and I think this is fairly readable: take the current time, shift it to the end of the day, add 7 days, and convert to epoch milliseconds. This is one of the simpler corners of Effect's API surface, but it's nice.


Issues

The Streams documentation is okay, but it's oddly short on interoperability: how do you convert existing kinds of streams from other ecosystems, like Node.js streams and web streams, into Effect streams? This is a common theme in the documentation. See: Stream documentation doesn't mention interop · Issue #1254.

Anyway, the short story is that you can convert a Node.js Stream into an Effect Stream by treating it as an async iterable, because it is one of those. And you can convert streams into web streams using Stream.toReadableStreamEffect.
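
A minimal sketch of both directions, assuming a recent Effect version (the file path and error mapping are made up):

import { createReadStream } from "node:fs";
import { Stream } from "effect";

// A Node.js Readable is an async iterable, so it can feed an Effect Stream.
const fromNode = Stream.fromAsyncIterable(
  createReadStream("app.log"),
  (error) => new Error(`read failed: ${String(error)}`)
);

// And back out the other side: toReadableStreamEffect gives you an Effect
// that yields a web ReadableStream.
const toWeb = Stream.toReadableStreamEffect(fromNode);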

Effect notes: fn

TIL about Effect.fn

I started reading some of effect-solutions and learned about Effect.fn. This method helps with one of the core weirdnesses of Effect, that so much of the documentation doesn't really show you functions with arguments, which are a big part of programming. So a classic Effect pattern that I've been using is like this:

export function downloadFile(
  key: string,
  location: string
): Effect.Effect<
  void,
  NotFoundError | PlatformError | UnknownException,
  FileSystem.FileSystem
> {
  return Effect.gen(function* () {
    const fs = yield* FileSystem.FileSystem;
    let dest = localPath(key);
    yield* downloadAndFileToTemp(key);
    yield* fs.copyFile(dest, location);
  }).pipe(Effect.withSpan("downloadFile"));
}

That is, a function that takes arguments and then returns an Effect that uses those arguments from the function's scope. The Effect doesn't take arguments. This feels a little kludgy to me, and Effect.fn makes it simpler:

export const downloadFile = Effect.fn("downloadFile")(function* (
  key: string,
  location: string
) {
  const fs = yield* FileSystem.FileSystem;
  let dest = localPath(key);
  yield* downloadAndFileToTemp(key);
  yield* fs.copyFile(dest, location);
});

This is an improvement! Instead of manually tracing, it automatically traces, and I have one less function to declare.

There are downsides, though:

  • In the first example, I defined some explicit types for the return, requirements, and errors of the effect (the part that says void, NotFoundError…). There are no examples I can find of doing the same for Effect.fn, which has a totally different call signature. I opened an issue for this one.
  • If you want to pipe the function through some stuff, you need to provide a magic second argument to Effect.fn and use flow. Which is fine, but is another learning curve relative to the way that piping is taught everywhere else, using .pipe().

Unfortunately, this also led me into another battle with OpenTelemetry.

I really don't like OpenTelemetry

Boy, I don't like the OpenTelemetry SDK. Let me count the ways:

The scattered OpenTelemetry packages are the worst case of dependency hell. In our project, we have 14 OpenTelemetry dependencies (api, core, exporter-trace-otlp-proto, instrumentation-aws-sdk, instrumentation-express, instrumentation-http, instrumentation-ioredis, instrumentation-net, instrumentation-pg, resources, sdk-node, sdk-trace-base, sdk-trace-web, semantic-conventions). Great. The node_modules/@opentelemetry directory weighs 146MB. Not a great start, and some of this is because some modules ship their sources in triplicate - esm, esnext, and src: three copies of everything.

Then you've got the peer dependencies. Sentry relies heavily on OpenTelemetry now and wants to integrate tightly with it, so it has @sentry/opentelemetry, which has peerDependencies on four OpenTelemetry modules. And then @effect/opentelemetry does the same - it has peer dependencies on eight OpenTelemetry modules.

Then you've got the versioning scheme: OpenTelemetry's experimental modules are versioned 0.x, so they get stricter matching behavior: specifying @opentelemetry/sdk-logs: "^0.203.0" in your package.json does not permit upgrades to 0.208.0 because it's a 0.x package. This makes the peerDependency situation even worse because it makes it harder for two packages with peerDependencies onto sdk-logs to resolve to the same version.

And then, to be clear, the consequences of duplicate OpenTelemetry modules are not minor! If conflicting peerDependency and root requirements cause more than one copy of otel modules to be installed, you'll get separate, disconnected tracers, mucking up your whole observability situation.

Now: look at me, a big critic. Why don't I fix this stuff? How could this be better? I don't know exactly, but I wish that:

  • They stopped using 0 versions for so many OpenTelemetry APIs.
  • They stopped distributing packages in triplicate.
  • OpenTelemetry packages were more macro and less micro so there were fewer of them to have to wrangle.
  • NPM had better warnings and tooling for identifying when duplicate dependencies are installed.

And to be clear - I'm just talking about the SDK. The protocol itself is another matter, which David Cramer has written an extensive piece about.

Effect notes: flow and cancellation

Another day, another batch of refactoring Val Town internals to Effect. Today has been pretty good: haven't hit many bugs, and the docs have been helpful.

flow

I had a few methods in this codebase for which, in TypeScript before the refactor, we tolerated failure. If the Promise rejected, we'd send it to Sentry with Sentry.captureException and keep going. I found a way to do this by combining Effect.tapError (to run a side-effect on errors) and Effect.ignore (to swallow the error and avoid short-circuiting when it happens), but I wanted to combine them. My naive solution was to use pipe(), but pipe() expects a value as its first argument, so a pipe would look like

const add = x => x + 1;
const subtract = x => x - 1;
const piped = pipe(2, add, subtract);
// Same as
subtract(add(2))

But instead I wanted something that yielded a function. So flow does that:

const add = x => x + 1;
const subtract = x => x - 1;
const flowed = flow(add, subtract);
// Same as
x => subtract(add(x))

Works great! Here's how it looked in this specific case:

/**
 * If we have an operation that should be able to fail without short-circuiting,
 * but we want to log the error to Sentry, we use this.
 */
function tapAndIgnore(message: string) {
  // NOTE: flow instead of pipe here because pipe expects the first Effect
  // given to be something that produces a value.
  return flow(
    Effect.tapError((error) =>
      Effect.sync(() =>
        Sentry.captureException(new Error(message, { cause: error }))
      )
    ),
    Effect.ignore
  );
}

Then I can pipe an effect into this:

// Some Effect
.pipe(tapAndIgnore('Lockfile generation failed'))

Cancellation

There are a bunch of driving factors for my dive into Effect. For reference, we've been using neverthrow, and while it's been pretty great for business logic, it hasn't been able to help much with the messy internals of Val Town, which is where a lot of the complexity lies. Some of the stuff that drew me to Effect:

  • The generator syntax is nice and terse.
  • Platforms make working with Node APIs nicer.
  • Observability & scheduling are really nice to have at this level. Using the OpenTelemetry SDK directly is not fun.
  • It has an idea of cancellation.

The last one is pretty interesting: basically if you have a function in JavaScript like:

async function someTask() {
   const a = await getA();
   const b = await getB(a);
   const c = await getC(b);
   return c;
}

And you want to limit the amount of time this function has to run, there is no good way. p-timeout was my old solution, but that only throws an error and ignores the value of the function: the function still runs. AbortSignal is great and gets us a lot further, but an implementation with AbortSignal that allows this method to be fully interruptible would look like:

async function someTask(signal) {
   signal.throwIfAborted();
   const a = await getA();
   signal.throwIfAborted();
   const b = await getB(a);
   signal.throwIfAborted();
   const c = await getC(b);
   return c;
}

This is obviously verbose and not great. In contrast, Effects are interruptible by default, though you can mark them as not interruptible. The Effect version of this would be just

function someTask() {
   return Effect.gen(function* () {
      const a = yield* getA();
      const b = yield* getB(a);
      const c = yield* getC(b);
      return c;
   });
}

Pretty nifty. One of the big realizations I've had recently is that all async work should be bound both in parallelism and time. In other words: using Promise.all and running an arbitrary number of async tasks at the same time is bad by default because you'll eventually run out of some resource like ports or file descriptors, and most async operations like database queries or network requests should have a timeout by default. Otherwise you always eventually get bad behavior.

Effect plays into this pretty nicely: it makes logic cancellable and easy to bound with timeouts.
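
A small sketch of what that looks like with Effect (the URL list and the numbers are made up):

import { Effect } from "effect";

const fetchAll = (urls: string[]) =>
  Effect.all(
    urls.map((url) =>
      // Each request gets a deadline instead of running forever.
      Effect.tryPromise(() => fetch(url)).pipe(Effect.timeout("5 seconds"))
    ),
    // At most 10 requests in flight, instead of an unbounded Promise.all.
    { concurrency: 10 }
  );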


Thanks for reading! Previously in Effect devlogs:

Effect notes: runtimes and logging

I rewrote a method into Effect and now it needs context: how I use ManagedRuntime

So basically we had a method that ended up with the call signature:

getDenoLockfile(key: string, location: string):
	Effect<
		void,
		NotFoundError | PlatformError | UnknownException,
		FileSystem>

Which means that it uses the FileSystem platform for Effect. FileSystem is technically unstable, but it's pretty nice and I think a lot of code uses it. But this presents a challenge in tests: where my tests previously said something like

await getDenoLockfile("key0", dest);

Now they need to look like

await Effect.runPromise(getDenoLockfile("key0", dest).pipe(Effect.provide(NodeContext.layer)));

This is inelegant. (sidenote: No, I'm not using @effect/vitest yet because it's missing automatic fixtures and we heavily use them.)

So, how do you make this a little more idiomatic? A custom Runtime, which lets me define a layer in one place and then run Effects with that layer defined:

// At the top, before tests
const runtime = ManagedRuntime.make(NodeContext.layer);

// Now you can run with that runtime and provide NodeContext
await runtime.runPromise(getDenoLockfile("key0", dest));

This took a little while to figure out, but there is kind of an example in the docs - I just wish the docs didn't always start with abstract definitions and paid a little more attention to common use cases.

There's also the mental leap of merging layers here: I want both NodeContext and our NodeSdkLive (OpenTelemetry) layers, so this becomes:

export const runtime = ManagedRuntime.make(
  Layer.mergeAll(NodeContext.layer, NodeSdkLive)
);

Effect joys: logging

One thing I really like about Effect is that it does context really well. For example, I wrote a subtle bug and wanted to add some debugging to a function. Usually this means writing debugging code and then ripping it out because even if you use a nice logger (we've been using pino) and set the loglevel to debug, turning on debug logs shows debug logs everywhere. But with Effect, you can do something like this:

Effect.gen(function* () {
	yield* Effect.logDebug(`Lockfile existed`);
}).pipe(Logger.withMinimumLogLevel(LogLevel.Debug))

And then when you're done debugging, keep the logDebug statements but drop the .pipe(Logger.withMinimumLogLevel(LogLevel.Debug)) and that turns the logs off. It's pretty nifty: writing nice debug log messages no longer feels like throwaway effort, because I can keep them around and turn them on & off via configuration on a per-function basis.



Placemark Sunday

I got an hour or two of focus to garden Placemark, and got this done:

  • I fixed a pretty annoying bug in which 'transient' updates were added to undo history. In practice, this means stuff like drawing a feature: as you're drawing a rectangle, you click, drag, and resize the rectangle. The rectangle does exist in the application during that time, but you don't want those in-between states in undo history. You want one history state before the rectangle was added and one after. I fixed this, which was quick. Thanks to Luke Butler of Matrado for the assist!
  • The 'lasso' selection tool is now powered by Mapbox GL instead of Deck.gl. I switched to Deck.gl for some parts of Placemark as an optimization and mostly regretted it: it's a different rendering paradigm and has been pretty unstable.
  • It's now on React 19. This was not very fun: you upgrade one thing and you have to upgrade a bunch of other stuff. And React 19 hasn't really done anything for me recently: I like that refs are less special, but other than that most of the new stuff in React 19 is about server rendering, and that's irrelevant to Placemark. Anyway, this meant a lot of upgrades. It's more "modern." This also triggered an update to geojson-rewind and polyline.
  • I implemented support for GeoSPARQL-style WKT in betterknown and now Placemark has that version, so you can import that kind of data if you want to.

I've been thinking about Placemark in the following ways, priority-wise:

  1. Simplifying it to make things more maintainable.
  2. Maybe introducing new 'basic' features like sharing maps again, but architecting them in a way that still lets the app operate at near-zero cost to me.
  3. Eventually thinking about modularity. But this would require a lot of work and concentration, and is dangerously like an idea that sounds good in theory but is not good in practice.

My previous Placemark updates: switching to Vite, cleanup.
