
Hi, I'm Luis 👋

Latest updates from across the site

📌Pinned
Blog Post

How do I keep up with AI?

This question comes up a lot in conversations. The short answer? I don’t. There’s just too much happening, too fast, for anyone to stay on top of everything.

While I enjoy sharing links and recommendations, I realized that a blog post might be more helpful. It gives folks a single place they can bookmark, share, and come back to on their own time, rather than having to dig through message threads where things inevitably get lost.

That said, here are some sources I use to try and stay informed:

  • Newsletters are great for curated content. They highlight the top stories and help filter through the noise.
  • Blogs are often the primary sources behind those newsletters. They go deeper and often cover a broader set of topics that might not make it into curated roundups.
  • Podcasts serve a similar role. In some cases they provide curation like newsletters, and in others, deep dives like blogs. Best of all, you can tune in while on the go, making them a hands-free option.

For your convenience, if any of the sources (including podcasts) I list below have RSS feeds, I’ve included them in my AI Starter Pack, which you can download and import into your favorite RSS reader (as long as it supports OPML file imports).

If you have some sources to share, send me an e-mail. I'd love to keep adding to this list! If they have a feed I can subscribe to, even better.

Newsletters

Blogs

I pride myself on being able to track down an RSS feed on just about any website, even if it’s buried or not immediately visible. Unfortunately, I haven't found a feed URL for either OpenAI or Anthropic, which is annoying.

OpenAI and Anthropic, if you could do everyone a favor and drop a link, that would be great.

UPDATE: Thanks to @m2vh@mastodontech.de for sharing the OpenAI news feed.

I know I could use one of those web-page-to-RSS converters, but I'd much rather have an official link directly from the source.

Podcasts

Subscribing to feeds

Now that I’ve got you here...

Let’s talk about the best way to access all these feeds. My preferred and recommended approach is using a feed reader.

When subscribing to content on the open web, feed readers are your secret weapon.

RSS might seem like it’s dead (it’s not—yet). In fact, it’s the reason you often hear the phrase, “Wherever you get your podcasts.” But RSS goes beyond podcasts. It’s widely supported by blogs, newsletters, and even social platforms like the Fediverse (Mastodon, PeerTube, etc.) and Bluesky. It’s also how I’m able to compile my starter packs.

I've written more about RSS in Rediscovering the RSS Protocol, but the short version is this: when you build on open standards like RSS and OPML, you’re building on freedom. Freedom to use the tools that work best for you. Freedom to own your experience. And freedom to support a healthier, more independent web.

📌Pinned
Blog Post

Starter Packs with OPML and RSS

One of the things I like about Bluesky is the Starter Pack feature.

In a nutshell, a Starter Pack is a collection of feeds.

Bluesky users can:

  • Create starter packs
  • Share starter packs
  • Subscribe to starter packs

Unfortunately, Starter Packs are limited to Bluesky.

Or are they?

As mentioned, starter packs are a collection of feeds that others can create, share, and subscribe to.

Bluesky supports RSS, which means you can organize those feeds into an OPML file that you can share and that others can subscribe to. The benefit is that you can keep up with activity on Bluesky from the feed reader of your choice, without needing a Bluesky account.
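
For example, since Bluesky profiles expose RSS feeds, a Bluesky "starter pack" could be as simple as an OPML file along these lines (the handles below are placeholders; the feed URL pattern is the one Bluesky publishes for profiles):

    <opml version="2.0">
      <head>
        <title>Example Bluesky Starter Pack</title>
      </head>
      <body>
        <outline title="Alice" text="Alice" type="rss" xmlUrl="https://bsky.app/profile/alice.bsky.social/rss" />
        <outline title="Bob" text="Bob" type="rss" xmlUrl="https://bsky.app/profile/bob.bsky.social/rss" />
      </body>
    </opml>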

More importantly, because RSS and OPML are open standards, you're not limited to building starter packs for Bluesky. You can create, share, and subscribe to starter packs for any platform that supports RSS. That includes blogs, podcasts, forums, YouTube, Mastodon, etc. Manton seems to have something similar in mind as a means of building on open standards that make it easy for Micro.blog to interop with various platforms.

If you're interested in what that might look like in practice, check out my "starter packs" which you can subscribe to using your RSS reader of choice and the provided OPML files.

I'm still working on similar collections for Mastodon and Bluesky but the same concept applies.

Although these are just simple examples, they show the importance of building on open standards and the open web. Doing so gives creators and communities more freedom.

Here are other "starter packs" you might consider subscribing to.

If this is interesting to you, Feedland might be a project worth checking out.

📌Pinned
Note

OPML for website feeds

While thinking about implementing .well-known for RSS feeds on my site, I had another idea. Since that uses OPML anyway, I remembered recently doing something similar for my blogroll.

The concept is the same, except instead of making my blogroll discoverable, I'm doing it for my feeds. At the end of the day, a blogroll is a collection of feeds, so it should just work for my own feeds.

The implementation ended up being:

  1. Create an OPML file listing each of the feeds on my website.

     <opml version="2.0">
       <head>
         <title>Luis Quintanilla Feeds</title>
         <ownerId>https://www.luisquintanilla.me</ownerId>
       </head>
       <body>
         <outline title="Blog" text="Blog" type="rss" htmlUrl="/posts/1" xmlUrl="/blog.rss" />
         <outline title="Microblog" text="Microblog" type="rss" htmlUrl="/feed" xmlUrl="/microblog.rss" />
         <outline title="Responses" text="Responses" type="rss" htmlUrl="/feed/responses" xmlUrl="/responses.rss" />
         <outline title="Mastodon" text="Mastodon" type="rss" htmlUrl="/mastodon" xmlUrl="/mastodon.rss" />
         <outline title="Bluesky" text="Bluesky" type="rss" htmlUrl="/bluesky" xmlUrl="/bluesky.rss" />
         <outline title="YouTube" text="YouTube" type="rss" htmlUrl="/youtube" xmlUrl="/youtube.rss" />
       </body>
     </opml>
    
  2. Add a link tag to the head element of my website.

     <link rel="feeds" type="text/xml" title="Luis Quintanilla's Feeds" href="/feed/index.opml">
    
Blog Post

Favorite Super Bowl Commercials 2026

I didn't get a chance to watch the Super Bowl, but earlier today I caught up with the commercials. Here are my favorites.

Instacart

I like Ben Stiller's work, and the characters he plays in Heavyweights and Dodgeball are some of his funniest. That's why I couldn't stop laughing at the Instacart commercial.

Instacart Super Bowl Commercial

State Farm

Similarly, I like the work of Danny McBride and Keegan-Michael Key, so I found the State Farm commercial hilarious.

Stop Living on a Prayer State Farm Super Bowl Commercial

Squarespace

Emma Stone, IndieWeb spokesperson? I liked Squarespace's message to own your domain, identity, and content on the web. It's supposed to be funny, but it's also true and important.

A Message From Emma Stone Squarespace Super Bowl Commercial

Pepsi

This was a fun dig at Coca-Cola. I'm still team Coca-Cola, but this was funny.

The Choice Pepsi Super Bowl Commercial

Redfin & Rocket Mortgage

We all need a neighbor.

America Needs A Neighbor Like You Redfin Rocket Mortgage Super Bowl Commercial

Note
Reshare

Orchestrate teams of Claude Code sessions

Agent teams let you coordinate multiple Claude Code instances working together. One session acts as the team lead, coordinating work, assigning tasks, and synthesizing results. Teammates work independently, each in its own context window, and communicate directly with each other.

Unlike subagents, which run within a single session and can only report back to the main agent, you can also interact with individual teammates directly without going through the lead.

Reshare

Building a C compiler with a team of parallel Claudes

...I tasked 16 agents with writing a Rust-based C compiler, from scratch, capable of compiling the Linux kernel. Over nearly 2,000 Claude Code sessions and $20,000 in API costs, the agent team produced a 100,000-line compiler that can build Linux 6.9 on x86, ARM, and RISC-V.

The compiler is an interesting artifact on its own, but I focus here on what I learned about designing harnesses for long-running autonomous agent teams: how to write tests that keep agents on track without human oversight, how to structure work so multiple agents can make progress in parallel, and where this approach hits its ceiling.

Reshare

Everything in Git: Running a Trading Signal Platform on NixOS

Database configuration, workflow schedules, secrets, log shipping, observability—across every machine. No SSH sessions, no manual steps, no configuration (or documentation!) drift. Each server converges to whatever state is declared in our git repository.

As we're writing this, px dynamics is a few weeks old. We wanna take you through how we set up camp: The infrastructure took less time to build than it would take most startups (us included!) to properly configure an AWS account. We're not migrating from something else or "modernizing legacy systems." We started here.

Interesting writeup. I love NixOS exactly for the reasons highlighted in this post.

Reshare

Claude Opus 4.6

The new Claude Opus 4.6 improves on its predecessor’s coding skills. It plans more carefully, sustains agentic tasks for longer, can operate more reliably in larger codebases, and has better code review and debugging skills to catch its own mistakes. And, in a first for our Opus-class models, Opus 4.6 features a 1M token context window in beta.

Opus 4.6 can also apply its improved abilities to a range of everyday work tasks: running financial analyses, doing research, and using and creating documents, spreadsheets, and presentations. Within Cowork, where Claude can multitask autonomously, Opus 4.6 can put all these skills to work on your behalf.

Reshare

Introducing GPT-5.3-Codex

We’re introducing a new model that unlocks even more of what Codex can do: GPT‑5.3-Codex, the most capable agentic coding model to date. The model advances both the frontier coding performance of GPT‑5.2-Codex and the reasoning and professional knowledge capabilities of GPT‑5.2, together in one model, which is also 25% faster. This enables it to take on long-running tasks that involve research, tool use, and complex execution. Much like a colleague, you can steer and interact with GPT‑5.3-Codex while it’s working, without losing context.

GPT‑5.3‑Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations—our team was blown away by how much Codex was able to accelerate its own development.

With GPT‑5.3-Codex, Codex goes from an agent that can write and review code to an agent that can do nearly anything developers and professionals can do on a computer.

Reshare

BIG MAMA | Flying Lotus

Flying Lotus fans rejoice, today the famed electronic auteur announces BIG MAMA, a brand new EP coming out on 6th of March.

BIG MAMA captures Ellison in a moment of spontaneous, unbridled momentum. Densely packed with disparate sounds, rhythms, and effects, the EP delivers what he describes as “experimental, maximalist, hyperfast, electronic burst of energy”, packing seven dynamic tracks into a single continuous composition in which every bar is unique, containing no loops throughout.

“I wanted it to feel like being shot out of a cannon, just explosive, unpredictable energy,” he explains. “Like a fuckin’ computer gone awry. Like a machine that had just lost its mind.”

New FlyLo album. Can't wait!

Reshare

Voxtral transcribes at the speed of sound

Today, we're releasing Voxtral Transcribe 2, two next-generation speech-to-text models with state-of-the-art transcription quality, diarization, and ultra-low latency. The family includes Voxtral Mini Transcribe V2 for batch transcription and Voxtral Realtime for live applications. Voxtral Realtime is open-weights under the Apache 2.0 license.

Voxtral Mini Transcribe V2: State-of-the-art transcription with speaker diarization, context biasing, and word-level timestamps in 13 languages.

Voxtral Realtime: Purpose-built for live transcription with latency configurable down to sub-200ms, enabling voice agents and real-time applications.

Best-in-class efficiency: Industry-leading accuracy at a fraction of the cost, with Voxtral Mini Transcribe V2 achieving the lowest word error rate, at the lowest price point.

Open weights: Voxtral Realtime ships under Apache 2.0, deployable on edge for privacy-first applications.

Bookmark

Sara Weiss Journeys to the Planet Mars (1903)

A wide gulf exists in our imagination between the Spiritualism of the nineteenth century and the post–WWII era of UFOs and interplanetary visitors. One bears the gothic sheen of the Victorian era, with its séances and table rappings, its belief in a “Summerland” where the dead repose peacefully and happily, ready to reach out to us from beyond the grave and offer us solace and wisdom. The other is a world of futuristic chrome and light, of mysterious objects in the sky, of men from Mars and Venus stepping out of spacecraft to offer us friendship and untold technological advances.

...Sara Weiss’ book Journeys to the Planet Mars, first published in 1903, a strange bridge between these two eras. Weiss, a St. Louis housewife, was in her mid-fifties when she began to have otherworldly visions and gained the attention of the Spiritualist community. A 1901 St. Louis Dispatch article describes how Weiss had a series of visions during a convalescence, when “people in the spirit world” visited her and dictated a treatise on the planet Mars and its fauna. As part of these sessions, her guides helped her draw a series of images that depicted various flowers of Mars, which were reproduced in the Dispatch’s article...

Reply

Share RSS Feed

You can access my site at lqdev.me. If your reader does auto-discovery, it should automagically pick up the feeds. Alternatively, here's a list of my feeds which you can also import using OPML. On the site, I mostly write about open-source, Indieweb, and AI projects I'm tinkering with. You'll also find me sharing articles, videos, and music that interest me.

P.S. I'm slowly working my way through your Phantom Fluency post. Great work with Phantom Obligation. It resonated with me.

Note
Reshare

FOSDEM 2026 - AT: The Billion-Edge Open Social Graph

This talk by Alexander was great. It was effectively a tour of the new docs for the AT Protocol, which look amazing. I can't wait to dig into them once they're published and learn more about the protocol. I'm already thinking about things I'd like to implement with it. The callback to writing MapReduce jobs in Hadoop for large-scale social media analytics brought back so many memories.

Reply

Musk's Starlink updates privacy policy to allow consumer data to train AI

Starlink updated its Global Privacy Policy on January 15, according to the Starlink website. The policy includes new details stating that unless a user opts out, Starlink data may be used “to train our machine learning or artificial intelligence models” and could be shared with the company’s service providers and “third-party collaborators,” without providing further details.

🤮

Note

RSVPs working

Nice! I got RSVP posts working on my site. This works with Webmentions / IndieWeb


as well as ActivityPub

   {
      "@context": "https://www.w3.org/ns/activitystreams",
      "id": "https://lqdev.me/api/activitypub/activities/17ca544b8fb87ea6223e2f5b0cad040e",
      "type": "TentativeAccept",
      "actor": "https://lqdev.me/api/activitypub/actor",
      "published": "2026-01-31T23:43:00-05:00",
      "to": [
        "https://www.w3.org/ns/activitystreams#Public"
      ],
      "cc": [
        "https://lqdev.me/api/activitypub/followers"
      ],
      "object": "https://events.indieweb.org/2026/02/homebrew-website-club-pacific-MyM39P5egEsp",
      "inReplyTo": "https://events.indieweb.org/2026/02/homebrew-website-club-pacific-MyM39P5egEsp"
    }
Blog Post

Thoughts on the Social Web from FOSDEM 2026

I had the opportunity to attend FOSDEM 2026 virtually, and I spent almost all of my time in the Social Web track.

A few themes kept coming up across talks. Some were explicit, some were between the lines. Either way, they prompted a bunch of thoughts I wanted to capture.

DISCLAIMER: AI was used to help me organize and improve the flow of this post. Ideas and thoughts expressed are my own.

Hosting is hard

In Building a sustainable Italian Fediverse: overcoming technical, adoption and moderation challenges, there was a moment (not the main focus of the talk) where hosting came up in a way that really stuck with me. I’m paraphrasing, so apologies if I misrepresent anything, but the gist was:

  • Hosting Mastodon is hard, so we simplify with hosting services like Masto.Host
  • Hosting PixelFed and PeerTube is easier thanks to appliances like YunoHost

Based on my own experience, that rings true, with some nuance.

Getting Mastodon running isn’t actually the hardest part. The self-hosting docs are good enough in my opinion, and that’s how I originally stood up my instance at toot.lqdev.tech. I even maintain guides for cleanup and upgrades that largely mirror the official Mastodon documentation and release notes.

The harder part is everything after provisioning.

Mastodon (especially with federation enabled) can be resource-intensive, and that cost shows up fast even on a single-user instance. If I’m not staying on top of maintenance, the disk fills up. Every few weeks, my instance goes down because I’ve run out of storage. Add database migrations, which can be error-prone, and you end up with a setup that’s straightforward to launch but expensive to operate. You pay in money for a big enough server, and you pay in time for ongoing maintenance.

I still want to participate in the Fediverse, but I don’t want to keep paying the maintenance tax for Mastodon. That’s one of the reasons I implemented ActivityPub on my static site instead.

On the PixelFed side, I did try to self-host it once, and I couldn’t get it working cleanly from scratch. Some of that is on me (I’m not familiar with PHP), but either way, YunoHost was a lifesaver. With YunoHost, I had PixelFed up and running quickly, and what that ecosystem provides is genuinely impressive.

That said, I also learned the “operations” lesson there too. During an upgrade, something went wrong with the database, it got corrupted, and I couldn’t restore from backup. I ultimately took the instance down. I’m willing to attribute that to user error, but it still reinforces the bigger point.

The promise of federation and decentralization is that you can stand up your own node for yourself, your family, a school, a company, a city, even a government. In practice, that’s still too hard for most people unless they use appliances like YunoHost or managed hosting like Masto.Host.

And yes, those options mean giving up some control. But even with that tradeoff, I’d argue it’s still better than centralized platforms.

As someone fairly technical and a little extreme about owning the whole stack (I implemented my own static site generator, Webmentions service, and now ActivityPub), I still find this hard. I can’t imagine how unapproachable it feels if you’re not technical. I just wish it were simpler and more cost-effective to run these services without needing either deep system administration knowledge or active ongoing maintenance.

One identity, many post types

In the talk, How to level up the Fediverse, Christine and Jessica talked about ActivityPub implementations and touched on something that really resonated with me.

The idea (again, paraphrasing) was that splitting content types by app (video goes to PeerTube, images go to PixelFed, microblogging goes to Mastodon) might not be the right long-term model. Instead, they suggested something closer to one place to publish and follow people, with rich post types handled in one identity and one experience.

That immediately made me think about Tumblr.

When I first heard Tumblr was planning to implement ActivityPub, I was excited because Tumblr is already “that kind of app.” You can publish videos, photos, polls, longer posts, and everything in between, all in one place. There was also talk about moving Tumblr to WordPress, which (in theory) could make ActivityPub integration even more powerful. But as of now, Tumblr’s ActivityPub work seems to be paused.

The more I think about it, the more this model makes sense, especially because the most important part isn’t the “single app.” It’s the single identity.

You should have one account where your content originates. Then people can consume it from different experiences. Maybe that is a video-focused client, maybe it is an image-first view, maybe it is a Mastodon-like timeline. The key is that you do not need separate accounts everywhere.

That’s essentially how I think about my website.

My site is my digital home and my identity. I post different content types which align with IndieWeb post types:

  • Articles
  • Notes
  • Responses (reposts, replies, likes)
  • Bookmarks
  • Media (photos and videos)
  • RSVPs

People can follow via RSS. And more recently, I implemented my own ActivityPub support so my posts generate native ActivityPub activities. That means Mastodon and other clients can follow and interact with my site directly.

What I like about this is that it decouples publishing from consumption.

I choose where I publish (my site). Others choose how they consume (their client). The protocols handle the translation.

The web is already social and decentralized

In Social Web conversations, sometimes the tone implies the "social web" is separate from "the web".

I don't really buy that.

The web is social because people are on it. People use it to learn, create, find community, do commerce, argue, collaborate, share memes, and everything else. The web is also decentralized by default. That's the baseline architecture.

Dave Winer recently wrote about software being "of the web". Software that's built to share data, accept input, produce output, and let users move their data. Not locked into silos.

This is why I'm so bullish on a different architectural approach: start as a website, add social capabilities as components.

People are already using WordPress, Ghost, and Micro.blog to build sites. With an ActivityPub plugin, your existing web presence becomes followable and interactive in the Fediverse. The site remains a site. It just gets socially interoperable.

Bridgy Fed reinforces this. It takes what already exists on the web and helps it participate in social protocols, without forcing you to rebuild as a native social app first.

That's also my own setup. My website worked as a publishing platform and people could follow via RSS. When I implemented ActivityPub, it became progressively enhanced. Same posts, new social vocabulary. I didn't have to abandon my site. I just made it speak the social language.

Modular and extensible feels like the right direction

This is the architectural vision I took away from Bonfire: Building Modular, Consentful, and Federated Social Networks.

The "opt-in pieces" approach is about choosing which parts you want, evolving your experience based on what you enable. It echoes small pieces loosely joined. It's a practical model for a federated future:

  • Start with the basic web
  • Add social capabilities as components
  • Get progressively more powerful as you opt in

Your site still works normally. When you speak the lingua franca of protocols like ActivityPub, you can express social intent in a way other systems understand.

So it's not "the web vs the social web." It's the web, with richer native social vocabulary.

Conclusion

This probably reads like I’m nitpicking, but I’m genuinely bullish on federated and decentralized networks. That’s why I’m still participating.

What stood out to me at FOSDEM this year is momentum. Last year, the Social Web track was a half day. This year, it expanded to a full day. That signals to me that there are a lot of smart, passionate people working across protocol design, UX, moderation, policy, community, activism, and implementation, trying to build real alternatives to entrenched silos.

And the plurality of implementations is a strength. It encourages exploration, competition, and innovation.

My hope is that the “end state” isn’t a separate social web you have to join. It’s a web that continues to work as expected, but gets progressively enhanced when you opt into interoperable social protocols.

Ultimately, there isn’t “the web” and “the social web.” There's just the web, and social vocabularies that participants can adopt without thinking about it.

Reshare

How AI assistance impacts the formation of coding skills

We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.

Importantly, using AI assistance didn’t guarantee a lower score. How someone used AI influenced how much information they retained. The participants who showed stronger mastery used AI assistance not just to produce code but to build comprehension while doing so—whether by asking follow-up questions, requesting explanations, or posing conceptual questions while coding independently.

Note

FOSDEM 2026 starts tomorrow

I almost forgot tomorrow is the start of FOSDEM.

There are so many good sessions, and I've already added talks from several tracks to my calendar.

I know I won't get to all of them, but I'm planning on being at Maho's Decentralised Badges with BadgeFed: Implementing ActivityPub-based Credentials for Non Profits talk.

POSSE content with Drupal using Nostr by Sebastian Hagens looks very interesting as well. Mainly because I'm thinking of doing something similar for my site.

Reshare

Open Coding Agents: Fast, accessible coding agents that adapt to any repo

Today we’re releasing not just a collection of strong open coding models, but a training method that makes building your own coding agent for any codebase – for example, your personal codebase or an internal codebase at your organization – remarkably accessible for tasks including code generation, code review, debugging, maintenance, and code explanation.

Reshare

Phantom Obligation

Why do RSS readers look like email clients?

That is a REALLY GOOD question.

The screen says you have 847 unread articles. The screen, through its visual language, implies that this is a problem to be solved, a debt to be paid down.

But what's actually true? Some people wrote things. They put those things on the internet. You expressed interest in being notified. That's it. No one is waiting. No one will know or care if you declare bankruptcy and mark all as read.

The guilt is a ghost. It haunts a house that no one lives in.

The guilt is real. For me it's FOMO. The feeling like that next unread article is going to be the one. The one for what? I don't even know, but I'm irrationally certain of it.

Podcasts borrowed the queue from music players. But nobody ever felt guilty about unplayed albums. "I haven't listened to all my records" isn't a confession. Podcast apps added unplayed counts, progress bars, completion stats. Your listening became a task list.


Send help

An interface that shows you an unread count is making an argument: that reading is something to be counted, that progress is something to be measured, that your relationship to this content is one of obligation.

We should be more conscious of which arguments we're immersing ourselves in, hour after hour, day after day.

Blog Post

Generating Book Covers Using AI

Project Gutenberg is such a gem. There are so many books that an entire lifetime wouldn't be enough to read them all. The past few days, I've been downloading books from there and transferring them to my E-Reader.

One of the things I've noticed is that some books don't have cover images, or if they do, the covers are relatively simple.

Take for example Missing Link by Frank Herbert. This is what the current book cover looks like:

Missing Link Project Gutenberg Simple Book Cover

While this is fine, I wanted something more eye-catching to display on my E-Reader. So I decided to turn to AI for help.

Using ChatGPT's image generation capabilities, I provided the title, author, and description (taken from the book's page on Project Gutenberg) along with the following prompt:

Can you generate a book cover image that includes the title, author name, and Project Gutenberg, and visually captures the themes of the story in a way that draws readers in?

This was the result:

Missing Link Project Gutenberg AI Book Cover

I thought it was good, but the style was too modern. So I asked ChatGPT to refine the prompt and optimize it for image generation while taking into account the design aesthetic of the time (1950s).

This is what the optimized prompt looks like:

Generate a 1950s pulp science-fiction book cover for “Missing Link” by Frank Herbert, in the style of classic Astounding Science Fiction magazine covers. The illustration should be hand-painted and illustrative rather than photorealistic, with bold, slightly exaggerated forms and dramatic lighting.

The scene depicts first contact on an alien jungle world: a human explorer in a mid-century space suit negotiating tensely with a reptilian alien holding advanced human technology salvaged from a crashed ship. The jungle should feel dense and exotic, with oversized alien foliage and a wrecked spacecraft partially visible.

Use a limited, high-contrast color palette typical of 1950s pulp covers (vivid greens, oranges, yellows, deep blues). Typography should be bold, blocky, and vintage, with the title prominently at the top, the author’s name below it, and Project Gutenberg at the bottom.

The overall tone should feel dramatic, mysterious, and slightly ominous, evoking Cold War–era anxieties, exploration, and the unknown — clearly recognizable as a mid-20th-century science fiction paperback cover.

The generated image from that prompt looks like this:

Missing Link Project Gutenberg AI Book Cover

Much better. I'm sure I could keep refining the prompt and image endlessly, but this is good enough for now.

I have many other books that currently have simple covers. Given that I've validated the workflow, I think I'll have Copilot write a script to automate the image generation for the rest. Also, I don't know what Project Gutenberg's stance is on the use of AI, but if they're open to it, I'd be happy to donate the generated book covers.
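
To give a sense of what that script might look like, here's a rough, untested sketch in F#. It calls OpenAI's public Images API directly; the model name, size, and response fields follow the DALL-E 3 endpoint and may differ from whatever ChatGPT uses under the hood, so treat those details as placeholders:

    open System
    open System.IO
    open System.Net.Http
    open System.Net.Http.Json
    open System.Text.Json

    // Book metadata; in practice this would come from the Project Gutenberg book pages
    type Book = { Title: string; Author: string; Description: string }

    let client = new HttpClient(BaseAddress = Uri("https://api.openai.com/"))
    client.DefaultRequestHeaders.Authorization <-
        Headers.AuthenticationHeaderValue("Bearer", Environment.GetEnvironmentVariable("OPENAI_API_KEY"))

    let generateCover (book: Book) =
        task {
            let prompt =
                $"Generate a 1950s pulp science-fiction book cover for \"{book.Title}\" by {book.Author}. " +
                $"Include the title, author name, and Project Gutenberg, and capture these themes: {book.Description}"

            // Request shape follows OpenAI's public Images API (DALL-E 3); adjust for the model you actually use
            let payload =
                {| model = "dall-e-3"
                   prompt = prompt
                   size = "1024x1792"
                   response_format = "b64_json" |}

            let! response = client.PostAsJsonAsync("v1/images/generations", payload)
            let! body = response.Content.ReadAsStringAsync()

            // Decode the base64-encoded image and save it as the cover
            let b64 =
                JsonDocument.Parse(body).RootElement
                    .GetProperty("data").[0].GetProperty("b64_json").GetString()
            do! File.WriteAllBytesAsync($"{book.Title}.png", Convert.FromBase64String b64)
        }

    (generateCover { Title = "Missing Link"
                     Author = "Frank Herbert"
                     Description = "First contact on an alien jungle world" }).Wait()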

If I end up getting to the image generation scripts, I'll post more about that. Stay tuned!

Blog Post

The Dumbphone Chronicles: Introduction

I recently watched Henry from Techlore's response to the Wired article, Dumbphone Owners Have Lost Their Minds.

At 35 minutes, the response is thorough and nuanced.

That video prompted me to write about my own experience using a dumbphone as my daily driver from about 2021-2024.

Originally, I had intended for this to be a single blog post. Then I started writing. As I wrote out each of the points and sections I had planned, related thoughts kept surfacing. So I kept writing. For some sections, the sub-sections alone could be their own posts. Rather than trying to condense a three-to-four-year journey into a single blog post or torture you with a wall of text, I decided to turn it into a series.

This post serves as the introduction to the series; in subsequent posts, I'll explore different parts of the journey. I want to give the ideas and each of those sections room to breathe and stand on their own.

Some of the things I plan to talk about during the series include:

  • How the journey started
  • What my setup looked like
  • What I enjoyed about using a dumbphone
  • What was hard to do with a dumbphone
  • Working around the hard parts
  • Pleasant surprises / unintended consequences of using a dumbphone
  • The role of community
  • My journey back to a smartphone

As happened when I originally started writing this post, I'm sure more topics and ideas will surface. This series isn't meant to be read in chronological order, since I don't remember exactly when certain things happened and can only recall general timelines.

My goal for this series is to reflect on my unique perspective and personal experience. In doing so, hopefully I provide a glimpse into how I discovered alternative ways of doing things without a smartphone. Also, I think it's a nice way for me to pad the number of posts on my site. I haven't done a lot of long-form writing on the website in some time. It's something I've been wanting to do for a while and this is a great opportunity.

Despite “dumbphone” being in the name, this series is about much more than a phone. The device itself faded into the background fairly quickly. Removing the smartphone introduced enough friction to slow me down and make me question assumptions I hadn’t noticed before. It created space to experiment, to question defaults, and to discover different ways of engaging with both the digital and physical world. The dumbphone was simply the constraint that made those explorations possible. This series is less about a device and more about the unexpected paths that opened up once I gave myself that space.

In the posts that follow, I’ll explore how this experience shaped my thinking and perspectives along the way. I’ll see you in the next post.

Reshare

Kimi K2.5: Visual Agentic Intelligence

Today, we are introducing Kimi K2.5, the most powerful open-source model to date.

Kimi K2.5 builds on Kimi K2 with continued pretraining over approximately 15T mixed visual and text tokens. Built as a native multimodal model, K2.5 delivers state-of-the-art coding and vision capabilities and a self-directed agent swarm paradigm.

For complex tasks, Kimi K2.5 can self-direct an agent swarm with up to 100 sub-agents, executing parallel workflows across up to 1,500 tool calls. Compared with a single-agent setup, this reduces execution time by up to 4.5x. The agent swarm is automatically created and orchestrated by Kimi K2.5 without any predefined subagents or workflow.

Blog Post

Webmentions are back thanks to GitHub Copilot

Webmentions are working again on this site.

Earlier today I deployed a new version of my webmention service to Azure and brought my endpoint back online.

A few months ago, I moved my website from Azure Storage static website hosting to Azure Static Web Apps for cost reasons. As part of that migration, I also removed Azure Front Door, which mapped my custom domain and provided certificates for my webmention endpoint. As expected, this broke my webmention endpoint, and since it wasn't high on my priority list, it stayed broken for months.

Yesterday I decided to bring the webmention endpoint back online, partly because I want to receive webmentions again, but also because my website now implements parts of the ActivityPub protocol. That means my website is effectively a node in the Fediverse. At the moment it only handles Follow activities and delivers my posts to my followers. Ultimately, I'd like to do something similar to Bridgy Fed: take any interactions on my Fediverse posts (Boost, Like, Bookmark, Reply) and map them to webmentions so I'm notified when someone engages with my content on Fediverse platforms like Mastodon. To do that, I needed my webmention endpoint back online.
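
To make that concrete: when someone on Mastodon likes or boosts one of my posts, my server receives an ActivityPub activity whose object is the post's URL. Notifying my webmention endpoint would then just be a standard Webmention request, a form-encoded POST with source and target parameters, roughly like this (the URLs and endpoint path are illustrative):

    POST /api/webmention HTTP/1.1
    Host: lqdev.me
    Content-Type: application/x-www-form-urlencoded

    source=https://mastodon.social/@someone/1234567890&target=https://www.lqdev.me/feed/example-post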

In this post, I'll give a brief overview of my solution, what's new, and talk about how I got my endpoint working again with the help of AI.

Solution overview

My webmention endpoint is hosted on Azure.

More specifically, it's an Azure Functions project made up of two functions:

  • Receive Webmentions - When someone sends a webmention to the endpoint, it uses the WebmentionFs library to perform validation per the Webmentions W3C spec and stores the webmention in Azure Table Storage. The endpoint accepts like, repost, reply, and bookmark webmentions.
  • Webmention to RSS - Generates an RSS file every day at 3 AM UTC with the latest webmentions stored in Azure Table Storage. This one's not required per the Webmention spec, but it's the solution I came up with for consuming and getting notifications about the webmentions I receive.
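
To give a rough idea of the first function's shape, here's a simplified, illustrative sketch of an isolated-worker HTTP trigger that accepts the form-encoded source and target; the actual validation via WebmentionFs and the Azure Table Storage write are omitted:

    open System.IO
    open System.Net
    open Microsoft.Azure.Functions.Worker
    open Microsoft.Azure.Functions.Worker.Http

    type ReceiveWebmention() =
        [<Function("ReceiveWebmention")>]
        member _.Run([<HttpTrigger(AuthorizationLevel.Anonymous, "post")>] req: HttpRequestData) =
            task {
                // A webmention arrives as a form-encoded POST with source and target URLs
                use reader = new StreamReader(req.Body)
                let! body = reader.ReadToEndAsync()
                let form = System.Web.HttpUtility.ParseQueryString(body)
                let source, target = form.["source"], form.["target"]

                // Validation (WebmentionFs) and persistence (Azure Table Storage) omitted in this sketch
                let response = req.CreateResponse(HttpStatusCode.Accepted)
                do! response.WriteStringAsync($"Webmention from {source} to {target} accepted for processing")
                return response
            }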

You can find the solution in the lqdev/WebmentionService repo on GitHub.

For more details about the solution, you can read my post Accepting Webmentions using F#, Azure Functions, and RSS.

Although there have been some changes, which I'll talk more about below, the overall solution has remained largely the same.

What's new

It had been a while since I last updated the solution, so I took some time to upgrade it. Here are some of the highlights:

  • Upgrade to .NET 10 - Since I originally deployed my endpoint back in 2022, it remained largely untouched. In fact, I don't think I ever updated or redeployed it. Even if I did, it happened at most once or twice in the past 3 years. I know my service kept working though because I had webmentions through August / September 2025 which I find amazing. The original solution was running .NET 6 which is no longer supported. My solution is running .NET 10 now.
  • Standard dependency injection - I don't remember if I was running the isolated worker model or in-process for my original solution. Even if I was running isolated worker, I wasn't using standard dependency injection which is a new change. Now my solution looks like any other ASP.NET Web API which is nice.
  • Flex Consumption Plan - My original solution was using the Consumption Plan. Azure Functions plans to retire that in 2028, so I was proactive about it and just migrated to the Flex Consumption Plan as recommended. Since my service is low-traffic, I should be able to use the Free Tier for a total cost of $0. Also worth mentioning, a more immediate reason for migrating to the Flex Consumption Plan is that .NET 10 on Linux is only supported on that plan. So by upgrading to .NET 10, I was also forced to switch to Flex Consumption Plan.
  • Custom domain and Azure certificate - While not exactly new because I was using custom domains before, I'm no longer using Azure Front Door. Azure Front Door was good but way too much for what I needed both in terms of features and cost. Now I'm just pointing my DNS configuration to my Azure Functions endpoint and I'm using a certificate from Azure.
  • GitHub Actions CI/CD Deployment Workflow - Deployment was manual. I took the opportunity to add a new GitHub Actions workflow to deploy the latest versions of my solution to Azure whenever there's a merge into the main branch.
  • Documentation - I made the mistake the first time of not documenting my process and solution, other than the blog post. As a result, I wasn't sure where to get started to maintain it. That's why I rarely touched it in 3+ years. This time, I made sure to add extensive documentation and a workflow for project management, architectural decision records, changelogs, etc.
  • AGENTS.md / Copilot Instructions - As I'll mention in the next section, this upgrade / migration was entirely done using AI. To help guide the AI coding assistants, I added AGENTS.md and Copilot Instructions.
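
To give a sense of what the deploy-on-merge workflow mentioned above looks like, here's an illustrative sketch; the action versions, project path, and app name are placeholders rather than the exact workflow in the repo:

    name: Deploy Webmention Service

    on:
      push:
        branches: [ main ]

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4

          - uses: actions/setup-dotnet@v4
            with:
              dotnet-version: '10.0.x'

          # Publish the Azure Functions project
          - run: dotnet publish src/WebmentionService --configuration Release --output ./publish

          # Log in to Azure and deploy the published output to the Function App
          - uses: azure/login@v2
            with:
              creds: ${{ secrets.AZURE_CREDENTIALS }}

          - uses: Azure/functions-action@v1
            with:
              app-name: webmention-service
              package: ./publish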

For a full list of changes, check out the changelog.

AI Coding Workflow

To perform the migration, I didn't write any of the code, provision any of the Azure resources, or perform the deployments myself. Everything was done by AI, using GitHub Copilot with Claude Opus 4.5 and Claude Sonnet 4.5 as the models, along with the Azure CLI, the GitHub CLI, and the Microsoft Learn and Perplexity MCP servers.

Planning

My original goal for this project was simple.

  1. Upgrade to .NET 10
  2. Deploy the upgraded version of my app to Azure

Those goals, plus additional instructions for understanding the solution and context like the blog post detailing it, were effectively the prompt I provided to GitHub Copilot.

For the planning phase, I used Claude Opus 4.5 as the model and also gave it access to the Azure CLI and the Microsoft Learn and Perplexity MCP servers.

With that, GitHub Copilot set off to:

  1. Use the blog post to understand motivations and original technical design decisions.
  2. Inspect the code to understand the structure of the project.
  3. Execute Azure CLI commands to get Azure resource and deployment details.
  4. Look up information using Perplexity and Microsoft Learn MCP servers about what needed to be done to upgrade my project to .NET 10 and how to do it.

The result was two artifacts: an architectural decision record (ADR) and a project plan.

These documents served as the guide for the rest of the migration. As deviations or unexpected roadblocks came up, these documents were updated to reflect those changes.

Inner Loop

Once I had the ADR and project plans in place, it was time to start the migration.

This was entirely done by GitHub Copilot. Since the plans were effectively implementation ready, I switched from Claude Opus 4.5 to Claude Sonnet 4.5. This kept my cost down while still performing effectively on the coding tasks. Throughout the implementation, if it needed to look up information, it used the Perplexity and Microsoft Learn MCP servers.

As mentioned earlier, if there were roadblocks or deviations from the original plan, once it validated and arrived at a solution, I asked it to update the plan and sometimes the ADR to reflect these changes.

Outer Loop

Once code was complete, version control, resource provisioning, deployment, and monitoring were done primarily by having GitHub Copilot execute GitHub CLI and Azure CLI commands. Again, if it needed additional information, it used Perplexity or Microsoft Learn as references.

As expected, there was some iteration after deployment. Using the GitHub CLI, I was able to diagnose and debug GitHub Actions issues, and using the Azure CLI, I was able to diagnose issues in the deployed solution.

Once issues were diagnosed, it was back to either the planning or coding phase until the solution was working successfully.

Conclusion

Overall, I'm happy not only to have my webmention endpoint back online, but to have it back better than ever with all the upgrades. More importantly, it establishes a foundation for me to continue building on as I integrate it with my ActivityPub implementation. I know at some point I'll want to add AT Protocol and Nostr. Nostr in particular feels like a relatively easy addition now that I've established a pattern with my ActivityPub implementation, so I just know I'll end up doing it at some point.

If you're looking to build your own webmentions endpoint, hopefully this can serve as a reference implementation. If you find it useful, send me a webmention and let me know!

Note

Week of January 25, 2026 - Post Summary

Bookmark

The RSS Review

The RSS Review is just a little corner of the web trying to raise awareness about RSS and RSS feeds. By using RSS we, the people of the internet, do not have to wait for news, blog posts, podcasts, and articles to surface to us from search engines and social media. With RSS, you do not have to subscribe to news via email newsletters that will clutter your inbox.

Better still, by using a RSS feed reader you can get all of your preferred content from a single location. Many feed readers will let you skim content, save to read later, and provide methods to share content quickly (if you choose to do so).

RSS is a way to begin taking the web back from big advertising who only wants to serve up ads based on your personal data. It is a way to get the content you want to read, when you want to read it.

Reshare

Rent-Only Copyright Culture Makes Us All Worse Off

In the Netflix/Spotify/Amazon era, many of us access copyrighted works purely in digital form – and that means we rarely have the chance to buy them. Instead, we are stuck renting them, subject to all kinds of terms and conditions.

Our access to culture, from hit songs to obscure indie films, are mediated by the whims of major corporations. With physical media the first sale principle built bustling second hand markets, community swaps, and libraries—places where culture can be shared and celebrated, while making it more affordable for everyone.

And while these new subscription or rental services have an appealing upfront cost, it comes with a lot more precarity. If you love rewatching a show, you may be chasing it between services or find it is suddenly unavailable on any platform.

Bookmark

JordanMarr/Agent.NET: A composable AI agent framework for .NET.

Agent.NET is an F#‑native authoring layer built on top of the Microsoft Agent Framework. MAF provides a powerful, low‑level foundation for building agent systems — durable state, orchestration primitives, tool execution, and a flexible runtime model.

Agent.NET builds on those capabilities with a higher‑level, ergonomic workflow DSL designed for clarity, composability, and developer experience. Where MAF offers the essential building blocks, Agent.NET provides the expressive authoring model that makes agent workflows feel natural to write and reason about.

This is a cool project! Workflows as they were intended. The composition here looks more natural.

Reshare

The Computational Web and the Old AI Switcharoo

We are halfway into Web 3.0. The Computational Web has tossed a lot of hefty promises into that Trojan Horse we call AI—ending world hunger, poverty, and global warming just to name a few. But this is for all the marbles. Promises of utopia are not enough. They must scared the shit out of us, too, by implying that AI in the wrong hands can bring about a literal apocalypse.

So, how will Web 3.0 end?

On New Year's, I made a couple of silly tech predictions for 2026 (because it's fun guessing what our tech overlords will do to become a literal Prometheus).  The first prediction is innocuous enough. I think personal website URLs will become a status symbol on social media bios for mainstream content creators. Linktrees are out. Funky blogs with GIFs and neon typography are in. God, how I hope this one happens. Not even for nostalgia. In the hyper-scaled, for-profit web, personal websites are an act of defiance. It's subversive. It's punk.

My second 2026 prediction, something, for the record, I do not hope happens, is an attack on local computing.

A future with more ownership of our digital spaces and devices is one I can get behind.

Reshare

Turrialba Volcano and the Infrastructure of Everything, Part I

Solving for cancer, defending human rights, and resisting authoritarian violence are urgent human imperatives. They all sit downstream of biodiversity. Without living systems that grow food, regulate climate, clean water, and absorb shocks, moral victories do not endure. Rights, laws, and institutions depend on biological stability that rarely receives attention.

That urgency is easy to miss because attention is constantly being redirected. Each year brings a new existential panic. This year, it’s artificial intelligence. Last year was social media. The year before that, culture wars. Remember Y2K?