
xgranade

@[email protected]

Sometimes I write intimate eschatologies or words about technology and math. Sometimes I make things by burning them with light or squeezing them through a small, hot tube. Sometimes I push water with a stick while sitting in a tiny boat.


@xgranade@wandering.shop avatar xgranade , to random

GOD FUCKING DAMN IT

I misclicked slightly while browsing GitHub and hit the Copilot button by accident. Now apparently I'm on the free plan for it, am already getting spam e-mail about Copilot, and who knows what in the fuck else.

Fucking dark patterns.

@xgranade@wandering.shop avatar xgranade , to random

After yesterday, I'm more convinced than ever that the "T" in TOML is an example of current governance issues, not only an unchallenged and unfortunate historical artifact. Given that, I want the fork of TOML that I opened a couple days ago to work.

In part, that's very selfish of me.

Arcalibre needs a config file format that meets some very specific requirements. While there are better formats than TOML, like KDL, TOML is the best fit out there for those specific requirements.

@jamie@zomglol.wtf avatar jamie , to random

If you use AI-generated code, you currently cannot claim copyright on it in the US. If you fail to disclose/disclaim exactly which parts were not written by a human, you forfeit your copyright claim on the entire codebase.

This means copyright notices and even licenses folks are putting on their vibe-coded GitHub repos are unenforceable. The AI-generated code, and possibly the whole project, becomes public domain.

Source: https://www.congress.gov/crs_external_products/LSB/PDF/LSB10922/LSB10922.8.pdf

Excerpt from the linked document: Three copyright registration denials highlighted by the Copyright Office illustrate that, in general, the office will not find human authorship where an AI program generates works in response to user prompts:

1. Zarya of the Dawn: A February 2023 decision that AI-generated illustrations for a graphic novel were not copyrightable, although the human-authored text of the novel and overall selection and arrangement of the images and text in the novel could be copyrighted.
2. Théâtre D’opéra Spatial: A September 2023 decision that an artwork generated by AI and then modified by the applicant could not be copyrighted, since the applicant failed to identify and disclaim the AI-generated portions of the work as required by the AI Guidance.
3. SURYAST: A December 2023 decision that an artwork generated by an AI system combining a “base image” (an original photo taken by the applicant) and a “style image” the applicant selected (Vincent van Gogh’s The Starry Night) could not be copyrighted, since the AI system was “responsible for determining how to interpolate [i.e., combine] the base and style images.”

xgranade ,
@xgranade@wandering.shop avatar

@SnoopJ @jamie @aeva That was my read as well. IANAL, but my lay understanding was that even if the courts eventually don't act favorably towards an argument, that it exists and has precedent is enough to create legal risk?

@xgranade@wandering.shop avatar xgranade , to random

Mentor chat on ffxiv tonight was all about installing Linux to get better frame rates and to get rid of Copilot.

(And no, I neither started nor participated in said chat, I was busy with a duty. Just kind of amazing to see that unfold.)

xgranade OP ,
@xgranade@wandering.shop avatar

@stopthatgirl7 I've just never seen median gamers this on board with Linux. It's kind of unbelievable, and yet.

@xgranade@wandering.shop avatar xgranade , to random

Hot take: looking for a single silver-bullet Discord replacement is solving the wrong problem. Corporate power has pushed us towards everything-apps, but it's OK for the tool you use to communicate with other users of an open source project to look different from the tool you use to text your spouse and the tool you use to run voice chats with your gaming guilds.

xgranade OP ,
@xgranade@wandering.shop avatar

That inevitably runs into pressure from folks who are, rightly or wrongly, averse to having more applications and services in their world. If you set up a family Discord "server," that may not require them to create a new account.

But I think there's ways to solve that without concentrating corporate power, and that those ways look more like federation than anything else. E.g. if your project forums run on Lemmy, folks can use their Lemmy accounts from other servers.

@thephd@pony.social avatar thephd , to random

Can't wait for the ridiculous discussion this one's going to generate.

https://thephd.dev/_vendor/future_cxx/papers/C%20-%20Type%20Punning%20is%20Real.html

xgranade ,
@xgranade@wandering.shop avatar

@thephd I will never accuse you of taking the coward's path.

@xgranade@wandering.shop avatar xgranade , to random

I think the other piece of this that comes to mind for me is that, by and large, software developers as a culture lack class consciousness.

If you're pulling down a mid six-figure tech salary, you're rich, and yet you have more in common with someone in homelessness-level crushing poverty than you do with Jeff Bezos. You're the kind of rich that can own a house, not the kind that national governments consider too big to fail.

https://hachyderm.io/@vie/116026351334832806

@xgranade@wandering.shop avatar xgranade , to random

OK, this might work.

>>> import icy_you
>>> icy_you.Locale("en-us").id.language
'en'

xgranade OP ,
@xgranade@wandering.shop avatar

As a test of rustc's dead code elimination running on ICU4X, using a tiny subset of ICU should result in a tiny binary, and while I'm not sure 0.6 MB is exactly tiny, it's a lot smaller than the whole package.

For comparison, ICU4C comes in at 38 MB.

xgranade OP ,
@xgranade@wandering.shop avatar

Some stuff, like converting to title case, is going to be more annoying in ICU4X than ICU4C, but still, it's very possible, I think.

xgranade OP ,
@xgranade@wandering.shop avatar

Alright, here we go! It's very basic right now, and has some things that are downright incorrect (e.g.: title casing incorrectly splits words), but it does show that at least in principle wrapping ICU4X and building with maturin works as a strategy.

https://codeberg.org/rereading/pyreading/pulls/7
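The word-splitting bug mentioned above is easy to reproduce in pure Python, where the built-in str.title() treats every non-letter, including apostrophes and hyphens, as a word boundary. This sketch uses only the standard library, not the icy_you wrapper, and the title_words helper is a hypothetical workaround, not anything from the linked PR:

```python
import re

# Naive title casing treats every non-letter as a word boundary,
# so apostrophes and hyphens split words incorrectly.
text = "they're the project's co-maintainers"
print(text.title())  # → They'Re The Project'S Co-Maintainers

def title_words(s: str) -> str:
    # Uppercase only the first alphabetic character after the start of
    # the string or a run of whitespace, leaving the rest untouched.
    return re.sub(r"(^|\s)(\w)", lambda m: m.group(1) + m.group(2).upper(), s)

print(title_words(text))  # → They're The Project's Co-maintainers
```

Proper title casing (as in ICU's casemap APIs) also handles locale-specific rules like the Turkish dotless i, which neither of these naive approaches attempts.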

@xgranade@wandering.shop avatar xgranade , to random

Since AGI is in the news again for some fucked up dipshit reason, let me again highlight the typical LessWrong-style bait and switch that underlies a huge contingent of AGI discourse: "X is doable in principle" becomes "X is doable by humans" becomes "X is doable by humans on a timeline within human reckoning" becomes "X is doable by a VC with Y money."

@xgranade@wandering.shop avatar xgranade , to random

This is not normal, but normal is how we got here.

xgranade OP ,
@xgranade@wandering.shop avatar

Every single abhorrent thing we see now has been enabled by institutionalists doing institutionalist things for decades, centuries even. That doesn't mean this shit is normal, not a huge escalation, or in any way OK.

It means that in whatever reconstruction comes next, we must recognize the normality that brought us here.

@xgranade@wandering.shop avatar xgranade , to random

Yesterday, I livetooted taking the Python Developers Survey. I have a lot of severe concerns with its approach to empiricism with respect to AI boosterism, and some of its questions are perplexing more generally.

https://wandering.shop/@xgranade/115969352198983472

@xgranade@wandering.shop avatar xgranade , to random

On one hand, I don't want to create Discourse again. On the other, the Python Developers Survey has me fuming enough that I want to livetoot it.

Again, I love the PSF, but I deeply worry about how cozy it is with AI, and wish that more folks in a position of influence to change things at the PSF would spend that power opposing AI. (I know some of y'all already do, and thank you from the bottom of my cold, dead heart.)

@xgranade@wandering.shop avatar xgranade , to random

Purity narratives are bad, but most of the complaints I personally see about "ideological purity" come down to being upset that someone has strongly held principles.

xgranade OP ,
@xgranade@wandering.shop avatar

Taking a stance on the basis of principles is important, whatever one thinks of "purity." We can't always stand on principle, I get it. Practicality has to win sometimes, but still. It's not the having and holding of principles that's the problem.

xgranade OP ,
@xgranade@wandering.shop avatar

I'm subtooting here, specifically a take that Wikipedia accepting AI money isn't a problem because that's then more money that Wikipedia has to go do good things that aren't AI, and that objecting to Wikipedia's actions amounts to a purity test.

I disagree, though. WP accepted that money in a way that establishes a customer/provider relationship, and principles absolutely have to come into that discussion.

xgranade OP ,
@xgranade@wandering.shop avatar

Is accepting money from customers trying to enclose and take ownership over all human knowledge consistent with the project of making human knowledge freely accessible? Does accepting a customer of that form create a financial incentive to work against Wikipedia's own immediately stated principles?

@xgranade@wandering.shop avatar xgranade , to random

RE: https://mstdn.social/@Grutjes/115907687819628134

If ATProto is as decentralized as Bluesky the company likes to claim, there should be absolutely no need for said company to sponsor ICE by providing them with social media infrastructure. If ICE wants to host a presence on the ATProto part of the social web, then they can stand up their own infrastructure to do so — they definitely have the budget, after all.

@xgranade@wandering.shop avatar xgranade , to random

I find myself increasingly uncharitable towards AI booster takes, approximately five to six years into this hype cycle, with a lot of harms demonstrated and no positives to show for it.

We just don't need to do this.

@xgranade@wandering.shop avatar xgranade , to random

I cannot believe (I can totally believe) I have to say this, but please don't resurrect the open-slopware list.

I firmly believe that a list of encumbered F/OSS projects should exist as a resource to help people avoid AI encumbrances in their own lives. I similarly firmly believe that AI boosters should be held accountable.

Those are not the same goal, though, and conflating them leads to disaster.

@xgranade@wandering.shop avatar xgranade , to random

A fun bit of trivia: to the best of my lay understanding, both Anthropic AI and Bluesky are PBLLCs incorporated in Delaware and under Delaware's legal definitions of the term "public benefit."

xgranade OP ,
@xgranade@wandering.shop avatar

Several more fun bits of trivia: one of the first investors in Anthropic AI was Sam Bankman-Fried; they currently work with Palantir on contracts for the US government; and they are rumored to be seeking investment from Qatar, the latter explicitly under the justification that "I think ‘No bad person should ever benefit from our success’ is a pretty difficult principle to run a business on."

xgranade OP ,
@xgranade@wandering.shop avatar

All of that is documented on Wikipedia.

But my point is that if your argument is that Bluesky is good because the legal infrastructure provided by public benefit LLCs enforces that they won't turn evil, then it behooves you to look at what Anthropic does under the exact same legal structure.

xgranade OP ,
@xgranade@wandering.shop avatar

None of that is guilt by association; none of that is "Bluesky is evil because Anthropic is evil because Anthropic works with evil companies."

It's noting that Delaware's laws about public benefit LLCs are totally insufficient to ensure that LLCs actually act in the public benefit. If you want to defend Bluesky as "billionaire-proof" or as being an ethical corporation, you've got a bit more to show than "is registered in Delaware."

xgranade OP ,
@xgranade@wandering.shop avatar

(As a complete side note, it is very irksome that a company that sells automated scabs to Palantir is able to colonize the word "anthropic" to use as its name.)

xgranade OP ,
@xgranade@wandering.shop avatar

An update to the above that saddens me to my core:

Well, fuck. I guess the PSF is willing to compromise their principles for $1.5 million after all.

https://pyfound.blogspot.com/2025/12/anthropic-invests-in-python.html?m=1

@xgranade@wandering.shop avatar xgranade , to random

In my previous career, merely suggesting that a conference or journal adopt a code of conduct would result in my getting screamed at over and over again by people more than a decade my senior in the field.

You know what fucking worked? Using what platform I had to name and shame every single conference in quantum computing that didn't have a code of conduct. I made fucking noise until shit changed.

@xgranade@wandering.shop avatar xgranade , to random

You know, it's weird having watched the meaning of "a few bad apples" completely invert within my lifetime.

@xgranade@wandering.shop avatar xgranade , to random

This is wild to me in juxtaposition to watching all of corporate tech decide that CSAM is OK, apparently, if it's Elon Musk making it.

The cognitive dissonance needed to both believe that what Musk is doing is OK, and that anyone making anything even adjacent to sex needs to be chased off the corporate internet is just... it's a lot.

https://www.404media.co/github-ban-suspension-adult-modding-games-illusion/

@xgranade@wandering.shop avatar xgranade , to random

RE: https://wandering.shop/@xgranade/115853222491234093

A thing I have come to wonder since moving to : why is this one hedge-fund manager so obsessed with kids' genitals, and why is that allowed to dominate all of the state's politics?

@xgranade@wandering.shop avatar xgranade , to random

Perhaps if, instead of an "open source industry," we had an "open source labor movement," we wouldn't be so vulnerable to exploitation by AI vendors.

Perhaps if we saw LLMs as scams, spam, and scabs, we wouldn't be so vulnerable to exploitation by AI vendors.

Perhaps it's not too late to come around on both of those, and save what we can of OSS.

@xgranade@wandering.shop avatar xgranade , to random

Three thoughts about Apple pushing "ChatGPT Health":

• This is why I consider the OS duopoly to be bad, why "just use macOS" is not a sufficient response to Microsoft's rapid descent into slopware.
• When OpenAI implodes, and it will implode in a terrifying way, creditors are gonna come looking for assets. That now includes health data. That's scary as fuck.
• OpenAI is founded on eugenics. Even before they implode, don't give eugenicists your health data.

https://9to5mac.com/2026/01/07/apple-health-integration-launches-in-new-chatgpt-health-feature/

@xgranade@wandering.shop avatar xgranade , to random

A truism I have observed: any debate against an AI booster can be effectively replaced by a series of links to decade-old dril tweets.

xgranade OP ,
@xgranade@wandering.shop avatar

I'm not the only one to have observed this, but the one dril tweet I keep coming back to is the "you don't actually have to hand it to them" line. About 80% of the things I say are more effectively and adroitly said by that one dril tweet.

@xgranade@wandering.shop avatar xgranade , to random

If it's a skill issue, what's the skill?

https://mas.to/@carnage4life/115832534415373032

xgranade OP ,
@xgranade@wandering.shop avatar

Like, this is a serious question. A skill isn't vibes; there has to be something that you can actually learn, complete with a theory that connects that skill to output. Maybe the theory behind that skill isn't as rigorous as a scientific theory, but if it's entirely absent, if you have no way of telling a real skill apart from vibes? Then yeah, you've just got vibes.

xgranade OP ,
@xgranade@wandering.shop avatar

For all the bloviating from the boosters about how everyone needs to "learn how to use AI," I have yet to see a single coherent explanation of what it is we're supposed to learn.

xgranade OP ,
@xgranade@wandering.shop avatar

Look, lots of folks love to compare AI to quantum computing, and I get it — the hype cycles are hyping. But you can learn how to do quantum computing! And we have actual mathematical theorems that prove that what you learn is correct!

I can, and do, critique the pedagogical methods often used in quantum computing, but that's quite aside from that there's something to engage in pedagogy about, which just isn't true for LLMs.

xgranade OP ,
@xgranade@wandering.shop avatar

The closest you get is that there are skills about how to build LLMs, but that's really not the same thing. There's actual theory there, math describing how to build LLMs, but none of that translates to whatever the entire fuck hell "prompt engineering" and "vibe coding" are.

xgranade OP ,
@xgranade@wandering.shop avatar

Two things can be, and in fact are, both true:

• AI is unethical whether it works or not.
• AI doesn't work.

@xgranade@wandering.shop avatar xgranade , to random

Ugh, what an annoying article.

• "Pioneer," starting off with colonialist analogies, no notes.
• Pretending over and over again that there's any path from LLMs to something that can meaningfully be said to be "alive," when that is very much not the case.
• Sucking the oxygen out of the room by posing pretend problems with pretend technologies and calling it "AI safety," suffocating out real ethical problems with AI.

https://www.theguardian.com/technology/2025/dec/30/ai-pull-plug-pioneer-technology-rights

xgranade OP ,
@xgranade@wandering.shop avatar

Fucking Musk, famous for torturing humans as near to him as his own fucking daughter!, is suddenly worried about "torturing" his company's grand theft autocomplete? And we're supposed to take that seriously as a moral argument?

Come the fuck on, people. Also what in the fuck is wrong with journalism that that kind of intellectual faffing about is treated as a serious news topic?

xgranade OP ,
@xgranade@wandering.shop avatar

I'm sorry. I know I shouldn't link to bad takes in order to dunk on them. It's a shitty thing to do and it poisons the narrative by giving that kind of shit more airtime.

But my fucking gods, it's so annoying, it's just breaking my brain. It's like all of society finally decided paid leave was a good thing, but only in the "of your senses" kind of way.

xgranade OP ,
@xgranade@wandering.shop avatar

I've talked to non-techy not-terminally-online extended family members who see this kind of article and think it's real, that there's something meaningful that's being discussed in articles like that. It's propaganda for a weird endtimes/eugenics cult, and is exactly the kind of thing that the "I'm going to turn into the Joker" meme is so good at describing.