





A CLA can never override the code license. It governs how your contribution becomes part of their codebase and what they may do with it. But once code is published as AGPL, you or anyone else can fork it and keep working with it under the AGPL anyway. The CLA may allow them to relicense future versions, but the already-published AGPL code remains published and usable under the AGPL.
I’m usually fine with contributing under a CLA. A CLA often makes sense, because the alternative is a hassle and a lock-in to the current constructs, which can have its own set of disadvantages.
A FOSS license and CLA combination can offer a reasonably good deal to both parties: you can be sure your contribution is published as FOSS, and they know they can continue to maintain the project with some autonomy and freedom of choice. (Those choices can be better or worse for others, of course.)


That /unsaved/{id} path with a unique server-assigned identifier means your diff content was transmitted to and stored on their servers.
Not necessarily. URLs can be changed client-side, within the browser, through JavaScript. The fact that the URL changed to unsaved is, on its own, no proof. It could very well be browser-local, labeled unsaved and held in session storage for example, ready to be saved.
Combined with the other signals, you can of course make a guess and/or treat it as a strong indication.
It should be pretty easy to verify by observing the interaction and the network requests in the browser. A network request with the content as its body would be much better proof.
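As a minimal sketch of what I mean (all names here are hypothetical, not taken from the actual site), a page can move to an /unsaved/{id} URL without ever contacting a server:

```javascript
// Hypothetical sketch: a purely client-side move to an /unsaved/{id} URL.
// No server is involved at any point.
function unsavedPath(id) {
  return `/unsaved/${encodeURIComponent(id)}`;
}

// In a browser, this rewrites the address bar without any network request:
//   const id = crypto.randomUUID();
//   sessionStorage.setItem(id, diffContent); // content stays browser-local
//   history.replaceState(null, "", unsavedPath(id));
```

So the URL change alone proves nothing; the network tab does.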


It’s in the name after all. 1 regex, 0 other stuff, and 1 com.
I’m a bit confused by them publishing their personal essays on their htmx project page. This essay certainly doesn’t have anything to do with htmx directly. Either way, it’s valuable content, and possibly a strategy to draw people to htmx, or simply reusing a domain and website they already have.


I totally get the focus on avoiding “layers”, it’s something I’m very mindful of too.
Thank you for the insight, I’ll have a closer look into it, although I’m a little bit skeptical about having to integrate additional extensions and workflows, which is its own bag of worms for maintainability, longevity, and complexity in general.


11ty = Eleventy? Are you familiar with Hugo? Do you have an opinion or experience between the two systems?
I’m somewhat dissatisfied with Hugo, which I have used for many years, but whenever I checked alternatives, nothing really spoke to me as a clear improvement worth the learning barrier and migration investment. If I can use Deno, a JS static site generator could be viable too, something I traditionally avoided 🤔


Glad to see them mention that improvements to dialog are being proposed. If popover covers accessibility better than dialog does, that seems like a significant, surprising, and obvious shortcoming. Surely there are technical and/or historical reasons for it, but still.


abstracting away determinism /s


This part from the article supports this sentiment:
In a pleasant surprise, reactions have been positive. Throttled organizations were “surprised and apologetic,” mistaking issues for malice rather than “ignorance, unawareness.”


I sneakily changed our pipeline to pull from the in-house docker registry, and for pipelines to require pulling from package repos only when locks changed. Our CI is faster than every other team’s, but nobody noticed.
So yeah, charge the companies! Please!
Why isn’t this an obvious improvement opportunity that other teams pick up as well, and visibly so, rather than something “sneakily” hidden?
Isn’t it better not only for performance but also for reliability?


Yeah, I was quite irritated copying that for the quoting 😅


The article doesn’t even mention this critical risk and history. Huge gap.


Think about whether TODOs will be revisited, and how you can guarantee that. What do you gain and lose by replacing warnings with TODOs?
In my personal and work projects, I advocate for documenting suppressions: Dotnet warning suppression attributes have a Justification property, and editorconfig severity changes, disables, and suppressions can carry a comment.
If it’s your own project and you know when and how you will revisit it, what do you gain by dropping the warning? A clean, no-warning build, but then you have TODOs with the same uncertainties?
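As a minimal sketch of what I mean (the rule ID is a real analyzer rule, but the comment text is just a placeholder), an .editorconfig suppression can carry its justification right next to it:

```ini
# Justified: arguments are validated by the framework before this layer;
# revisit if we ever expose these methods publicly.
dotnet_diagnostic.CA1062.severity = none
```

The [SuppressMessage] attribute equivalent carries the same information in its Justification property.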


I do. But I’m very selective and critical in choosing and trusting the right ones. They’re also not my only source.
I don’t think YouTube reviews are any worse than other forms of reviews. There are plenty of bad text reviews out there, too.


It’s a fund you donate to; they invest the money, then fund open source with the investment gains.
I posted a comment on this other post that summarizes the most relevant points (because it wasn’t clear to me either, and as a note/explanation to myself too).


Data-driven grant model. There’s no perfect model for distributing OSS grants. Our approach is an open, measurable, algorithmic (but not automatic) model, […] We’re finalizing the first version of the selection model after the public launch, and its high-level description is at osendowment/model.
The fund invests all donations in a low-risk portfolio and uses only the investment income for grants, making it independent of annual budgets and market volatility. Even a modest $10M fund at this rate would generate ~$500K every year — enough for $10K grants to 50 critical open source projects.
Currently standing at $700k.
Regarding the model:
We aim to focus our support on the core of open-source ecosystems — like ~1% of packages accounting for 99% of downloads and dependencies. Our model shall be a data-driven approximation of the global usage of the open-source supply chain, helping to detect its most critical but underfunded elements.
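The quoted numbers imply roughly a 5% annual return; as a quick sketch of that arithmetic (the function and rate are my own illustration, not theirs):

```javascript
// Sketch of the endowment arithmetic implied by the quote above:
// only the investment income is spent, never the principal.
function grantsPerYear(fundUsd, annualReturn, grantUsd) {
  const income = fundUsd * annualReturn; // yearly investment income
  return Math.floor(income / grantUsd);  // whole grants that income funds
}

// grantsPerYear(10_000_000, 0.05, 10_000) matches the quoted 50 grants
// from ~$500K of yearly income on a $10M fund.
```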


Screenshots of both:
| “Classic” | “New” |
|---|---|
| ![]() | ![]() |
Well, you can see I use a dark color scheme, which apparently got lost. Take a guess how much I like that.
It’s not my full monitor width because of vertical browser tabs, but even then the horizontal distance between left nav bar and top right nav toolbar is horrendous.
The spacing is wasteful, the sizing is unnecessarily big.
It’s worse in every way. Less accessible, less readable, less scannable, less overview.
I wish they would simply drop their new design draft completely.
For anyone visiting the site thinking “looks like before for me” like I did, at the top there’s a link to “try out the new site”.
Their blog post, research blog post, previous community feedback, feedback form.


We onboarded our team with VS integrated Copilot.
I regularly use inline suggestions. I sometimes use the suggestions that go beyond what VS suggested before the Copilot license… I am regularly annoyed by the suggestions shifting the code around, by greyed-out suggestions sometimes being ambiguous next to grey text like commas and semicolons, and by the controls conflicting with basic cursor navigation (Ctrl+Right arrow).
I am very selective about where I use Copilot. Even for simple systematic changes, I often prefer my own editing, quick actions, or multi-cursor editing, because they are deterministic and don’t require a focused review that takes just as long but with a worse mental effect.
Probably more than my IDE “AI”, I use AI search to get information. I have the knowledge to assess the results, and I know when to check the sources anyway, in addition or instead.
My biggest issue with our AI is in the code some of my colleagues produce and hand me for review: I don’t and can’t know how much they themselves thought about the issues and the solution at hand. A missing description, or worse, an AI-generated summary, compounds that problem.
A meta-analysis is an interesting reaction to, or should I say founded in, the post title. But we’d better let it go.