• 211 Posts
  • 1.31K Comments
Joined 3 years ago
Cake day: June 11th, 2023



  • I don’t see anything as having to come before learning Rust.

    If something about Rust requires more technical knowledge, then that learning is part of learning Rust, even if you could have learned it separately beforehand.

    Better to start learning Rust and get into it rather than delaying, which adds the risk of never getting there: losing interest, making no progress on the goal of learning Rust, and ending up unsatisfied.

    Once you’ve learned Rust, you can look around and gain broader knowledge and expertise if you want, but that’s not necessary to learn and make use of Rust.
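    A minimal sketch of what I mean (my example, not from the thread): concepts like ownership and borrowing, which you could study separately as memory-management theory, come up naturally while writing ordinary Rust, so learning them is part of learning Rust.

    ```rust
    // Borrowing in practice: passing `&s` lends the String out instead of
    // moving it, so `s` stays usable afterwards. Understanding why is both
    // "learning Rust" and learning the underlying memory model at once.
    fn print_len(s: &str) {
        println!("length: {}", s.len());
    }

    fn main() {
        let s = String::from("hello");
        print_len(&s); // borrowed, not moved
        println!("still usable: {s}");
    }
    ```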



  • Great analysis / report. At times a bit repetitive, but that could be useful for people skimming, jumping around, or quoting as well.


    Despite 91% of CTOs citing technical debt as their biggest challenge, it doesn’t make the top five priorities in any major CIO survey from 2022–2024.

    Sad. Tragic.


    I’m lucky to be in a good, small company with a good, reasonable customer, where I naturally had, and grew into, the freedom and autonomy to decide things. The customer sets priorities, but I set mine as well, and tackle what’s appropriate or reasonable/acceptable. The customer and I have the same goals, after all, and we both know it and collaborate.

    Of course, that doesn’t help me as a user when I use other software.


    Reading it made me think of the recent EU digital regulations, which require due diligence, security practices, and transparency. It’s certainly a necessary and good step in the right direction, breaking the endless race to the bottom on quality and diligence, and the intransparency that goes with it.





  • A library with no code, no support, no implementation, no guarantees; where no bug is “fixable” without unknown side effects, and no fix is deterministic, even for your own target language, …

    A spec may be language-agnostic, but the language model depends on the implementations it was trained on. So do you end up with standard library implementations being duplicated, just possibly outdated, with open bugs, holes, gaps, and old constructs? And will the quality and coverage of the spec implementation vary a lot depending on your target language? And if there isn’t enough conforming training data, might it not even follow the spec correctly? And do you then change the spec for one niche language?

    If it’s a spec or an LLM template, then that’s what it is. Don’t call it a library. And in the project readme, don’t wait until the last third to actually say what it is or does.
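    To make the distinction concrete, a minimal Rust sketch (names are mine and purely illustrative): a spec only declares what must exist; a library is the code that actually implements and guarantees it.

    ```rust
    /// The "spec" level: a contract with no code behind it.
    pub trait Greeter {
        fn greet(&self, name: &str) -> String;
    }

    /// The "library" level: a concrete implementation you can ship, test,
    /// debug, and fix deterministically.
    pub struct EnglishGreeter;

    impl Greeter for EnglishGreeter {
        fn greet(&self, name: &str) -> String {
            format!("Hello, {name}!")
        }
    }

    fn main() {
        let g = EnglishGreeter;
        // Only the implementation gives deterministic, debuggable behavior;
        // an LLM regenerating it from the trait alone could differ each time.
        assert_eq!(g.greet("world"), "Hello, world!");
    }
    ```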




  • The only way out of this is regulation, which requires political activism.

    The EU made some good progress on that with the GDPR and the newer digital laws on safety, disclosure, maintenance, and due-diligence requirements. Prosecution with fines exists, but it is slow, and arguably too sporadic.

    Political activism in this direction is thankless work and a lot of effort. I’m reminded of someone who pushed for years to get public institutions to move away from US big tech. Now Trump is the reason for change, and their effort can surely feel pointless.

    I do occasionally report GDPR violations and the like. That can feel pointless as well. But it’s necessary, and the only way to support and push agencies to take action.





  • they asked me if I could develop some useful metrics for technical debt which could be surveyed relatively easily, ideally automatically

    This is where I would have said “no, that’s not possible”, or had a discussion about the risks: things you simply can’t cover with automated metrics would lead to misdirection, and possibly to negative instead of positive consequences.

    They then explore what technical debt is and notice that even many things outside of technical debt have a significant impact you can’t ignore. I’m quite disappointed they never come back to their metrics task. How did they finish it? Did they communicate and discuss all these broader concepts instead of implementing metrics?

    There are some metrics you can implement on code: test coverage, various complexity measures, function body length, etc. (a naive sketch of one follows at the end of this comment). But they only ever cover small aspects of technical debt. Consequently, they can’t be a foundation for (continuously) steering debt-payment efforts toward the most positive effects.

    I know my projects and can make a list of issues, efforts, and impacts, and we can prioritize those. But I find the idea of (automated) metrics entirely inappropriate for observing or steering technical debt.
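    For illustration, a deliberately naive sketch (mine, not from the article) of the kind of metric that is easy to automate, and exactly as shallow as I’m describing: it measures function body length and nothing about actual debt.

    ```rust
    // Naive "longest function" metric: tracks brace depth from a line
    // containing `fn ` to its closing brace. Trivial to automate; blind to
    // design, coupling, missing tests, and everything else that matters.
    fn max_function_length(source: &str) -> usize {
        let (mut depth, mut current, mut max, mut in_fn) = (0usize, 0, 0, false);
        for line in source.lines() {
            if !in_fn && line.contains("fn ") && line.contains('{') {
                in_fn = true;
                current = 0;
            }
            if in_fn {
                current += 1;
                depth += line.matches('{').count();
                depth = depth.saturating_sub(line.matches('}').count());
                if depth == 0 {
                    in_fn = false;
                    max = max.max(current);
                }
            }
        }
        max
    }

    fn main() {
        let src = "fn tiny() {\n    println!(\"hi\");\n}\n";
        println!("longest function: {} lines", max_function_length(src));
    }
    ```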


  • As a lead dev I have plenty of cases where I weigh effort against impact and risk and conclude “this is good enough for now”. Such cases are not poor management, by which I assume you mean something like “we have to ship more, faster, so take the shortcut”. Sometimes cutting corners is the correct and good decision, sometimes the only feasible one, as long as you’re aware of the risks and consequences and weigh them.

    We, and specifically I, make plenty of improvements where possible and reasonable, to whatever code I visit, depending on how much effort it takes. But sometimes the effort is too high to resolve something, or too high to be worth investing.

    For context, I’m working on a project that has been running for 20 years.