• 2 Posts
  • 496 Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • Git. Git git git.
    If it is text and can be modified from multiple places, it should have a single “main” branch, with feature work done independently on separate “branches”. Or even just as a “back this up”.
    Git.

    Git is text-based version control (tho it will handle binary files, just not elegantly).

    So yeh, git.
    GitHub is easy to host on, but it is owned by Microsoft and somewhat proprietary (once you count Issues and the other enhancements GitHub provides). At the end of the day, though, it is git with authentication on the ol’ “cloud”.
    Plenty of ways to replicate this if it’s just for you.



  • The retirement of Ingress NGINX was announced for March 2026, after years of public warnings that the project was in dire need of contributors and maintainers. There will be no more releases for bug fixes, security patches, or any updates of any kind after the project is retired.
    This cannot be ignored, brushed off, or left until the last minute to address. We cannot overstate the severity of this situation or the importance of beginning migration to alternatives like Gateway API or one of the many third-party Ingress controllers immediately.

    I know it’s literally the first paragraph, but I thought it was worth quoting for those who only read the title & comments.



  • The bubble is propped up by governments.
    They don’t need “as good as an employee but faster”. They just need “faster”, so they can process gathered data on an enormous scale and filter the “potentially good” from the “not worth looking at”.
    Then they use employees to actually assess the “potentially good” data.

    Seriously, intelligence agencies don’t need “good ai”, they just need “good enough ai”.
    And they have that already.


    Yeh, one option is proxy editing (where you edit low-res versions until export).

    Or you could try a more suitable intermediate codec.
    I presume you are editing H.264 or something else with “temporal compression”. Essentially, there are a few full frames every second, and the other frames are stored as changes from the previous ones. That massively reduces file size, but makes random access expensive as hell.

    Something like ProRes, DNxHD… I’m sure there are more. They store every frame in full, so decoding doesn’t require loading the last full frame and applying the changes up to the current frame.
    You will end up with massive files (compared to H.264 etc), but they should run a lot better for editing.
    And they are visually lossless, so you convert the source footage once and then just work away.

    Really high-res projects will combine both of these: proxy editing with intermediate codecs.
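    A minimal sketch of the convert-then-edit step, assuming ffmpeg is installed and on your PATH (the file names are placeholders). It transcodes a clip to DNxHR HQ, one of the all-intra intermediate codecs mentioned above:

    ```java
    import java.io.IOException;

    // Hypothetical helper: transcode a source clip to an all-intra
    // intermediate (DNxHR HQ) for smoother editing. "source.mp4" and
    // "intermediate.mov" are placeholder names.
    public class MakeIntermediate {
        public static void main(String[] args) throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(
                    "ffmpeg", "-i", "source.mp4",
                    "-c:v", "dnxhd", "-profile:v", "dnxhr_hq", // every frame stored in full
                    "-pix_fmt", "yuv422p",
                    "-c:a", "pcm_s16le", // uncompressed audio, cheap to seek
                    "intermediate.mov");
            pb.inheritIO(); // show ffmpeg's own progress output
            System.exit(pb.start().waitFor());
        }
    }
    ```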




  • What I’d recommend is setting up a few testing systems with 2-3GB of swap or more, and monitoring what happens over the course of a week or so under varying (memory) load conditions. As long as you haven’t encountered severe memory starvation during that week – in which case the test will not have been very useful – you will probably end up with some number of MB of swap occupied.

    And

    [… On Linux Kernel > 4.0] having a swap size of a few GB keeps your options open on modern kernels.

    And finally

    For laptop/desktop users who want to hibernate to swap, this also needs to be taken into account – in this case your swap file should be at least your physical RAM size.




    In my experience, a scheduler is something that schedules time on the CPU for processes (threads).

    So 10 processes (threads) say “I need to do something”:
    2 of those threads are “ready to continue” because they were previously waiting on some Disk IO (and responsibly released thread control while data was fetched).
    1 of the threads says “this is critical for GPU operations”.
    1 of those threads self-declares that it is elevated priority.

    The scheduler decides which of those threads actually gets time on an available CPU core to be processed.
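    A toy sketch of that decision in code (all names are invented for illustration; a real OS scheduler is far more involved):

    ```java
    import java.util.Comparator;
    import java.util.List;
    import java.util.PriorityQueue;

    // Toy model: only runnable threads compete, and the highest
    // priority among them wins the core.
    public class ToyScheduler {
        record Task(String name, int priority, boolean runnable) {}

        public static void main(String[] args) {
            List<Task> tasks = List.of(
                new Task("disk-io-done-1", 5, true),   // woke up after blocking disk IO
                new Task("disk-io-done-2", 5, true),
                new Task("gpu-critical", 10, true),    // "critical for GPU operations"
                new Task("self-elevated", 8, true),    // self-declared elevated priority
                new Task("still-blocked", 5, false));  // waiting on IO, not runnable

            PriorityQueue<Task> runQueue = new PriorityQueue<>(
                    Comparator.comparingInt(Task::priority).reversed());
            tasks.stream().filter(Task::runnable).forEach(runQueue::add);

            System.out.println("Next on the CPU core: " + runQueue.poll().name());
        }
    }
    ```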




    And then OneDrive comes along, someone accidentally saves “to the cloud” (ie, the default Windows location of OneDrive), and of course someone (you) has to fix all the desync bullshit.
    Fuck excel, fuck Microsoft, fuck OneDrive!

    Thank god my company is transitioning to a decent no-code solution (nocobase plus literally anything that can interact with Postgres - currently n8n, but not limited to that). It’s a transition from Excel; literally anything is better! (Tho nocobase is awesome, and n8n has its perks.)
    Many parentheses, soz.
    Fuck excel, use a database!






  • Heck yeh! Great work.
    I think most critique has been covered.

    I consider too-many-indentations to be a code smell.
    Not an actual issue in itself, but a hint that maybe there is one…

    There is nothing wrong with your code, and no reason to change it (beyond error catching as you have discovered). It runs, is easy to follow, and doesn’t over-complicate.

    I like descriptive function names and early returns (ie, throw or return on every condition that means the function shouldn’t continue, then process the parameters and return a result).
    This could massively clean up what’s going on.
    There could be a “getUserCommand()” that returns the desired number, or 0 if it’s invalid.
    If the returned value is 0, then break.
    If the returned value is 6, then print values; then break.
    Otherwise we know the value should be 1-5.
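    A quick sketch of that shape, assuming a console menu (getUserCommand is the hypothetical name suggested above, not from the original code):

    ```java
    import java.util.Scanner;

    public class Menu {
        // Early returns: bail out with 0 on every invalid input,
        // so the caller only ever sees 0 or a real choice.
        static int getUserCommand(Scanner in) {
            System.out.print("Enter a choice (1-6, 0 to quit): ");
            if (!in.hasNextInt()) { // not a number at all
                in.next();          // discard the bad token
                return 0;
            }
            int choice = in.nextInt();
            if (choice < 1 || choice > 6) return 0; // out of range
            return choice;
        }

        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);
            while (true) {
                int choice = getUserCommand(in);
                if (choice == 0) break;  // invalid input or quit
                if (choice == 6) {       // print values, then stop
                    System.out.println("...printing values...");
                    break;
                }
                System.out.println("Handling option " + choice); // must be 1-5 here
            }
        }
    }
    ```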

    You could use an Enum to define the choices.
    This way, the print lines and the conditional tests can both reference the enum. It also removes “magic numbers” (ie, values that appear in code with no explanation).
    In something simple like this, it doesn’t really matter. But it improves IDE awareness (helping language servers suggest code/errors/fixes), and makes the code SOO much more descriptive (ie, “choice == 3” becomes “choice == Choices.Product”).
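    A sketch of the enum idea. Only “Product” (option 3) and the print option (6) come from the comment above; the other constant names are placeholders:

    ```java
    public class EnumMenu {
        // One set of named constants that both the menu text and the
        // dispatch logic can reference -- no magic numbers.
        enum Choices {
            OptionA(1), OptionB(2), Product(3), OptionD(4), OptionE(5), PrintValues(6);

            final int key;
            Choices(int key) { this.key = key; }

            // Map raw user input back to a named choice; null if out of range.
            static Choices fromKey(int key) {
                for (Choices c : values()) if (c.key == key) return c;
                return null;
            }
        }

        public static void main(String[] args) {
            int raw = 3; // stand-in for user input
            Choices choice = Choices.fromKey(raw);
            if (choice == Choices.Product) { // reads far better than: choice == 3
                System.out.println("Handling the Product option...");
            }
        }
    }
    ```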