Cake day: June 30th, 2023


  • Scott Manley has a video on this:
    https://youtu.be/DCto6UkBJoI

    My takeaway is that it isn’t unfeasible. We already have satellites that generate a couple of kilowatts, so a cluster of them might make sense. In isolation, it makes sense.
    But there is launch cost, the fact that de-orbiting/de-commissioning is a write-off, and the fact that preferred orbits (lots of sun) will very quickly become unavailable.
    So there is kind of a graph where it only works if you get a preferred orbit, your efficiency is good enough, and your launch costs are low enough.
    But it’s junk.
    It’s literally investing in junk.
    There is no way this is a legitimate investment.

    It has a finite life, regardless of how you stretch your tech. At some point, it can’t stay in orbit.
    It’s AI. There is no way humans are in a position to lock in 4 years of hardware.
    It’s satellites. There are so many factors outside of our control (even beyond a successful launch to orbit) that there is a massive failure rate.
    It’s rockets. They are controlled explosives with 1 shot to get it right. Again, massive failure rate.

    It just doesn’t make sense.
    It’s feasible. I’m sure humanity would learn a lot. But AI is not a good use of kilowatts of power in space. AI is not a good use of the finite resources of Earth to launch satellites (never mind a million?!). AI is not a good reason to pollute the “good” bits of LEO.
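    The launch-cost tradeoff above can be sketched as back-of-envelope arithmetic. Every number here is an invented assumption for illustration, not a sourced figure:

```python
# Back-of-envelope cost of a kilowatt-hour generated in orbit.
# ALL numbers below are illustrative assumptions, not sourced figures.

LAUNCH_COST_PER_KG = 2_000      # assumed $/kg to LEO
PANEL_SPECIFIC_POWER = 100      # assumed W of solar per kg of satellite
SATELLITE_LIFETIME_YEARS = 5    # assumed life before the de-orbit write-off

HOURS_PER_YEAR = 8766           # 365.25 days * 24 h

def orbital_cost_per_kwh() -> float:
    """$ per kWh delivered over the satellite's life (launch cost only)."""
    cost_per_watt = LAUNCH_COST_PER_KG / PANEL_SPECIFIC_POWER        # $/W launched
    kwh_per_watt = SATELLITE_LIFETIME_YEARS * HOURS_PER_YEAR / 1000  # kWh per W of panel
    return cost_per_watt / kwh_per_watt

print(f"~${orbital_cost_per_kwh():.2f}/kWh before hardware costs")
```

    Even with these generous assumptions, launch cost alone lands well above typical grid electricity prices, before you pay for the hardware itself or for replacing it when it de-orbits.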



  • I’d take each of your metrics and multiply it by 10, and then multiply it by another 10 for everything you haven’t thought about, then probably double it for redundancy.
    Because “fire temp” is meaningless in isolation. You need to know the temperature is evenly distributed (so multiple temperature probes). You need to know the temperature inside and the temperature outside (so you know your furnace isn’t literally melting). You need to know it’s not building pressure. You need to know it’s burning as cleanly as possible: gas inflow, gas outflow, clarity of gas in and out, temperature of gas in and out, and the status of various gas delivery systems (fans (motor current/voltage/rpm/temp), filters, louvres, valves, pressures, flow rates). And you need to know ash is being removed correctly (that ash grates, shakers, whatever are working correctly, that ash is cooling correctly, that it’s being transported away, etc).
    The gas out will likely go through some heat recovery stages, so you need to know gas flow through those and water flow through those. Then it will likely be scrubbed of harmful chemicals, so you need to know pressures, flow rates etc for all that.
    And every motor will have voltage/current/rpm/temperature measurements. Every valve will have a commanded position and actual position. Every pipe will have pressure and temperature sensors.

    The multiple fire temperature probes would then be condensed into a pertinent value and a “good” or “fault” condition for the front panel display.
    The multiple air inlet readings would be condensed into pertinent information and a good/fault condition.
    Pipes of a process will have temperature/pressure good/fault conditions (maybe a low/good/over?).

    And in the old days, before microprocessors and serial communications, it would have been a local-to-sensors control/indicator panel with every reading, then a feed back to the control room where it would be “summarised”. So hundreds of signals from each local control/indicator panel.

    Imagine if the control room commanded a certain condition, but it wasn’t being achieved because a valve was stuck or because some local control over-rode it.
    How would the control room operators know where to start? Just guess?
    When you see a dangerous condition building, you do what is needed to get it under control, and yet it doesn’t come under control, because…
    You need to know why.
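    The probe-condensing idea above can be sketched in a few lines. All the thresholds and probe names here are made up for illustration:

```python
# Sketch of condensing multiple temperature probes into one pertinent
# value plus a good/fault condition for the front panel display.
# Thresholds and probe names are invented for illustration.

from statistics import mean

PROBE_LIMITS = (600.0, 900.0)   # assumed acceptable furnace range, degrees C
MAX_SPREAD = 50.0               # assumed max allowed probe disagreement, degrees C

def condense(probes: dict[str, float]) -> tuple[float, str]:
    """Reduce several probe readings to (pertinent value, "good"/"fault")."""
    readings = list(probes.values())
    value = mean(readings)
    spread = max(readings) - min(readings)   # uneven-distribution check
    in_range = PROBE_LIMITS[0] <= value <= PROBE_LIMITS[1]
    status = "good" if in_range and spread <= MAX_SPREAD else "fault"
    return value, status

print(condense({"upper": 820.0, "mid": 805.0, "lower": 810.0}))  # even and in range
print(condense({"upper": 990.0, "mid": 805.0, "lower": 810.0}))  # one probe running away
```

    Note the second case faults even though the average is in range: the spread between probes is what tells you the heat isn’t evenly distributed.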




  • The bubble is propped up by governments.
    They don’t need “as good as an employee but faster”. They just need “faster”, so they can process gathered data on an enormous scale to filter out the “potentially good” from the “not worth looking at”.
    Then they use employees to actually assess the “potentially good” data.

    Seriously, intelligence agencies don’t need “good ai”, they just need “good enough ai”.
    And they have that already.




    A 27:1 k:d would get you accused of cheating in video games.

    Because this stat isn’t really tracked, hyped, or published, I’m going by DDG AI Assist, which suggests the US k:d in Iraq is 44:1:

    The U.S. military suffered approximately 4,492 deaths and around 32,292 wounded during the Iraq War, while estimates suggest around 200,000 Iraqi civilians were killed. This results in a rough kill-to-death ratio of about 44:1, favoring U.S. forces, though this does not account for all combatants and the complexities of the conflict.

    Considering that Ukraine isn’t killing civilians… Classic AI bullshit uselessness.
    If I killed 27 enemy aggressors while defending my country, I would die happy. I don’t ever want to be in that position, and I don’t think anyone should ever be in that position. But that is an achievement, under the circumstances, to be proud of.


    TIDAL’s continued awesomeness suggests suitable alternatives.
    Spotify pays Joe Rogan how much? And pays artists how little?
    TIDAL does music.
    I changed a few years ago, and all I miss are the integrations.
    I’m lucky that I have decent speakers & dac on my desktop, and decent IEMs. So I can listen to music where I want.
    But I can’t buy a “tidal speaker” in the way I could buy a “Spotify speaker”.
    But I’m arrogantly confident enough to waste some money solving this with Home Assistant, some RPi/NUCs, and some speakers. I feel I don’t need (and actually don’t want) a vendor-locked-in “just works” solution, and I’m happy rolling my own.


    Yeh, either proxy editing (where you edit low-res versions until export).

    Or you could try a more suitable intermediate codec.
    I presume you are editing h.264 or something else with “temporal compression”. Essentially there are a few full frames every second, and the other frames are stored as changes. That massively reduces file size, but makes random access expensive as hell.

    Something like ProRes or DNxHD… I’m sure there are more. They store every frame whole, so decoding doesn’t require loading the last full frame and applying the changes to the current frame.
    You will end up with massive files (compared to h.264 etc), but they should run a lot better for editing.
    And they are visually lossless (they still compress, but only within each frame), so you convert source footage once and then just work away.

    Really high res projects will combine both of these: proxy editing with intermediate codecs.
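    A toy sketch of why temporal compression makes seeking expensive, with an invented keyframe interval:

```python
# Toy illustration of why temporally compressed video (h.264-style) is
# expensive to seek: only keyframes are stored whole, so random access
# means decoding every delta since the last keyframe. The interval is
# invented for illustration.

KEYFRAME_INTERVAL = 30  # assume one full frame per second at 30 fps

def decode_cost(frame_index: int) -> int:
    """Frames that must be decoded to display frame_index (h.264-style)."""
    last_keyframe = (frame_index // KEYFRAME_INTERVAL) * KEYFRAME_INTERVAL
    return frame_index - last_keyframe + 1  # keyframe + every delta after it

def intra_decode_cost(frame_index: int) -> int:
    """Intra-frame codecs (ProRes, DNxHD) store every frame whole."""
    return 1

print(decode_cost(29))        # worst case: 30 frames decoded to show one
print(intra_decode_cost(29))  # always 1
```

    Scrubbing back and forth in a timeline hits that worst case constantly, which is why intra-frame intermediates feel so much smoother in an editor.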



  • What I’d recommend is setting up a few testing systems with 2-3GB of swap or more, and monitoring what happens over the course of a week or so under varying (memory) load conditions. As long as you haven’t encountered severe memory starvation during that week – in which case the test will not have been very useful – you will probably end up with some number of MB of swap occupied.

    And

    [… On Linux Kernel > 4.0] having a swap size of a few GB keeps your options open on modern kernels.

    And finally

    For laptop/desktop users who want to hibernate to swap, this also needs to be taken into account – in this case your swap file should be at least your physical RAM size.
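    The quoted advice boils down to a simple rule of thumb. The “few GB” floor below is my assumption for illustration, not a kernel requirement:

```python
# Rough swap-size suggestion following the quoted advice: "a few GB"
# keeps your options open on modern kernels, and hibernation needs
# swap of at least physical RAM size. The 4 GB baseline is assumed.

def suggested_swap_gb(ram_gb: float, hibernate: bool) -> float:
    baseline = 4.0                      # assumed "few GB" default
    if hibernate:
        return max(baseline, ram_gb)    # swap must hold the RAM image
    return baseline

print(suggested_swap_gb(16, hibernate=False))  # 4.0
print(suggested_swap_gb(16, hibernate=True))   # 16.0
```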



    Writing reports is hard? Fuck paperwork? Policing used to be easier?
    Great, the reports are written for you and the paperwork is done for you.
    You are still fucking liable for their contents, as you are (or should be) for your actions.

    Recorded and written reports are the backbone of accountability.

    Don’t want to get fucked by the legal system because you have neglected your duties? Don’t neglect your duties. Do the reporting, do the paperwork.

    Using an LLM in such reports should be the equivalent of perjury. Use LLMs to create bullet points, then turn those into a draft yourself (or just submit the bullet points, because someone is likely to feed the report back into an LLM to turn it into bullet points anyway).
    But know that you are (or should be) accountable for every last word on that report!


  • In my experience, a Scheduler is something that schedules time on the CPU for processes (threads).

    So 10 processes (threads) say “I need to do something”:
    2 of those threads are “ready to continue” because they were previously waiting on some Disk IO (and responsibly released thread control while data was fetched).
    1 of the threads says “this is critical for GPU operations”.
    1 of those threads self declares it is elevated priority.

    The scheduler decides which of those threads actually gets time on an available CPU core to be processed.
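    A minimal sketch of that pick-next-thread decision, with a priority ordering invented for illustration (GPU-critical beats self-declared elevated, which beats plain ready):

```python
# Minimal sketch of a scheduler's pick-next decision as described above.
# The priority ordering is invented for illustration, not how any real
# kernel scheduler actually weighs these factors.

from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    ready: bool = True          # e.g. its disk I/O has completed
    gpu_critical: bool = False  # "this is critical for GPU operations"
    elevated: bool = False      # self-declared elevated priority

def pick_next(threads: list[Thread]) -> Thread:
    """Choose which runnable thread gets the free CPU core."""
    runnable = [t for t in threads if t.ready]
    # Tuples compare element-by-element, so gpu_critical wins first.
    return max(runnable, key=lambda t: (t.gpu_critical, t.elevated))

threads = [
    Thread("io-done-1"), Thread("io-done-2"),
    Thread("render", gpu_critical=True),
    Thread("audio", elevated=True),
    Thread("blocked", ready=False),
]
print(pick_next(threads).name)  # render
```

    Real schedulers also track fairness, time slices, and core affinity, but the core job is the same: many runnable threads, one decision about who runs next.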



    Raspberry Pis are an easy intro to actually using computers (instead of using something like Windows).
    Raspbian is great (based on Debian) and there is a HUGE community for it.

    So yeh, it’s a great starter for $25, as long as you have a PSU and SD card. And an HDMI cable + monitor + keyboard at your disposal (and a mouse if you are installing a desktop environment (i.e. something like Windows, whereas headless is a full-screen CLI)).
    And don’t get your hopes up for a windows replacement.

    But… Why not run a virtual machine? If you have a Windows machine, run VirtualBox, create a VM, and install Debian on it?
    That’s free. You can tinker and play.
    And the only things you are missing from an actual Raspberry Pi are that it isn’t a standalone device (i.e. your desktop has to be on for it to be running), and that it doesn’t have GPIO (i.e. hardware pins; if this is your goal, there are other ways).

    If you really really want a computer that is on all the time running Linux (Debian, a derivative (like Raspbian), or some other distro) - aka a server - then there are plenty of other options where the only drawback is lack of GPIO (which, in my experience, is rarely a drawback).
    And that is literally any computer you can get your hands on. Because the Raspberry Pi trades A LOT for its form factor: the ethernet speed is limited, the bus speed is limited (impacting USB and ethernet (and RAM?)), and the SD card is slower and will fail faster than any HDD/SSD. The benefits are the GPIO, the very low power draw, and the form factor - rarely actually benefits.

    I’d say, play around with some virtual box VMs. See what you want, other than Fear Of Missing Out (things like PiHole? They run on Debian, or even in a docker container). Then see if you actually want a home server, and what you want to run on it.
    It’s likely you won’t want a raspberry pi, but a $150 mini pc that can actually do what you want.