• 13 Posts
  • 5.51K Comments
Joined 3 years ago
Cake day: June 17th, 2023







  • That’s too nice to use.

    Also reminded me of the time my wife paid attention to my toolbox one day. We had downsized to a condo, so she asked, “Why do you have so many hammers? Can we get rid of some?”

    So my response was, “No dear, they aren’t all hammers, and they have different purposes.” I went through the short list: small and large ball-peen hammers for various metal work on the car or precise nailing, a short claw hammer for framing in tight areas, a long claw hammer for more leverage in open areas, a 32-ounce hammer for heavy persuading, a large wood mallet for knocking stuff together, a rubber mallet for jobs where wood might splinter or mar the surface… She lost interest after the first few.


  • I was talking about research models with agency.

    But we are learning how thought has been engineered into neural models. They give weighting to abstractions that we recognize. Humans know what a bird is, whether that’s one of thousands of different species or an M-shaped squiggle in a painting. The models have been trained to weigh the input and make logical conclusions.

    So it’s not much different, and if you view the research models in action and not just the output, you see the ‘thought’ process being worked through in plain language.

    They have a benefit over us in that researchers have given this elastic weighting a way to backwardly adjust what they have previously weighted. So what they lack in raw neuron count, they can gain by absorbing so much “experience” more quickly.
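    The weighting and backward-adjustment idea above can be sketched in a few lines. This is my own toy illustration (not anything from the show): a single “neuron” that weighs its inputs, plus a backward pass that nudges each weight toward a target answer.

```python
# Toy sketch: one neuron weighing inputs, with backward weight adjustment.
# All numbers here are made up for illustration.

weights = [0.2, -0.4, 0.7]   # learned importance of each input feature
inputs  = [1.0, 0.5, 0.3]    # e.g. crude features of a bird-like shape
target  = 1.0                # "this is a bird"
lr      = 0.1                # learning rate

for step in range(100):
    # forward pass: weigh the inputs and sum them
    output = sum(w * x for w, x in zip(weights, inputs))
    error = output - target
    # backward pass: adjust each weight in proportion to its input
    weights = [w - lr * error * x for w, x in zip(weights, inputs)]

output = sum(w * x for w, x in zip(weights, inputs))
print(round(output, 3))  # converges toward 1.0
```

    Real models do this across billions of weights at once, but the principle is the same: the error flows backward and reshapes what was previously weighted.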

    If you listen to the show I mentioned, they also explained why models hallucinate. When they train models, they feed them false and true information about some aspects, and a supervisor has to correct the output. So by giving false or near-false info to train a tighter response, we have taught the system that lying is also a method of conveying information. The hallucinations aren’t an odd emergent behaviour; they’re a learned behaviour to fulfil the task.

    As humans we often think all our thoughts and decisions are our own will, but there is the deterministic belief that given the exact same situational parameters (exact mood, lighting, body temp, hunger level, etc.) our brain would follow the exact same reasoning path and produce the same answer again, and our choice is an illusion. If there is truth to that, then we are just a biological computer, no different from a lab neural model.








  • New York trying to get subtractive-manufacturing CNC mills to obey this is going to be a trick.

    The controller just runs G-code for positioning and speeds. They’d need to preprocess the G-code through an AI database to check whether the path builds a gun-part shape, then allow or block the machining.
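    To see why that’s awkward, here’s a hedged sketch of what such a preprocessor would even look like. Everything here is my own invention for illustration; no controller ships anything like this, and the shape check is just a placeholder for the imagined AI classifier.

```python
# Hypothetical G-code pre-filter: extract the toolpath, then ask some
# classifier whether it resembles a restricted part before running the job.
import re

def parse_moves(gcode: str):
    """Extract (x, y) targets from G0/G1 linear-move lines."""
    points = []
    for line in gcode.splitlines():
        words = line.split()
        if words and words[0] in ("G0", "G00", "G1", "G01"):
            x = re.search(r"X(-?\d+\.?\d*)", line)
            y = re.search(r"Y(-?\d+\.?\d*)", line)
            if x and y:
                points.append((float(x.group(1)), float(y.group(1))))
    return points

def looks_like_restricted_part(points) -> bool:
    # Placeholder for the imagined AI shape check; a real one would have
    # to compare the full 3D toolpath against known part geometries.
    return False

job = "G0 X0 Y0\nG1 X10 Y0\nG1 X10 Y5\n"
path = parse_moves(job)
if looks_like_restricted_part(path):
    raise SystemExit("machining blocked")
print(path)
```

    Even this toy version shows the problem: the filter only sees coordinates, and a part split across several innocuous-looking jobs never forms a recognizable shape in any single file.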

    Inevitably somebody will just replace the controller with a home grown system.

    And a CNC mill can still run manual cuts as single passes that, done separately, may not look like a part at all.

    This is old clueless men trying to make laws about technology they don’t understand.






  • I understand totally. Another aspect of my job is training, documentation, and support. Often we have people stuck on an issue who ask for help, and many times the software is simply asking for a selection to proceed. The customer says the software is broken. A screen share shows the highlighted prompt: “Select an object on screen to continue.” They can’t proceed because they didn’t read the prompt and haven’t selected anything.

    Same with steps: they say they get different results than the training document. It’s “Did you do step 4?” with a response of “uhh, no.” OK then, if you don’t do step 4, all the steps after it will give different outcomes.