• 1 Post
  • 11.8K Comments
Joined 3 years ago
Cake day: July 26th, 2023


  • They were relying on users to click "I am definitely over 18". So, by definition, there was no possible way to know whether children's data was being misused, except to the extent it obviously was.

    Age verification should have been a device side thing. I should be able to set up my kids phone / tablet with their date of birth and if they’re not old enough then they can’t access certain content. That way the only person who needs to verify the age of someone is me.
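    The device-side check described above could be as simple as this sketch (the function names and the yes/no gate are my own assumptions for illustration, not any real parental-control API):

    ```python
    from datetime import date

    # Hypothetical device-side age gate: the date of birth is set once by the
    # parent/account holder and never leaves the device.
    def years_old(dob: date, today: date) -> int:
        # Subtract one if this year's birthday hasn't happened yet.
        return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

    def can_access(dob: date, min_age: int, today: date | None = None) -> bool:
        today = today or date.today()
        return years_old(dob, today) >= min_age

    # A site would just declare a minimum age; the device answers yes or no,
    # so no ID ever needs to be uploaded anywhere.
    print(can_access(date(2012, 5, 1), 18, today=date(2025, 1, 1)))  # False
    ```

    The point of the design is that the only data leaving the device is a boolean, so the only person who ever verifies anyone's age is the parent setting up the device.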

    My ISP connection shipped with age restrictions on by default (personally I think that's a stupid default, but that's a separate issue). Although annoying, it was easy to get rid of because I'm the account holder. I didn't have to scan my ID, though.

    Anyway, we all know this has never had anything to do with kids.



  • Yes, but also these people, especially the likes of Donald Trump, don't really understand how the world works, so their plans for controlling it never come off: things that would be blindingly obvious to most people are utterly obscure to them.

    You know, like the idea that people would simply not use privacy-invading applications. How many people do you think have actually submitted their IDs, versus just downloaded a VPN or circumvented the controls in some other way? Then they'll make VPNs illegal and think they've achieved something, but there will always be a workaround for their authority, because the people they're trying to control are smarter than they are.





  • To me it doesn’t matter if you use biological learning or the method described below

    What would biological learning for an AI look like? I don’t even know what this sentence means or what you’re trying to convey.

    both can self adjust

    No they can't. That's the whole point: they can't self-adjust. They have no free will, so they have no ability to take self-modification actions.

    they can go off and self browse the web or use camera vision etc

    Yes, but so can a non-intelligent computer program. The ability to access the internet has nothing to do with intelligence. See humans.

    The old research you talk about science felt hit a wall decades ago, but later (now) they realized we just didn’t feed it enough info.

    I think this is where you're getting confused. The "old research", i.e. neural networks, didn't hit a wall; it just was never particularly useful outside of very niche circumstances, although it has been used extensively in OCR for decades. It is no more intelligence than a plant turning towards the sun is intelligence; it's just evolutionarily reinforced stimulus response. Large language models work on a completely different concept. You don't get good results by feeding neural networks lots of input, because it just overwhelms them with signal and they can't optimise towards anything. If you built a neural network with 100 trillion nodes you might actually get something useful, but it still wouldn't be artificial intelligence, and no one's doing that anyway because it's prohibitively processor-intensive, and anyway LLMs exist.

    But if you look up any recent papers on what science is doing in this field you’ll see what I mean, even what appears to be emergent behaviours, which may just be a result of neural learning methods whether human or silicon based.

    It's important to realise that words mean what they mean. Emergent behaviour just means that the behaviour is emergent; it doesn't mean the behaviour is intentional or directed. Large crowds have emergent behaviour, but that doesn't mean there's some hive mind controlling everyone.



  • I'm definitely both, but that's because my collection of cups is a random mix of ones I acquired somehow over the years and ones I actually went out and bought, and only the bought stuff matches.

    No one's going to take my bee cup away from me. It's got pictures of bees on it, says "beeee happy", and could hold an Olympic swimming pool's worth of drink. I think I got it in an Easter egg.










  • They're not, though. Just because you've found a snippet that validates your incorrect understanding of AI doesn't mean it's right. And since apparently the way you do research is to Google something and then click on the first result, I'll explain.

    Large language models don't use the standard neural network that people are familiar with. They use a lot of mathematics in high-dimensional spaces to generate their responses; they're not achieving that by simulating neural pathways.

    Neural networks simulate simple brains in order to produce output. It's a much older form of AI, more like evolution simulation than artificial intelligence; there are plenty of videos on this on YouTube dating back well over a decade. What you're referring to in your original comment sounds very much like a neural network trained on character recognition (again, loads of YouTube videos on the topic). But there's no understanding there, no comprehension, and no learning. It's just a system evolving to identify patterns.
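    To illustrate that last point with a toy example of my own (not from anything you linked): a single perceptron can "learn" the AND pattern purely by nudging weights towards correct outputs. It ends up reproducing the pattern, but there is no comprehension of what "and" means anywhere in it.

    ```python
    # Minimal perceptron: adjusts its weights until it reproduces the AND
    # truth table. Pattern identification, not understanding.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias
    lr = 0.1         # learning rate

    for _ in range(20):  # a few passes over the data are enough
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights towards whatever reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err

    print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
    # [0, 0, 0, 1]
    ```

    The "learning" here is just stimulus and correction, the same mechanism character-recognition networks scale up, which is why scaling it doesn't produce comprehension.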

    But none of this is anything close to an early version of artificial general intelligence, because it's all just responding to input. If you initialise a large language model and then just leave it, it'll sit there and do nothing. A true artificial intelligence would have its own goals and take action to achieve them without any input from a human; it would also be capable of self-modification. LLMs and neural networks do neither of those things.