European payments are getting reinforcements! Commerzbank is on board with Wero 💪 This decision is interesting, since the bank originally belonged to the protagonists of the pan-European payment initiative EPI but had withdrawn in the meantime.
In other words: about what is probably the most secure and most privacy-respecting system for our phones. A post that is a fairly extensive review and description, but also a guide on how to install GrapheneOS on our smartphones. Enjoy the read!
@wolnoscwkieszeni I've been using
@GrapheneOS since June and so far I've had no problems. The only app that doesn't work is #google#wallet, which is better avoided anyway for privacy reasons.
Of the important apps, #moje#ing and #mObywatel work, among others, as do all the others I used before on official Android.
As a direct bank, ING focuses primarily on retail customers
More customers but less profit: that is what ING's balance sheet in Germany looks like. For the coming year, the new CEO Stoy is planning strategic changes. By Ingo Nathusius.
This is the kind of rhetoric I'm considering for my future political career. F@#ing sad it's come to this.
Same energy as when I tell wife beaters they're keying their own car. I don't think women are objects, but for the kind of men who do, I think it reaches them.
[Thread, post or comment was deleted by the author]
At the point Nigel Farage announced his intention to stand for parliament “my heart sank”, admitted Trixy Sanderson, 42, formerly known as Annabelle Fuller. “It’s very triggering,” said Farage’s former lover and press aide. ...
There’s a video on YouTube where someone has managed to train a network of rat neurons to play Doom; the way they did it seems reminiscent of how we train ML models ...
With the way current LLMs operate? The short answer is no. Most machine learning models can learn the probability distribution by performing backward propagation, which involves “trickling down” errors from the output node all the way back to the input. More specifically, the computer calculates the derivatives of each layer and uses that to slowly nudge the model towards the correct answer by updating the values in each neural layer. Of course, things like the attention mechanism resemble the way humans pay attention, but the underlying processes are vastly different.
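As a rough illustration of that trickle-down process, here is a minimal two-layer network trained with handwritten backpropagation. Everything here (data, layer sizes, learning rate) is made up for the sketch, not taken from any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                          # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy targets

W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(500):
    # forward pass
    h = np.tanh(X @ W1)
    out = sigmoid(h @ W2)
    loss = np.mean((out - y) ** 2)
    losses.append(loss)

    # backward pass: derivatives of each layer, output -> input
    d_out = 2 * (out - y) / len(X) * out * (1 - out)  # through the sigmoid
    dW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (1 - h ** 2)               # through the tanh
    dW1 = X.T @ d_h

    # slowly nudge the weights against the gradient
    W1 -= 0.5 * dW1
    W2 -= 0.5 * dW2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The key point for the comparison with biology is the backward pass: each layer's update needs error information computed at the output and carried back through every layer in between.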
In the brain, things don’t really work like that. Neurons don’t perform backpropagation; if I remember correctly, they instead build proteins to improve the conductivity along their axons, so a connection strengthens the more current passes through it. Similarly, when multiple neurons in a close region fire together, they sort of wire together. New connections between neurons can appear from this process, which neuroscientists refer to as neuroplasticity.
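The contrast with backpropagation can be sketched with a toy Hebbian rule: a connection is strengthened whenever the two neurons it joins are active at the same time, using only local information and no backward error signal. This is a cartoon, not a biophysical model; the sizes and probabilities are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
weights = np.zeros((n, n))

for _ in range(1000):
    # random binary activity for each neuron
    activity = (rng.random(n) < 0.5).astype(float)
    # neurons 0 and 1 are driven by a shared input,
    # so they tend to fire together
    shared = float(rng.random() < 0.5)
    activity[0] = activity[1] = shared
    # Hebbian update: co-active pairs get a stronger connection
    weights += 0.01 * np.outer(activity, activity)

np.fill_diagonal(weights, 0.0)
print(weights[0, 1], weights[2, 3])
```

After training, the 0-1 connection ends up noticeably stronger than that of an unrelated pair like 2-3, purely because those two neurons fired together more often.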
When it comes to the Doom example you’ve given, that approach relies on the fact that you can encode the visual information as signals. It is a reinforcement learning problem where the action space is small and the reward function is pretty straightforward. When it comes to LLMs, the usual vocabulary size of the more popular models is between 30k and 60k tokens (these are small parts of a word, for example “#ing” in “writing”). That means you would need a way to encode each token to feed it to the biological neural net, and unless you encode it as a phonetic representation of the word, you’re going to need a lot of neurons to mimic the behaviour of the computer version of an LLM, which is not really feasible. Oh, and let’s not forget that you would need to formalize the output of the network and find a way to measure it! How would we know which neuron produces the output for a specific part of a sentence?
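The subword idea mentioned above (“#ing” in “writing”) can be sketched with a toy greedy longest-match tokenizer. The vocabulary here is a made-up handful of entries and the “#” continuation marker follows the example in the text; real models learn tens of thousands of subwords from data.

```python
# Toy subword tokenizer: greedily match the longest vocabulary
# entry, marking word-internal continuations with "#".
vocab = {"writ", "read", "play", "ing", "er", "s"}

def tokenize(word):
    tokens, i = [], 0
    while i < len(word):
        # try the longest remaining prefix that is in the vocab
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece if i == 0 else "#" + piece)
                i = j
                break
        else:
            # fall back to a single character
            tokens.append(word[i] if i == 0 else "#" + word[i])
            i += 1
    return tokens

print(tokenize("writing"))  # ['writ', '#ing']
print(tokenize("players"))  # ['play', '#er', '#s']
```

Each such token would need its own encoding scheme to be presented to a biological network, which is where the feasibility problem above comes from.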
We humans are capable of learning language mainly because this skill is encoded in our DNA. It is a very complex problem that requires the interaction of multiple specialized areas: e.g. Broca’s area (for speech), Wernicke’s area (understanding and producing language), certain bits in the lower temporal cortex that handle categorization of words and other tasks, plus a way to encode memories using the hippocampus. The body generates these areas using the genetic code, which has been iteratively improved over many millennia. If you dive really deep into this subject, you’ll find some scientists who argue that consciousness is not really a thing: that we are a product of our genes and the surrounding environment, and that we act in predefined ways.
Therefore, you wouldn’t be able to call a small neuron array conscious. It only carries out a simple electrochemical process, which occurs when you supply enough current for a few neurons to reach the threshold potential of about -55 mV. To have things like emotion, bodily autonomy and the many other things one would think of when talking about consciousness, you would need a lot more components.
I remember showing my late dad Hot Fuzz. He was laughing so hard he was practically wheezing at how relatable it was from his US law enforcement background lol. (Especially the part where he’s fielding outrageous questions at the primary school LOL)
That was a really good time spent with my dad. I love that movie. We’d be quoting it all the time hahaha.
“Aww man did ya say anythin cool?”
“I uh, smashed him over the head with a pot and said ‘playtime’s over’.”
AITAH for telling my sister her "boundary" will destroy her relationship with her nephew. ( old.reddit.com )
This is an automated archive made by the Lemmit Bot. ...
What's up with that?
[Thread, post or comment was deleted by the author]
‘He was a deeply unembarrassed racist’: Nigel Farage, by those who have known him ( www.theguardian.com )
Could real neurons be used for LLMs?
Of all movies that you gotta watch more than once to really understand, what is your favorite and why?