Riskable, riskable@programming.dev

Instance: programming.dev
Joined: 2 years ago
Posts: 9
Comments: 681

Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

Posts and Comments by Riskable, riskable@programming.dev

YES! I used to work for the payment card industry. They’re parasites! They serve no real purpose anymore in the age of the Internet. There’s no reason for them to exist.

They’re still charging transaction fees that were created in the time when everything had to be processed by hand.

Transaction fees should not exist!


Before the ID requirement, you simply used your voter registration card. You’d walk up to the reception counter, give them your card, and they’d verify your address (which was not on the card), and hand you the ballot.

Now they scan your state ID and look up your voter registration data. If the Internet connection goes down, they have printouts of the voter rolls (for that polling location) to use to match the ID to the person’s registration.

The ID doesn’t really add much, honestly. It’s not like the people behind the counter even look at your face to verify the photo. They’re literally just scanning it in to make sure it’s a valid ID… which is no stronger a check than the old voter registration card was.

Let’s say you show up to vote as someone else (voter fraud). You need to have that person’s voter registration card and know their address. You also need to show up at the correct voting venue and make sure to vote before they do or you’ll be caught. Everyone gets marked as having voted the moment they get their card checked.

Then later, when the actual voter shows up to vote (with a legit voter registration card), the people at the counter will notice something is wrong. The fraudulent voter will now be investigated.

It’s possible for this to happen, but pulling it off at scale is way too complicated and would require way too many resources. The people that try it are risking too much for too little.

That’s why most voting fraud that happens is tiny and when people are caught it usually turns out that they’re… Mentally challenged. Like MAGA people…

https://en.wikipedia.org/wiki/Electoral_fraud_in_the_United_States


We do already have poop emoji gas.



…and burns people’s homes down due to a lack of safety features.

…and children choke to death on easily removable small parts.

…and people get electrocuted because there’s no warning label telling them not to use it in the bath.




Free shipping to send him away? I’ll pay that subscription 👍


RVA23 is a big deal because it allows the big players (e.g. Google, Amazon, Meta, OpenAI, Anthropic, and more) to avoid vendor lock-in for their super duper ultra wicked mega tuned-to-fuck-and-back specialty software (not just AI stuff). Basically, they can tune their software to a generic platform to the nth degree and then switch chips later if they want without having to re-work that level of tuning.

The other big reason why RISC-V is a big deal right now is energy efficiency. 40% of a data center’s operating cost is cooling. By using right-sized RISC-V chips in their servers they can save a ton of money on cooling. Compare that to, say, an Intel Xeon, where the chip wastes energy on zillions of unused extensions and sub-architecture stuff (thank Transmeta for that). Every little unused part of a huge, power-hungry chip like a Xeon eats power and generates heat.

Don’t forget that vector extensions are also mandatory in RVA23. That’s just as big a deal as the virtualization stuff because AI (which heavily relies on vector math) is now the status quo for data center computing.

My prediction is that AI workload enhancements will become a necessary feature in desktops and laptops soon too. But not because of anything Microsoft integrates into their OS and Office suites (e.g. Copilot). It’ll be because of Internet search and gaming.

Using an AI to search the Internet is such a vastly superior experience, there’s no way anyone is going to want to go back once they’ve tried it out. Also, in order for it to work well it needs to run queries on the user’s behalf locally. Not in Google or Microsoft’s cloud.

There’s no way end users are going to pay for an inferior product that only serves search results from a single company (e.g. Microsoft’s solution—if they ever make one—will for sure use Bing and it would never bother to search multiple engines simultaneously).


I play Beat Saber every day. Been doing this for over three years now and I’ve actually become the #1 player:

https://beatleader.com/ranking/1?sortBy=playCount

(In play count, hehe)

I burn about 650-750 calories over the course of an hour. I don’t take breaks between maps and I have never paused.

When I first started playing I could barely make it through 5-minute, 6 ⭐ maps. Now I can play 20-minute, 9 ⭐ maps and just keep playing until the battery on my headset runs out.

I haven’t been this healthy or fit since I was in high school (I’m 47).




It’s a common problem with electromechanical switches. Especially if you have salty, sweaty hands.

The solution is to get a keyboard that uses contactless switches like Hall effect, TMR, optical, etc.

I designed and built my own Hall effect keyboard (and custom 3D printed Void Switches) from scratch because I was having to replace my keyboard once every 18 months or so because of the exact problem you describe 🤷
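For the curious, here’s roughly why contactless switches dodge the corrosion problem: a mechanical switch reports contact closure (which degrades as the contacts oxidize), while a Hall effect switch reports an analog magnet position, so the firmware decides the actuation point in software. Everything in this sketch (the threshold numbers, the function names) is made up for illustration:

```python
ACTUATE = 600    # ADC counts: key registers as pressed at/above this
RELEASE = 550    # lower release point = hysteresis, prevents chatter

def update_key(adc_value: int, pressed: bool) -> bool:
    """Return the new pressed state given the latest analog reading."""
    if not pressed and adc_value >= ACTUATE:
        return True
    if pressed and adc_value <= RELEASE:
        return False
    return pressed
```

Since there’s no physical contact to corrode, the sensor keeps working no matter how salty your fingers are.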


Some day, a lucky archeologist will unearth the one true archive from an innocent-looking tarball.


Vegan Linux users can compile their own protein from source.

Their purity level is so high that they can kill -9 anyone wearing a leather belt with just a glance.


Not at this point, no. Not unless you know how to set up and manage Docker images and have a GPU with at least 16GB of VRAM.

Also, if you’re not using Linux, forget it. All the AI stuff anyone would want to run is a HUGE pain in the ass to run on Windows. The folks developing these models and the tools to use them are all running Linux, both on their servers and on their desktops, and it’s obvious once you start reading the README.md for most of these projects.

Some will have instructions for Windows but they’ll either be absolutely enormous or they’ll hand-wave away the actual complexity: “These instructions assume you know the basics of advanced rocket science and quantum mechanics.”


It depends on the size of the content on the page. As long as it’s small enough to be contained within the context window, it should do a good job.

But that’s all irrelevant since the point of the summary is just to give you a general idea of what’s on the page. You’ll still get the actual title and whatnot.

Using an LLM to search on your behalf is like using grep to filter out unwanted nonsense. You don’t use it like, “I’m feeling lucky” and pray for answers. You still need to go and open the pages in the results to get at what you want.
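To make the context window point concrete, here’s a back-of-the-envelope sketch. The ~4 characters per token ratio and the window size are rough assumptions for English text, not any particular model’s numbers:

```python
CONTEXT_WINDOW = 8192    # tokens the (hypothetical) model accepts
CHARS_PER_TOKEN = 4      # crude estimate for English text

def fits_in_context(page_text: str, reserve: int = 512) -> bool:
    """True if the page (plus `reserve` tokens for the prompt and the
    answer) should fit in one shot; otherwise it needs chunking."""
    est_tokens = len(page_text) / CHARS_PER_TOKEN
    return est_tokens + reserve <= CONTEXT_WINDOW

def chunk(page_text: str, reserve: int = 512) -> list[str]:
    """Split an oversized page into context-sized character chunks."""
    max_chars = (CONTEXT_WINDOW - reserve) * CHARS_PER_TOKEN
    return [page_text[i:i + max_chars]
            for i in range(0, len(page_text), max_chars)]
```

A page that fits gets summarized in one pass; anything bigger has to be chunked (or truncated), which is where summaries start getting lossy.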


AI models aren’t trained on anything “stolen”. When you steal something, the original owner doesn’t have it anymore. That’s not being pedantic, it’s the truth.

Also, if you actually understand how AI training works, you wouldn’t even use this sort of analogy in the first place. It’s so wrong it’s like describing a Flintstones car and saying that’s how automobiles work.

Let’s say you wrote a book and I used it as part of my AI model (LLM) training set. As my code processes your novel, token-by-token (not word-by-word!), it’ll increase or decrease floating point values (the model’s weights) by something like 0.001 each. That’s it. That’s all that’s happening.
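If you want to see what that nudging actually looks like, here’s a deliberately toy version of a training step: one weight and a made-up loss, whereas real training does this across billions of weights at once:

```python
weight = 0.0
learning_rate = 0.001
target = 0.5            # stand-in for "what the text actually says next"

for _ in range(1000):
    prediction = weight
    gradient = 2 * (prediction - target)  # slope of (prediction - target)^2
    weight -= learning_rate * gradient    # the ~0.001-sized nudge

# After a thousand tiny nudges, weight has drifted toward the target.
```

No copy of the book exists anywhere in there, just a number that got nudged.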

To a layman, that makes no sense whatsoever, but it’s the truth. How can a huge list of floating point values be used to generate semi-intelligent text? That’s the actually really fucking complicated part.

Before you can even use a model you need to tokenize the prompt, and that input gets processed a zillion ways during each inference step before the .safetensors file (which holds the AI model’s weights) comes into play at all.

When an AI model is outputting text, it’s using a random number generator in conjunction with a token prediction algorithm that’s based on the floating point values inside the model. It doesn’t even “copy” anything. It’s literally built upon the back of an RNG!
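Here’s a minimal sketch of that RNG-driven loop. The model is faked as one fixed next-token distribution; a real LLM recomputes these probabilities from its weights at every step:

```python
import random

def sample_next_token(probs: dict[str, float], rng: random.Random) -> str:
    """Pick one token at random, weighted by the model's probabilities."""
    tokens = list(probs)
    return rng.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]

rng = random.Random(42)                        # seeded so runs repeat
probs = {"cat": 0.6, "dog": 0.3, "pelican": 0.1}
picks = [sample_next_token(probs, rng) for _ in range(1000)]
# Over many samples the frequencies track the probabilities, but any single
# output is random chance -- which is the point about "copying" above.
```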

If an LLM successfully copies something via its model, that’s just random chance. The more copies of something that went into its training, the higher the chance of it happening (and that’s considered a bug, not a feature).

There’s also a problem that can occur on the opposite end: When a single set of tokens gets associated with just one tiny bit of the training set. That’s how you can get it to output the same thing relatively consistently when given the same prompt (associated with that set of tokens). This is also considered a bug and AI researchers are always trying to find ways to prevent this sort of thing from happening.


> No it can’t do that. It’s an LLM, it can only generate the next word in a sequence.

Your knowledge is out of date, friend. These days you can configure an LLM to run tools like curl, nmap, ping, or even write and then execute shell scripts and Python (though in a sandbox, for security).

Some tools that help you manage the models are preconfigured to make it easy for them to search the web on your behalf. I wouldn’t be surprised if a whole ecosystem of AI tools just for searching the web emerges soon.
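The basic loop those tools implement is pretty simple. This is just a sketch: the JSON reply format and the whitelist are stand-ins, since every framework defines its own schema:

```python
import json
import subprocess

ALLOWED_TOOLS = {"ping", "curl"}   # whitelist; never run arbitrary commands

def handle_model_reply(reply: str) -> str:
    """If the model asked for a tool, run it and return the output;
    otherwise the reply text is the final answer."""
    msg = json.loads(reply)
    if msg.get("type") != "tool_call":
        return msg["text"]
    if msg["tool"] not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {msg['tool']!r} not allowed")
    result = subprocess.run([msg["tool"], *msg["args"]],
                            capture_output=True, text=True, timeout=30)
    return result.stdout

# e.g. the model might emit:
# {"type": "tool_call", "tool": "ping", "args": ["-c", "1", "example.com"]}
```

The tool output gets fed back into the model’s context so it can decide what to do next, and the whitelist/sandbox part is what keeps it from being a security nightmare.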

What Mozilla is implementing in Firefox will likely start with cloud-based services but eventually it’ll just be using local models, running on your PC. Then all those specialized AI search tools will become less popular as Firefox’s built-in features end up being “good enough”.


Have you tried using an LLM configured to search the Internet for you? It’s amazing!

Normal search: Loads of useless results, ads, links that are hidden ads, scams, and maybe on like the 3rd page you’ll find what you’re looking for.

AI search: It makes calls out to Google and DDG (or any other search engines you want) simultaneously, checks the content on each page to verify relevancy, then returns a list of URLs that are precisely what you want with summaries of each that it just generated on the fly (meaning: They’re up to date).
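That flow is simpler than it sounds. Here’s a sketch with the engines and page fetching faked out; a real agent would hit actual search APIs and let the LLM judge relevancy instead of a keyword check:

```python
def ai_search(query, engines, fetch_page, is_relevant):
    """engines: callables query -> list of URLs;
    fetch_page: url -> page text; is_relevant: (query, text) -> bool."""
    seen, results = set(), []
    for engine in engines:                  # fan out to every engine
        for url in engine(query):
            if url in seen:
                continue                    # dedupe across engines
            seen.add(url)
            text = fetch_page(url)
            if is_relevant(query, text):    # verify content, not just titles
                results.append((url, text[:200]))  # crude on-the-fly summary
    return results
```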

You can even do advanced stuff like, “find me ten songs on YouTube related to breakups and use this other site to convert those URLs to .ogg files and put them in my downloads folder.”

*Local*, FOSS AI running on your own damned PC is fucking awesome. I seriously don’t understand all the hate. It’s the technology everyone’s always wanted and it gets better every day.

