cmhe, cmhe@lemmy.world

Instance: lemmy.world
Joined: 2 years ago
Posts: 0
Comments: 251

Posts and Comments by cmhe, cmhe@lemmy.world

That would actually be the wrong thing to want. In an ideal system, trust would always begin with the owner of the hardware, where possible, not with the software or vendor they decide to trust.

First, the person that bought the system should take ownership by overwriting the previous owner’s keys, and from there sign the keys of the vendors they decide to put their trust in. It is important that the system is trustworthy to the end user/owner first.
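On a typical UEFI machine, that ownership takeover can be sketched with the `sbctl` tool. This is a rough illustration only, assuming the firmware has been put into Setup Mode (old keys cleared); the kernel path is a placeholder:

```shell
# Assumes UEFI Setup Mode and sbctl installed; run as root.
sbctl create-keys                  # generate your own owner keys (PK/KEK/db)
sbctl enroll-keys --microsoft      # enroll them, plus Microsoft's vendor certs you chose to trust
sbctl sign -s /boot/vmlinuz-linux  # sign the kernel/bootloader you trust (example path)
sbctl verify                       # check that everything the firmware loads is signed
```

The order matters: your keys go in first, and vendor certificates are only trusted because you enrolled them.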

Any anti-cheat mechanism relies on not trusting the person that owns the hardware, and why would that be good?


Maybe. As long as it isn’t authoritarian, protects the weak from the strong, and provides people with the greatest personal positive freedom, security, and safety, without infringing on the personal freedom of others, it should be fine. We will have to work out the specifics together.

But we are getting off track. Currently I would say that the machinations of Russia, China, the U.S., Israel, and Iran are not good for the world. And in the case of authoritarian regimes, this can be traced back to the leaders of these countries, not the population. The fish stinks from the head. So saying this or that country is bad doesn’t mean the population should suffer.


IDK, I can’t think of what kind of repercussions that would have… That would really depend on how it is dissolved and what comes after it, I suppose.

But TBH, with the right handling, organization, and a proper new structure, every nation should be dissolved… Cosmopolitanism would have its perks.


TBH, Russia is a country and is not the same as its Russian citizens. Saying that a country or its existence is bad doesn’t mean all people with citizenship of that country are bad. The people will still be there when the country is dissolved.


That isn’t really saying that much. It could still be the Creation Engine with a UE5 renderer on top, like the Oblivion remaster.


Security through layers. The flaws found here concern a compromised server, so hosting your own server is a good first step. The next step is making the server accessible only via your own VPN. And of course hardening the server itself.
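A minimal sketch of that “VPN-only” step, assuming a WireGuard interface named wg0 and an nftables firewall (interface name, tunnel port, and service ports are placeholders):

```
# /etc/nftables.conf (fragment): drop everything that does not
# arrive over the WireGuard tunnel, except the tunnel port itself.
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    udp dport 51820 accept                  # WireGuard handshake from outside
    iif "wg0" tcp dport { 22, 443 } accept  # SSH + the service, tunnel only
  }
}
```

With a default-drop policy like this, the service never faces the open internet directly, so a server-side flaw can only be reached by VPN peers you have enrolled.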


This is what all the listed password managers claim.

What happened here was the server tricking the client into doing things, so the fixes went into the client application.


TBF, people have been programming with XML, JSON, YAML and so forth for a while now. From XSLT to Ansible.
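“Programming in YAML,” Ansible-style, looks roughly like this. The module name is real; the task itself and the `admin_users` variable are made-up illustrations:

```yaml
# Control flow (a loop and a conditional) expressed in YAML,
# evaluated by Ansible rather than a general-purpose language.
- name: Ensure admin users exist
  ansible.builtin.user:
    name: "{{ item }}"
    groups: sudo
  loop: "{{ admin_users }}"
  when: ansible_facts['os_family'] == "Debian"
```

The loop, condition, and templating all live in data syntax, which is exactly the sense in which YAML has long been used as a programming language.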


Because they’re the ones asking the question. You can flip this around and ask why should the person put time into researching and answering the question for OP?

Because they were asked. They don’t need to spend time if they don’t want to. They can just say, “I don’t know.” or, if they want to be more helpful, “I don’t know, but I can ask an LLM, if you’d like.” If someone answers, they should at least have something of their own to say.

If they have no obligation, then an AI answer, if it’s right, is better than no answer, as it gives OP some leads to research. OP can always just ignore the AI answer if they don’t trust it; they don’t have to validate it.

No, it’s not better. They were asked, not the LLM.

Do you always think like that? Like… if some newspaper starts printing LLM-generated slop news articles, would you say, “It is the responsibility of the reader to research whether anything in it is true”? No, it’s not! A civilization is built on trust, and if you start eroding that kind of trust, people will distrust each other more and more.

People that assert something should be able to defend it. Otherwise we are in a post-fact/post-truth era.

Fair enough, but etiquette on AI is new and not universal. We don’t know that the person meant to disrespect OP. The mature thing to do would be for OP to say that they felt disrespected by that response, instead of pretending like it’s fine and reinforcing the behavior, which will lead to that person continuing to do it.

That really depends on the history and relationship between them. We don’t know that, so I will not assume anything. They could have had previous talks where they stated that they don’t like LLM generated replies. But this is beside the point.

All I assumed is that they likely have not agreed to an LLM reply beforehand.

It’d be like if someone used a new/obscure slur: the right thing to do is inform them how it is offensive, not pretend it’s fine and start using it in the conversation to fuck with them. If they keep using it after you inform them, then yeah, fuck them, but make sure you’re not normalizing it yourself too.

No, this is a false comparison. If I want to talk to a person and ask for their input on a matter, I want their input, not them asking their relatives or friends. I think this is just normal etiquette and a basic social assumption. This was true even before LLMs were a thing.

The rest of your comment seems to be an argument against getting answers from the Internet in general these days. A person doing research is just as likely as an agent to come across bogus LLM content, and a person also isn’t getting actual real world data when they are researching on the Internet.

Well… no… My point is that LLMs (or the agents you keep going on about, which are just LLMs that populate their context with content from random internet searches) are making research generally more difficult. But it is still possible. Nowadays you can no longer trust the goodwill of researchers; you have to get to the bottom of it yourself: looking up statistics, doing your own experiments, etc. A person is generally superior to any LLM agent, because they can do that. People in a specific field understand the underlying rules, and don’t just produce strings of words that they make up as they go. People can research the reputation of certain internet sites, and look further and deeper.

But I do hope that people will become more aware of these issues and learn the limits of LLMs, so that they know they cannot rely on them. I really wish that copy&pasting LLM-generated content without validating and correcting its output would stop.


Have you ever used a chatbot? The fact that an answer is unverifiable doesn’t stop them from answering at all.

Yes, I’ve used chatbots. And yes, I know that they always manage to generate answers full of conviction even when wrong. I never said otherwise.

My point is that the person using a chatbot/LLM needs to be able to easily verify whether a generated reply is right or wrong; otherwise it doesn’t make much sense to use an LLM, because they could have just researched the answer directly instead.


No! Why should the one receiving a generated answer, after asking a person, spend more effort validating it than the person that asked the LLM in the first place? They could have just replied: “I don’t know. But I can ask some LLM, if you like.”

Answering someone unsolicited with LLM-generated blubber is a sign of disrespect.

And sure, the LLM can generate lots of different sources. Even more than exist: references to other sites that were generated by LLMs, or written by human authors that used LLMs, or researchers that wrote their papers with LLMs, or that refer to other authors that used LLMs, and so on and so forth.

The LLM, or those LLM ‘agents’, cannot go outside, do experiments, interview witnesses, or do proper research. All they do is look for hearsay and confidently generate a string of nice-sounding and maybe even convincing words.


From the context, it seems to be a question whose answer isn’t easily verifiable. So it is a question unsuitable for LLMs, because verifying that answer would take more work than researching and answering it yourself.

People using AI should know that. So making fun of them is fine.

That the discussion devolved into arguing whether or not a certain LLM’s first, second, or third answer is wrong, true, or truer, or whether the prompt needs to be modified, or whether or not another model is superior at answering these types of questions, is exactly the useless and distracting discussion that prevents people from talking about the answer to the question at hand, and the reason why people get less efficient when using LLMs.


I would agree, but if cars can just drive straight, why should bikes be slowed down?

Fast road cyclists will then just ride on the road instead, because there they can keep up their speed unhindered.

If bike lanes offer a worse experience than riding on the road, for instance sharp curves, steeper hills, worse-maintained asphalt, or less optimal ways to turn into a side road, then cyclists will want to keep using the road. Because they are being treated as second-class traffic participants.

Cars, instead, should be treated as the second-class vehicle, because they require more space and infrastructure and are less efficient.


Well, it isn’t open hardware. Its license isn’t OSH-compliant.


No? This isn’t an open-source printer; its license is neither OSI-compliant nor FSF-compatible.


And everything else, since it isn’t under an open source license.


Pleading guilty would have at least demonstrated some remorse.

I’d rather forgive someone that says “Sorry, I did something bad, and I’ll learn from it” than someone that first says “I didn’t do it.” and then, when informed that they aren’t going to be punished, says “Whatever.” and walks away.

So her not pleading guilty and still not getting punished, while at fault, is more ridiculous to me.


But she didn’t plead guilty, according to the article. She pleaded ‘no contest’, so she didn’t admit guilt; she just stopped defending herself after she heard that she wasn’t going to be punished anyway.

She just walked away after the judge said she could do that without consequences.


TBH, paying for every RGB combination would have been a bit funnier, as the ridiculous next step after paying for retextures.

Letting people pay for every change is just too lazy and uninspired…


In the case of illegal numbers, intention matters. Any number can be converted into different numbers, for instance through XOR ‘encryption’, different ‘encodings’, or other mathematical operations, which would be equally illegal if used with the intention of copying copyright-protected material.

This was the case previously: you cannot simply re-encode a video, a big number on your disk, with a different codec into another number in order to circumvent copyright.

However, if big business now argues that copyright-protected work encoded in neural network models does not violate copyright, and that generated work has no protection, then this previous rule no longer holds. And we can strip copyright from everything using that ‘hack’.
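The point about trivially converting one “big number” into another can be shown in a few lines of Python (the data and key here are arbitrary placeholders; XOR is obfuscation, not protection):

```python
# Treat some copyrighted bytes as one big number, "encrypt" it with XOR,
# and show the result is a different number that trivially converts back.
data = b"some copyrighted work"
key = 0xA5

encoded = bytes(b ^ key for b in data)   # a different big number
original_int = int.from_bytes(data, "big")
encoded_int = int.from_bytes(encoded, "big")

assert original_int != encoded_int                # the numbers differ...
assert bytes(b ^ key for b in encoded) == data    # ...but decode identically
```

Which is exactly why intention, not the number itself, is what the law has to look at.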

