• 26 Posts
  • 697 Comments
Joined 1 year ago
Cake day: January 30th, 2025


  • If some newspaper started printing LLM-generated slop news articles, would you say, “It is the responsibility of the reader to research whether anything in it is true or not?” No, it’s not! A civilization is built on trust, and if you start eroding that kind of trust, people will distrust each other more and more.

    As long as the newspaper clearly says it’s written by an LLM, that’s fine with me. I can either completely ignore it or take it with a grain of salt. Truth is built on trust, but trust should be a spectrum: you should never fully believe or fully dismiss something based on its source. There are some sources you can trust more than others, but there should always be some doubt. I have a fair amount of trust in LLMs because in my experience they are correct most of the time. I’d trust them more than something printed in Breitbart but less than something printed in the New York Times, but even with the New York Times I watch out for anything that seems off.

    You, along with most of this sub, seem to have zero trust in LLMs, which is fine; believe what you want. I’m not going to argue with you on that, because I’m not going to be able to change your mind, just as you won’t be able to change Trump’s mind on the New York Times. I just want you to know that there are people who do trust LLMs and do think their responses are valuable and can be true.

    If I want to talk to a person and ask for their input on a matter, I want their input, not them asking their relatives or friends. I think this is just normal etiquette and a normal social assumption. This was true even before LLMs were a thing.

    I don’t think this is universal. That may be your expectation, but assuming it’s not something private or sensitive, I’d be fine with my friend asking a third party. Like if I texted in a group chat that I’m having car trouble and asked if anyone knows what’s wrong, I would not be offended if one of my friends texted back that their uncle’s a mechanic and said to try x. I would be offended if that person lied about it coming from their uncle or lied about their uncle being a mechanic, but in this case the person was very clear about the source of the information and its “credentials”. Part of the reason I may ask someone something is that, if they don’t know the answer, they may know someone who does and can forward the question on.

    Nowadays you can no longer trust the goodwill of researchers; you have to get to the bottom of it yourself: looking up statistics, doing your own experiments, etc. A person is generally superior to any LLM agent, because they can do that. People in a specific field understand the underlying rules and don’t just produce strings of words that they make up as they go. People can research the reputation of certain internet sites and look further and deeper.

    I don’t think this is true for every person. Maybe for experts, but an AI agent is probably just as good as a layman at doing online research. Yes, if you can ask an expert in the field to do the research for you, they will be better than an AI agent, but that’s rarely an option. Most of the time it’s going to be you by yourself, or if you’re lucky a friend with some general knowledge of the area, googling something, looking through the top 3-5 links, and using those to synthesize an answer. An AI agent can do that just as well and may have more “knowledge” of the area than the person. Like, ChatGPT knows more about, say, the country of Bhutan than your average person. Probably not as much as a Bhutanese person, but you probably don’t know a Bhutanese person and can’t ask them the question. It can even research the sources themselves, or use a tool that rates the trustworthiness of a source to decide which one is right when they contradict.






  • Why should the one receiving a generated answer when asking a person spend more effort validating it?

    Because they’re the one asking the question. You can flip this around and ask why the other person should put time into researching and answering the question for OP. If they have no obligation, then an AI answer, if it’s right, is better than no answer, as it gives OP some leads to research. OP can always just ignore the AI answer if they don’t trust it; they don’t have to validate it.

    Unsolicitedly answering someone with LLM generated blubbering is a sign of disrespect.

    Fair enough, but etiquette around AI is new and not universal. We don’t know that the person meant to disrespect OP. The mature thing would be for OP to say that they felt disrespected by that response, instead of pretending it’s fine and reinforcing the behavior, which will lead to that person continuing to do it.

    It’d be like if someone used a new / obscure slur: the right thing to do is inform them that it’s offensive, not pretend it’s fine and start using it in the conversation to fuck with them. If they keep using it after you inform them, then yeah, fuck them, but make sure you’re not normalizing it yourself too.

    Meanwhile, lying to someone to fuck with them and making stuff up is universally known to be disrespectful. OP was intentionally disrespectful; the other person may not have been.

    The rest of your comment seems to be an argument against getting answers from the Internet in general these days. A person doing research is just as likely as an agent to come across bogus LLM content, and a person also isn’t getting actual real world data when they are researching on the Internet.



  • Because verifying that answer would take more work than researching and answering it yourself.

    Not necessarily, again it depends on whether it’s right or wrong. If it’s right it can give you a lead to research into, if it’s wrong then you’re just wasting your time following a dead end.

    You can also just ask the AI to give its sources. If the AI is agentic and not just a bare LLM, it’s probably already doing a web search and “reading” articles and papers on the topic to synthesize the answer. If it doesn’t give those links with the output, you can usually just ask, and it will embed them in the output, making verification easier since you can read them yourself.

    that the discussion devolved …

    Yeah, but it seems OP was the one who devolved it. They could’ve just said that they think the AI is wrong for x reasons and continued the discussion, but instead they made up a fake model with a fake answer, which inevitably leads to a discussion of why the different models contradict each other. And again, this is over a contradiction that probably doesn’t exist.

    This would be like if someone said their doctor told them to do something, I lied and said my doctor told me to do the opposite, and the discussion turned to which doctor was more trustworthy. Then, even though I lied, I’d claim that the fact that we wasted our time discussing the trustworthiness of doctors means we shouldn’t consult them at all, since it takes too much time arguing over my made-up scenarios.







  • Not_mikey@lemmy.dbzer0.com to topics@lemmy.world · Aerial view of San Francisco
    3 days ago

    Meh, it was mostly just sand dunes, at least for San Francisco. There are probably more trees there now than before it was developed. Also, San Francisco probably has the most natural area surrounding it of any major city in the US, since most of the area around it is either mountains or water, which you can’t build on. That’s also why it’s so dense.





  • your thumb for Cmd + C vs using your pinky to do Ctrl + C is also terrible in my opinion.

    I just shift my hand down and use my ring finger to hold Cmd and my index finger for C. I assume you’re doing the same shift to press Ctrl with your pinky; IDK why you would try to do Cmd with your thumb.

    Also, Cmd + C is better for copying and pasting from a terminal than Ctrl + Shift + C on Linux. IDK if it’s the same for Windows, but it’s annoying having to context switch: Ctrl + C in a web browser, then Ctrl + Shift + V to paste into a terminal.