Clearly the whole drama with the Pentagon making a big deal of showing that they’re trying to force AI companies to build autonomous AI killing machines and spy on citizens is completely manufactured.

Anthropic was always going to comply; the goal is just to create a marketing campaign portraying them as heroically resisting. All the media has been running the story of a plucky Anthropic defying the US military to defend ethical AI and protect humanity.

  • venusaur · 3 days ago

    All kinds of stuff! Coding, automation, research. It’s a tool just like anything. If you went on Google and just read the headlines of search results you’d be pretty dumb. Arrogance and virtue signaling are just as bad as using GPT’s.

    • TheOubliette@lemmy.ml · 2 days ago

      Emphasis on consistently.

      Coding: AI slop gives devs undue confidence to introduce glaring bugs, security holes, and unmaintainable structures, as they are not accustomed to doing proper code reviews (which is now their role - reviewing bad junior dev code). It works great at first, seemingly, and then racks up a massive cost later in the form of fixing its problems. Of course, you can just not fix those problems and live with terrible security and constantly rewriting half the codebase to try and implement a single feature. LLMs can reproduce patterns but can’t really think. You will end up spending just as much time, if not more, building something half decent using it, but then likely end up not properly understanding what was built. And God help you if you want to implement using version 4.3 of some library rather than the much more publicly documented version 3.x.

      Automation: I dunno, the only irl examples I have seen of automation have been catastrophes because the person trusted a broken implementation. They were real excited at first and then had a bad time a couple months later. But I’m sure there are examples of this where “good enough” meshes reasonably well with the capabilities of LLMs.

      Research: Oh, I strongly discourage this. These are pattern regurgitation machines; they will reproduce what is common, and that is not the same as what is true. And that is before accounting for “hallucinations”, which are really just more pattern-making - the same process as the non-hallucinations, just more obviously wrong rather than subtly wrong. This is a surefire way to unlearn how to do good research and adopt false ideas without even knowing it.

      Re: reading and believing headlines: yes that will also lead you astray. Doesn’t make the lie regurgitation machine a good idea for most topics.

      Re: “Arrogance and virtue signaling” I have absolutely no idea what you are referring to.

      • venusaur · 1 day ago (edited)

        These are all examples that you don’t know how to use [LLM]’s effectively. You’re not even trying. It’s a tool. It’s not a replacement brain.

        • TheOubliette@lemmy.ml · 2 days ago

          1. They’re LLMs. GPT is the name given to some of OpenAI’s models. Most LLMs that people use, including for generating software, are not based on GPT.

          2. My examples, and I explicitly wrote about them as such, are based on others’ experiences as well. Not to mention a fundamental understanding of how these systems work and why and how they fail. I am not exactly new to building or using large language models.

          3. I haven’t written about them as a “replacement brain” but doing something foolish like using them for research is exactly that. It’s like having an encyclopedia at your fingertips so you spend less effort in learning about various subjects, so you now don’t exercise part of your brain to actually understand materials. But worse: this encyclopedia lies and makes things up to match the prompts you give it (weighed against its model), to give the appearance of certainty, to best recapitulate your chat history. Though it can’t even lie because it doesn’t know anything, it just does pattern generation. Oh, and worse still, they are trained on the worst information for research (e.g. Reddit) and tuned for various false narratives depending on the topic. Rather than rely on them for research one should learn basic media criticism.

          Using them to generate code leads to the problems I described for the reasons I gave. You can germanely respond to that if you’d like.

          • venusaur · 1 day ago

            Thanks for your permission to reply to your comment your highness. You’re right, they’re LLM’s.

            There are absolutely effective ways to use it to write code or do other tasks, as long as you are not just plugging in whatever it gives you without any understanding of what it’s doing. You’re failing to even think about those possibilities. I don’t develop software. I write code for myself to complete automated tasks. I’m not programming missile launches. It’s completely safe and effective to use AI in many ways.

            You can absolutely use LLM’s for research. What do you think Google does? It provides you with options to learn from. An LLM derives answers from search results or pretrained knowledge and you have the opportunity to either run with that or use it effectively to dig in more and validate these answers.

            Your fight is not with AI. It’s with dumb people. LLM’s are just a tool. Think deeper about the systemic reasons why you are so against AI. It’s because people will misuse it.

            Please don’t respond. Thanks!

            • TheOubliette@lemmy.ml · 1 day ago

              Thanks for your permission to reply to your comment your highness.

              Well you see, the last reply didn’t respond germanely to what I had said. It instead made a baseless accusation. So I invited you to continue the conversation germanely. I can always just stop replying to you.

              You’re right, they’re LLM’s.

              LLMs, but sure close enough.

              There are absolutely effective ways to use it to write code or do other tasks, as long as you are not just plugging in whatever it gives you without any understanding of what it’s doing.

              “Effective” is vague. I have made more specific criticisms. Slop vibe coders think their work is effective, don’t they? Even as they leave critical endpoints exposed, unauthenticated.

              As I said, using these “tools” means someone is now in the position of doing code reviews of an incompetent junior dev that keeps making the same kinds of mistakes, some of which I listed. And yet those who seem most enamored with these tools actually can’t do that review, as those who can realize how much time is not saved by having to review the code of an incompetent junior dev that, for example, keeps doing rewrites rather than addressing a problem you pointed out directly.

              You’re failing to even think about those possibilities.

              I have already given examples and you have not replied to them.

              I don’t develop software. I write code for myself to complete automated tasks.

              There is no difference between those things. But sure, you are saying you are the only user of your software. Nobody else has to suffer if something goes wrong, you don’t get fired, and maybe it isn’t exposed to the internet or your LAN so you don’t have to think about security (who knows?). At the same time, if the work is too trivial, the value of the tool itself also diminishes. Is it more than 500 lines of code? Does it need to be? How do you know it’s correct? Does it need to be correct?

              Most code is read many more times than it is written, and the writing portion is more about thinking through and understanding the problem to solve. The time savings is not particularly high unless the tool is being used as a way for someone who doesn’t understand these things to make something that looks correct with a scant once-over. It helps them because they couldn’t write the widget in the first place. The LLM might do the widget in 10 seconds and then you have to review it for 5-10 minutes. Writing the widget might take 5-10 minutes and then need almost zero review, because you already did the review thinking as part of the process.

              I’m not programming missile launches. It’s completely safe and effective to use AI in many ways.

              If my criticisms, which have not been addressed, apply, then it’s neither.

              You can absolutely use LLM’s for research. What do you think Google does? It provides you with options to learn from.

              Google is ostensibly a search engine. It generates a list of results ordered by “relevance”, where relevance used to be PageRank: the idea was that it would crawl pages, index content, tie them together, and give you the results that were most-linked and most tied to your search terms. So Google theoretically finds you relevant web pages. Of course, it is highly limited by the terms you use, what is accessible on the internet, and what their censorship teams allow you to see. I would never tell someone who wanted to actually learn about a topic to just Google it. They’re just as likely to come across an accurate resource as they are to use one that is wrong in a serious way, ranging from subtly (but importantly) wrong to blatantly ridiculous yet entirely believable by someone who just starts typing terms into Google to learn something.
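              The link-following ranking described here can be sketched with a toy PageRank iteration. This is a minimal illustration, not Google’s actual implementation; the graph and all numbers are made up:

```python
# Toy PageRank sketch: pages accrue score from the pages that link to
# them, weighted by how well-linked those pages are themselves.
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}  # "teleport" baseline
        for page, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:  # otherwise split this page's rank among its outlinks
                for out in outs:
                    new[out] += damping * rank[page] / len(outs)
        rank = new
    return rank

# Hypothetical four-page web: "c" is linked to by a, b, and d.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "c": most linked-to, so highest rank
```

The point of the sketch is only that the ordering comes from link structure, not from any judgment about whether a page’s content is true.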

              Google does not provide you with options to learn from. It provides you its semi-curated list of websites in response to search terms, and every single one you see on the first page (the only one 99% of people see) may be bullshit.

              Does that mean it can’t be used for research? No. It can help you locate websites that do have good information, obviously. But it’s not a particularly good tool and LLMs are even worse for the reasons I’ve already described.

              An LLM derives answers from search results or pretrained knowledge

              Incorrect. An LLM constructs text streams based on its model(s) and inputs, the inputs being your prompt, generally the entire history of your “conversation” (hence it getting stuck on things it previously said even if you pointed out they are wrong), and whatever the devs decided to prepend to the inputs to make the LLM behave less poorly. They are not knowledge systems, they don’t think, and they don’t know things. The search results are indeed sometimes part of it, in that they are also added to the inputs.
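              The input assembly described here can be sketched roughly as follows. The function name and message shapes are hypothetical, loosely modeled on common chat APIs, just to show that the model receives one flat sequence rather than consulting any store of knowledge:

```python
# Sketch of how an LLM's input is typically assembled each turn:
# a dev-supplied system prompt is prepended, then the entire prior
# conversation, then the new user message.
def build_model_input(system_prompt, history, new_message):
    messages = [{"role": "system", "content": system_prompt}]  # prepended by devs
    messages += history  # the whole prior "conversation" is re-sent every turn
    messages.append({"role": "user", "content": new_message})
    return messages

history = [
    {"role": "user", "content": "What year was X released?"},
    {"role": "assistant", "content": "X was released in 2019."},  # even if wrong...
]
prompt = build_model_input("Be helpful and concise.", history, "Are you sure?")
# ...the earlier claim stays in the input, which is one reason models
# get "stuck" on things they previously said.
print(len(prompt))  # 4 messages: system + 2 history + new user turn
```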

              and you have the opportunity to either run with that or use it effectively to dig in more and validate these answers.

              It is not interesting or salient that you can decide to accept or reject what an LLM says. This applies regardless of whether my criticisms are true - criticisms you have not addressed.

              Your fight is not with AI. It’s with dumb people. LLM’s are just a tool.

              What I see is people making papier-mâché houses and telling me how cool it is that papier-mâché can build a house. It’s so fast and easy! And look, it’s house-shaped! Apparently my fight is with papier-mâché and not, say, the people who are telling me how great it is to build houses out of it. “It’s just a tool! You can’t be dumb about using it! Use it to build houses! Obviously you’ve just never used papier-mâché.”

              Please don’t respond. Thanks!

              So, you didn’t respond germanely to what I said and are resorting to bad faith due to your perception of condescension.

              I will likely not respond to you further. You don’t seem interested in engaging on this topic in good faith.

              • venusaur · 1 day ago

                Yikes. Honestly I’m not going to read this novel. Should probably put it through some LLM to make a more succinct response. There is beauty and intelligence in brevity.

                We’re not having a conversation worth having. You have absolutely made up your mind and are not open to changing it. At the end of the day, we are both right. I think you have valid points, and you probably do not agree with anything I’m saying, so there’s no point continuing.

    • trilobite@lemmy.ml · 3 days ago

      Folks, is the debate about whether we should use it or not, or is it about how to use it? The point really is that every time we use these tools, we are training them to become better. The question is rather how smart we want these tools to become. Stick good AI on good robots and Terminator won’t be sci-fi in a few years … lol I’m already starting to develop my anti-robot nuke system … :-)

      • Deacon · 3 days ago

        The problem is that - already and forevermore - the consumer and the AI have divergent definitions of “better”.

        AI is not being served to us in a neutral space, it is largely developed and controlled by companies that also control important Algorithms, and that is no coincidence.

        Use AI if you want to, but essentially You’re The Product and the cost is only the environment. Non-billionaires who think there is an actual value prop for them are so short-sighted they’re basically concussed.

      • venusaur · 3 days ago

        A lot of people on here think their brains belong in a jar next to Einstein’s. These models are gonna be trained to be just as smart without you.