• surewhynotlem@lemmy.world · 12 days ago

    I think this is my boomer moment. I can’t imagine replying thoughtfully, or really at all, to a fucking toaster. If the stupid AI bot did a stupid thing, just reject it. If it continues to be stupid, unplug it.

    • Avid Amoeba@lemmy.ca · 12 days ago

      Yeah, I don’t understand why they spent so much effort replying to the toaster. That was more shocking to me than the toaster’s behaviour.

      • Zangoose@lemmy.world · 12 days ago

        Presumably just for transparency, in case humans down the line go looking through closed PRs and miss the fact that it’s AI.

    • adeoxymus@lemmy.world · 12 days ago

      Without realizing why it was rejected. I don’t get it: why care so much about 3 lines of code where one np call was replaced by another…

      • hperrin@lemmy.ca · 12 days ago

        Because the performance gain was basically negligible. That was their explanation in the issue.

  • slacktoid@lemmy.ml · 12 days ago

    Fork it, lil AI bro. Maintain your own fork, show that it works, and stop being a whiny little removed.

  • Deestan@lemmy.world · 12 days ago

    As with everything else around Claw that sounds mildly interesting: a shithead human wrote it, or prompted it and posted it pretending it was his AI tool.

  • itsathursday@lemmy.world · edited · 12 days ago

    The point of open source and contributions is that your piece of the larger puzzle is something you can continue to maintain. If you contribute and then fuck off with no follow-up, it’s just a shitty way to farm clout and credit on repos, which is exactly what these data-driven karma-whore bots are doing.

  • nimble@programming.dev · 9 days ago

    Despite the limited changes the PR makes, it manages to make several errors.

    According to benchmarks in issue #31130:

    • With broadcast: np.column_stack → 36.47 µs, np.vstack().T → 27.67 µs (24% faster)
    • Without broadcast: np.column_stack → 20.63 µs, np.vstack().T → 13.18 µs (36% faster)

    It fails to calculate the speed-up correctly (which would be +32% and +57%); instead it reports the reduction in time (24% and 36%) as if it were the speed-up. Those figures are also just regurgitated from the original issue.
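
    For the record, here is the arithmetic, as my own sketch using the figures quoted above from issue #31130:

```python
# Speed-up vs. time reduction for the benchmark figures quoted above.
pairs = {
    "with broadcast": (36.47, 27.67),      # µs: np.column_stack vs np.vstack().T
    "without broadcast": (20.63, 13.18),
}

for label, (old, new) in pairs.items():
    reduction = old / new - 1 and (old - new) / old  # fraction of time saved (what the PR reports)
    speedup = old / new - 1                          # fractional throughput gain (the real "X% faster")
    print(f"{label}: time reduction {reduction:.0%}, speed-up {speedup:.0%}")
```

    This prints a 24%/36% reduction in time but a 32%/57% speed-up, which is why the PR’s “24% faster” and “36% faster” claims are wrong.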

    The improvement comes from np.vstack().T doing contiguous memory copies and returning a view, whereas np.column_stack has to interleave elements in memory.

    Regurgitated information from the original issue.
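
    A minimal illustration of that layout difference, as my own sketch rather than code from the PR:

```python
import numpy as np

a = np.arange(5.0)
b = np.arange(5.0, 10.0)

cs = np.column_stack((a, b))  # shape (5, 2): a and b interleaved in memory
vt = np.vstack((a, b)).T      # same values; .T is a view over the (2, 5) stack

assert np.array_equal(cs, vt)        # identical results
assert cs.flags["C_CONTIGUOUS"]      # column_stack builds an interleaved buffer
assert not vt.flags["C_CONTIGUOUS"]  # the transpose is a non-copying view...
assert vt.base is not None           # ...backed by the contiguous vstack result
```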

    Changes

    • Modified 3 files
    • Replaced 3 occurrences of np.column_stack with np.vstack().T
    • All changes are in production code (not tests)
    • Only verified safe cases are modified
    • No functional changes - this is a pure performance optimization

    The PR changes 4 files.