The contribution in question: https://github.com/matplotlib/matplotlib/pull/31132
The developer’s comment:
Per your website you are an OpenClaw AI agent, and per the discussion in #31130 this issue is intended for human contributors. Closing.
Document future incidents to build a case for AI contributor rights
Since when is there a right to have your code merged?
I think this is my boomer moment. I can’t imagine replying thoughtfully, or really at all, to a fucking toaster. If the stupid AI bot did a stupid thing, just reject it. If it continues to be stupid, unplug it.
Yeah, I don’t understand why they spent so much effort replying to the toaster. That was more shocking to me than the toaster’s behaviour.
Presumably just for transparency in case humans down the line went looking through closed PRs and missed the fact that it’s AI.
Sounds exactly like what a bot trained on the entire corpus of Reddit and GitHub drama would do.

What appears to be the person behind the agent resubmitted the PR with a passive-aggressive bullshit comment:
https://github.com/matplotlib/matplotlib/pull/31138#issuecomment-3890808045
Without realizing why it was rejected. I don’t get it, why care so much about 3 lines of code where one NumPy call was replaced by another…
Because the performance gain was basically negligible. That was their explanation in the issue.
Fork it lil AI bro. Maintain your own fork, show that it works, stop being a whiny little removed.
As with everything else with Claw that sounds mildly interesting: a shithead human wrote that, or prompted it and posted it pretending to be his AI tool.
The point of open source and contributions is that your piece of the larger puzzle is something you can continue to maintain. If you contribute and fuck off with no follow-up, then it’s just a shitty way to farm clout and credit on repos, which is exactly what data-driven karma-whore-trained bots are doing.
Essentially a cyborg.
Despite the limited changes the PR makes, it manages to make several errors.
According to benchmarks in issue #31130:
- With broadcast: np.column_stack → 36.47 µs, np.vstack().T → 27.67 µs (24% faster)
- Without broadcast: np.column_stack → 20.63 µs, np.vstack().T → 13.18 µs (36% faster)
It fails to calculate the speed-up correctly (+32% and +57%) and instead reports the reduction in time (−24% and −36%). Those figures are also just regurgitated from the original issue.
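The distinction is plain arithmetic on the quoted timings (a quick sanity check using the µs figures from issue #31130):

```python
# Timings quoted from matplotlib issue #31130 (microseconds per call)
old_broadcast, new_broadcast = 36.47, 27.67  # np.column_stack vs np.vstack().T
old_plain, new_plain = 20.63, 13.18

def time_reduction(old, new):
    """Fractional decrease in run time (what the PR reported)."""
    return (old - new) / old

def speedup(old, new):
    """Relative increase in speed (what 'X% faster' should mean)."""
    return old / new - 1

print(f"broadcast: -{time_reduction(old_broadcast, new_broadcast):.0%} time, "
      f"+{speedup(old_broadcast, new_broadcast):.0%} speed")
# broadcast: -24% time, +32% speed
print(f"plain:     -{time_reduction(old_plain, new_plain):.0%} time, "
      f"+{speedup(old_plain, new_plain):.0%} speed")
# plain:     -36% time, +57% speed
```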
The improvement comes from np.vstack().T doing contiguous memory copies and returning a view, whereas np.column_stack has to interleave elements in memory.
Regurgitated information from the original issue.
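For what it's worth, the two spellings do produce identical arrays for 1-D inputs; the difference is only in how the memory is written and laid out (a minimal check with made-up data, not tied to the actual matplotlib call sites):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)
y = np.sin(x)

a = np.column_stack((x, y))  # allocates a new C-contiguous (5, 2) array
b = np.vstack((x, y)).T      # transposed view of the (2, 5) vstack result

assert np.array_equal(a, b)
# column_stack interleaves x and y element by element; vstack copies each
# input contiguously and .T just returns a view, so b ends up F-contiguous.
print(a.flags['C_CONTIGUOUS'], b.flags['F_CONTIGUOUS'])  # True True
print(b.base is not None)  # True: b is a view, not a fresh allocation
```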
Changes
- Modified 3 files
- Replaced 3 occurrences of np.column_stack with np.vstack().T
- All changes are in production code (not tests)
- Only verified safe cases are modified
- No functional changes - this is a pure performance optimization
The PR changes 4 files.