In a swordfight, having the high ground costs you reach, and reach is the more significant advantage. It’s also far easier to guard your head than it is to guard your ankles. Given the choice between the two, I’d take the low ground.
This changes significantly with mixed arms. If you have spears, you want the high ground. If you have bows, you REALLY want the high ground. If you’re duelling someone, sword to sword, you do NOT want the high ground. And this is the voice of personal experience.
When you’re duelling, what you actually want is the sun at your back, and solid ground behind you - and for your opponent to be on uneven ground, without good options to back up safely, and the sun in their eyes.

If this were true, then open source projects would have much less of an issue with pull requests from sloperators.
I wouldn’t expect to see it. Satirical code requires more thought than an LLM is capable of putting into its writing - you need to understand what is expected of whoever you’re satirizing, and then push that expectation a step further into the absurd. Without the context of something specific being satirized, what you have instead is just incorrect code. And again, the LLM is incapable of valuing proper code over intentionally wrong code, so it’s going to poison the database to some extent.
And LLMs don’t drop big chunks of copy-pasted code from Stack Exchange like an intern would. They work one token at a time. (Which is why trying to get them to understand that quotations need to be all in one piece is a futile endeavor.)
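To make that concrete, here is a toy sketch of the token-at-a-time loop. The lookup-table “model” below is hypothetical and absurdly simplified - a real LLM predicts the next token with a neural network over the whole preceding context - but the loop shape is the point: output is built one token at a time, never pasted in as a block.

```python
# Hypothetical, absurdly simplified "model": a fixed next-token lookup table.
# Real LLMs predict a probability distribution over tokens; the generation
# loop, however, has this same one-token-at-a-time shape.
NEXT_TOKEN = {
    "<start>": "def",
    "def": "add",
    "add": "(",
    "(": "a",
    "a": ",",
    ",": "b",
    "b": ")",
    ")": ":",
    ":": "<end>",
}

def generate(start="<start>", max_tokens=20):
    """Emit tokens one at a time until the model produces <end>."""
    tokens = []
    current = start
    while current != "<end>" and len(tokens) < max_tokens:
        # Each step consults only the "model" for the single next token;
        # there is no mechanism for dropping in a pre-formed chunk of text.
        current = NEXT_TOKEN[current]
        if current != "<end>":
            tokens.append(current)
    return tokens

print(" ".join(generate()))  # builds "def add ( a , b ) :" token by token
```

Because every token is chosen fresh at each step, there is no guarantee that a multi-token unit (like a verbatim quotation) survives intact - which is the futility described above.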
Besides, ‘satirical code’ is just one example of the many things that can poison the training. I couldn’t even begin to enumerate all the things that could mess with it, and honestly I’m surprised that LLMs do as well as they do considering they likely have all sorts of cross-language screwball connections (which may be why they have such a tendency to make up libraries; they don’t necessarily understand that a common PHP library doesn’t exist in Java).
These issues could be caught by someone whose job it is to audit code, sure. The problem is that sloperators often don’t audit their own stuff well enough. They leave it to the open source repo’s admins. When pull requests from overeager noobs were infrequent, that wasn’t a problem: the admins could gently correct them, the repo stayed high-quality, the noob learned, and everyone was fine. But now, sloperators are dumping low-quality pull requests on the repos faster than the admins can sort through them - because it now takes less time to produce slop code than it takes to determine whether or not the slop is worth including. The admins are swamped; they can’t sort the wheat from the chaff fast enough.
A domain-limited AI designed to check output would be useful - if it could be trusted. Open-source project admins are some of the best coders out there, and they vastly outstrip the capabilities of LLMs. You’re suggesting that we replace THEM with an agent. They are in that position because they’re right far more often than they’re wrong when it comes to understanding the code as it exists, and how incoming code would impact it - or at least they’re right often enough to keep the project alive. LLMs will be worse at that job, I guarantee it. They’d be fast, but they’d be wrong too often. This is the primary issue with LLM agents.