Cheating humans already perform closely enough to trick such a system. Many cheaters are smart enough to use an aimbot only for a split-second to nail the flick. With a tiny bit of random offset, those inputs are indistinguishable from a high-skill player's.
These tricks may make it indistinguishable to a human moderator, but machine learning is actually really good at detecting that. But most companies don't have the expertise, resources or training data to build a proper model for it.
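Roughly, the idea is to feed per-engagement aim telemetry into a small classifier. A made-up sketch, assuming the server already logs per-tick view-angle deltas (the feature names and numbers here are purely illustrative, not from any real anti-cheat):

```python
# Hypothetical sketch: turn a player's per-tick view-angle log into
# "flick" features a classifier could consume. Numbers are illustrative only.
import numpy as np

TICK_RATE = 64  # assumed server tick rate

def flick_features(yaw_deltas_deg: np.ndarray) -> dict:
    """yaw_deltas_deg: per-tick change in view angle during one engagement."""
    speed = np.abs(yaw_deltas_deg) * TICK_RATE              # deg/sec at each tick
    peak = float(speed.max())
    peak_idx = int(speed.argmax())
    # How abruptly the turn stops after the peak: aimbots tend to settle on the
    # target almost instantly, humans overshoot and correct.
    settle = float(speed[peak_idx + 1 : peak_idx + 4].mean()) if peak_idx + 4 <= len(speed) else 0.0
    return {
        "peak_speed": peak,
        "settle_speed": settle,
        "snap_ratio": peak / (settle + 1e-6),
        # Micro-corrections after the snap: sign changes suggest human adjustment.
        "jitter": float(np.std(np.sign(yaw_deltas_deg[peak_idx:]))),
    }

# A human flick: fast, but with overshoot and correction afterwards.
human = flick_features(np.array([0.1, 0.3, 9.0, 14.0, 6.0, -1.5, 0.4, -0.2]))
# A "humanized" aimbot flick: one huge delta, then near-perfect stillness.
bot = flick_features(np.array([0.1, 0.2, 28.0, 0.3, 0.1, 0.0, 0.1, 0.0]))
print(human, bot, sep="\n")
```

The point being that the "tell" lives in the shape of the movement over time, not in any single frame.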
Machine Learning is really good at CLAIMING it detected that.
The reality is that every few months there is a story about a fairly big streamer/e-sports player MAYBE getting caught cheating on stream. Sometimes it is obvious and sometimes it really becomes "Did they just know the map well enough to expect someone to come around that corner?"
And a lot of times… it really is inconclusive. A somewhat common trope in movies is the veteran gunslinger who literally aims at the wall of a stairwell, tracks where they expect the head to be, and either fires a few rounds through the wall or waits for them at the bottom and… that is not entirely inconceivable considering that people tend to not crouch or move erratically down stairs. Obviously Jonathan Banks has a wallhack but Mike Ehrmantraut is just that damned good.
And false positives are a great way to basically kill a game. ESPECIALLY if they are associated with demonstrably false negatives too.
But you can be damned sure most of the major esports games are already doing this. It really isn't expensive to train and they have direct feeds of every player in a tournament or Twitch event. The issue is that there are (hopefully) tens of thousands of servers active at any moment and running Computer Vision+Inference on every single server is very costly.
And… I seem to recall there was a recent intentionally poorly defined Movement about maybe keeping user-hostable dedicated servers a thing? How does that mesh with having every single server need to phone home (a fraction of) all 32 players' feeds to a centralized cluster?
Machine learning doesn't necessarily require a centralized cluster. Running those kinds of models is usually pretty cheap; it's not an LLM, basically. They usually do better than human moderators as well, able to pick up on very minute "tells" these cheats have.
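To put "pretty cheap" in rough terms, here is a sketch that just measures it (scikit-learn picked purely for illustration, with synthetic features and labels; a real deployment would obviously look different):

```python
# Illustrative only: how small such a per-player model can be compared to an LLM.
# Features and labels are synthetic stand-ins for labeled engagement telemetry.
import time
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))                   # e.g. 12 engagement features per sample
y = (X[:, 0] + 0.5 * X[:, 3] > 1.5).astype(int)   # synthetic "cheater" label

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)

start = time.perf_counter()
clf.predict_proba(X[:1000])                       # score 1000 engagements in one batch
elapsed = time.perf_counter() - start
print(f"{elapsed * 1e6 / 1000:.1f} microseconds per engagement on one CPU core")
```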
I understand your point about edge cases, but that's not something the average player cares about much. E-sports is a pretty niche part of any game, especially the higher ranks. You just want to filter out the hackers shooting everyone each game that truly ruin the enjoyment. Someone cheating to rank gold instead of silver or whatever isn't ruining game experiences; they're usually detectable too, but if you get a false negative on that it's not the end of the world. A smurf account of a very highly ranked player probably has a bigger impact on players' enjoyment.
Depending on the model, inference can be run with CPU only. To distinguish what was originally proposed (a momentary flick consistent with aimbotting), you are either doing ray tracing (really expensive) or analyzing (effectively) video feeds. Both of which tend to put things more into the GPU realm which drastically increases the cost of a server.
But also? The only way these models can work is with constant data. Which means piping feeds back home for training, which is basically never inexpensive.
Aside from that: if it was as simple as you are suggesting then this would be a solved problem. Similarly, if people don't care about hackers outside of e-sports then there would be no reason for games to spend money on anti-cheat solutions when any match that matters would have heavy scrutiny. And yet, studios keep pumping out the cash for EAC and the like.
CSGO used to have Overwatch, an anti-cheat system that uses trusted and experienced players to go through video footage of reported players. With this method I reported blatant spinbotters, wall hackers, and other cheaters. I was also on the side of watching back footage of hacking players.
If an AI trains on this data, it might work.
I'm not a fan of this though, because knowledgeable and experienced players will be better than AI.
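For what it's worth, if that Overwatch footage and the reviewers' verdicts were turned into training data, the basic loop would look something like this (a hypothetical sketch: the demo parsing is stubbed out and the data here is a random placeholder standing in for real feature/verdict pairs):

```python
# Hypothetical sketch: train a classifier on features extracted from reported
# demos, using the human Overwatch verdicts as labels. Replay parsing is stubbed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def features_from_demo(demo) -> np.ndarray:
    """Stub: would parse a demo into e.g. flick stats, pre-fire rate,
    time-on-crosshair before visibility, reaction times, ..."""
    raise NotImplementedError

# Placeholder data standing in for (features, reviewer verdict) pairs.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + rng.normal(scale=0.5, size=2000) > 0.8).astype(int)  # 1 = "convicted"

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```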
What actually exists, but what I have yet to see implemented in any game I play, are those server-side "AI anti-cheat" solutions like the one from anybrain that basically just analyse the player's behavior to see whether it fits certain criteria. According to areweanticheatyet.com there are four games using it already (the most well-known one probably being Lost Ark). In theory AI models can be very efficient and accurate at this (we are not talking about transformer models here, like with the current LLM craze), but that all depends on how they train a model and what the training data looks like.
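In principle the behavioral approach boils down to something like the following (a made-up sketch, not how anybrain actually works): aggregate per-engagement model scores over a whole session and only flag accounts that stay suspicious across many fights, so one lucky flick never triggers anything on its own.

```python
# Made-up sketch of session-level aggregation: one suspicious flick shouldn't
# flag anyone; a consistent pattern of them across a whole session might.
from collections import defaultdict

REVIEW_THRESHOLD = 0.9   # illustrative cutoff on the model's per-fight score
MIN_ENGAGEMENTS = 50     # don't judge anyone on a handful of fights

session_scores: dict[str, list[float]] = defaultdict(list)

def record(player_id: str, engagement_score: float) -> None:
    """engagement_score: per-fight model output in [0, 1]."""
    session_scores[player_id].append(engagement_score)

def flag_for_review(player_id: str) -> bool:
    scores = session_scores[player_id]
    if len(scores) < MIN_ENGAGEMENTS:
        return False
    # Median over the session, so a few outliers (lucky human flicks or a
    # single mispredicted fight) don't dominate the decision.
    return sorted(scores)[len(scores) // 2] > REVIEW_THRESHOLD
```

Note this would flag accounts for review rather than auto-ban them, which is also where the training data quality that was mentioned really decides whether the whole thing is usable.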
Since human beings are hot garbage and will always cheat, I really enjoyed playing against the AI soldiers in BF. It can also ensure that the game is playable forever OFFLINE.
Where I live I can't play BF4 anymore. Servers are down for my country, but I paid money for the game. Digital media is a scam once the servers go down. That is why I jailbroke every console I own.
People are already reviving BF2 with AI bots.
The future is looking bright.
Only AI will be able to root this out in future
That's an (obviously) unpopular opinion around here but I'll give you the benefit of the doubt: how would AI be able to do that?
deleted by creator
Don't waste your time, they're either a hardcore AI bootlicker or a shit stirrer - most likely both, looking at their post history.
The same way it's auto-moderating Reddit to the point that nobody can post anything anymore LOL
Pattern recognition anticheat vs. bot based on patterns? I donโt see anyone winning.
Keep that AI horseshit out of video games, thanks.