Dunno how long they had LLMs before making them public, but for several years now their updates have been a gamble over what gets fixed and what gets broken. If these issues predate LLMs, the joke seems unfair.
The research has been carried out in public for decades. There was a bit of silliness with access to GPT 2 being restricted to researchers for a while but it wasn’t hard to play with.
…how do you break localhost? I’m honestly impressed - in the worst way.
Well, after seeing the consequences, no one can doubt that at least 30% of their new code is AI. Hope that makes them happy, knowing they keep running their products and their company into the ground.
They have been for decades, so I guess they must be happy.