

We need to work overtime to ensure the term “AI coworker” can never be uttered in earnest and will be endlessly mocked forever.
Grand Poobah of the Human Web Collective


It would still be disturbing, but somewhat understandable, if OpenAI hadn’t had any human in the loop or flagged anything as suspicious regarding Jesse Van Rootselaar’s interactions with ChatGPT, and we were only learning about those interactions after the fact.
But this is much, much worse! They knew something was afoot! They knew this person was going through some sort of crisis! And yeah, they eventually banned her account, presumably a while before the horrible event on Feb 10, but it’s mind-boggling to me that “about a dozen staffers debated whether to take action on Van Rootselaar’s posts…OpenAI leaders ultimately decided not to contact authorities.”
!!!
I would start using far stronger language at this point, but it’d all have to be bleeped out in polite company so I’ll leave that to your imagination. 🤬


Did Microslop fire every single one of their UX Designers or something?? WTF 😵‍💫


“…and is the AI superintelligence in the room with us right now?” 😆
This is fantastic!


“The End of UX”, as I’ve been calling it, continues apace. 😭


Simulating an existing (or previously existing) actual human being should be illegal. It’s quite literally identity theft.


Poppycock. People watch a movie or TV show because they want to see real people doing real-world things that emotionally move them. And even when it’s a 3D-animated character or full animation, we can feel the humanity of the artistry and the creative talent behind the artifact.
There’s actually no proof anyone wants to “watch slop.” On the contrary, every time a slop advertisement or short narrative video is posted online, it gets roasted into oblivion. The market just isn’t there.


AI doesn’t “use” reason; it uses statistics. Ascribing any sort of meaningful opinion or agency to it is scientifically inaccurate. Getting angry at AI for anything it “says” is like getting angry at your calculator or your alarm clock.


Oh good it’s not just me. Every damn time I read any of these nutty announcements for new sparkle buttons…welp it’s the Herlihy Boy. 😆


Stop making “AI can replace humans” happen. It’s not gonna happen!


You know, I don’t think people want to watch an “experiment”. They want to watch art. If your “art” looks like hot garbage, go experiment on your own time and leave the rest of us alone! 🫠


yeah, I meant the “old school” doomers who thought the AI would start to replicate and upgrade itself and turn into Skynet basically and humans would be helpless to stop it.
Now the likely doom is just Elon Musk running the planet and turning forests into data centers for 3D waifus. 🙃


That was my concern at first, wondering if they’d been turned into a wild-eyed doomer by drinking too much of the Kool-Aid on the negative side…but my own conclusion is that they sound reasonably level-headed and likely had an “Are We the Baddies?” awakening of some kind. I’d also agree AI isn’t the only major “problem” facing the world; it’s merely part of a cluster of interconnected issues, and I appreciated their acknowledgment of that.


I’m tired of arguing with you about this, and you’re still wrong. It was opt-out, not opt-in, based initially on a GitHub crawl of 137M repos and 52B files before filtering & dedup.


Apertus is most certainly trained on source code hosted on GitHub. It is laid out here in their technical report:
https://github.com/swiss-ai/apertus-tech-report
It uses a large dataset called The Stack, among others.


It is still trained on open source code on GitHub. These code communities seemingly have no way to opt out of their free (libre) contributions being used as training data, nor does the resulting code generation contribute anything back to those communities. It is a form of license stripping. That’s just one issue.
Just because your inference running locally doesn’t use much electricity doesn’t mean you’ve sidestepped all of the other ethical issues surrounding LLMs.
I think you stumbled into the wrong forum… we’re very used to drive-by “you’re holding it wrong” claims which never hold up to scrutiny.