Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
Mastodon post linking to the least shocking Ars lede I have seen in a bit. Apparently “reasoning” and “chain of thought” functionality might have been entirely marketing fluff? :shocked pikachu:
Wait, but if they lied about that… what else do they lie about?
I’m still thinking about the article about the NRx party from last week and just how classless (who pours champagne wrong?) and sad it showed them to be, while still being obsessed with their image. Such a sad bunch, their ideas have reached the higher ups of American power and still they obsess about how a journalist (who is dating one of them (he is into the ‘we live in a simulation’, break up with him, you are in danger)) might write something bad about them. (See also how many of these sad sacks got fired/blackballed for just having no internal filter (dressing up gay people as the KKK really?)). The creme de la creme of intellectual thought and they talk and act like a bunch of 4channers. (Yarvin must know this, his shit about how billionaires act must be a bit of projection). I’m talking about this piece: https://archive.ph/gm3Za Sorry to repost it, I just had a ‘layer 2 well done’ reminder and cringed again, fucking larpers (No shade to people who actually larp, seems fun, just cringe to do it irl).
The beautiful process of dialectics has taken place on the butterfly site, and we have reached a breakthrough in moral philosophy. Only a few more questions remain before we can finally declare ethics a solved problem. The most important among them is, when an omnipotent and omnibenevolent basilisk simulates Roko Mijic getting kicked in the nuts eternally by a girl with blue hair and piercings, would the girl be barefoot or wearing heavy, steel-toed boots? Which kind of footwear or lack thereof would optimize the utility generated?
The last conundrum of our time: of course steel capped work boots would hurt more but barefoot would allow faster (and therefore more) kicks.
You have not taken the lessons of the philosopher Piccolo to mind. You should wear even heavier boots in your day to day. Why do you think goths wear those huge heavy boots? For looks?
Her kick’s so fast they call it the “quad laser”
And thus I was enlightened
Forwarding this discussion to here:
News from r/philosophy: OOP, Richard Y Chappell, posted an article containing image slop, landing them a 3-day ban. OOP writes a new article that DESTROYS the r/philosophy moderation policy on AI generated content with FACTS and LOGIC. For added flavour, OOP is an EA. OP is an SSCer. Both are participants in the thread.
Guess either term hasn’t started, or his gig as phil prof is some sort of right-wing sinecure. Dude has a lot of time on his hands.
FWIW I’d say banning a poster for including a slop image in a 3rd party article is a bit harsh, but what would Reddit be without arbitrary draconian rules? A normal person would note this, accept the 3 day ban, and maybe avoid the sub in future or avoid including slop. The fact he flew off the handle this much is very very funny though.
Forget being exposed to the elements to build character. People should be randomly temp banned and use that to build/judge character. (Also a good judge of the power balance in a community, if the mod team can temp ban a power poster that predates the mod team, say lesswrong giving Yud a timeout).
power poster that predates the mod team
Does Yud predate for food or sport?
he predates for iq points
every time i read what he writes i feel dumber from the contact
Fullsquares Basilisk. Either yud stops writing, or we commit to building a basilisk which forces 2^^^2 copies of fullsquare to read more Yud. And this does more damage than a dustspeck.
Sorry did I use the wrong word there? I sometimes mess up stuff like that due to not being a native speaker, and being bad at spelling/grammar in general.
No predate was fine. I was joking and using it as in “predator”
Ah now I get it. Lol, yes. Would be amusing if they banned Yud from lw/ssc events like they did other predators. (And those are just the bans we know of.)
I’m reminded of the comedy/gaming stream that I watch that opens every episode with banning a random member of chat based on a spin of the wheel. It certainly lends the community a certain flavor, even if it is more “jingly keys” rather than “strong community.”
Now I’m wondering, were the people of the stream itself also included? Because that is what I mean. The people with the power shouldn’t be excluded from the moderation or get special privileges. (See what Twitter did to protect Trump’s account, and where we are now.)
The on-camera duo are exempt for obvious reasons, but they’ve definitely hit at least one of their mods. Before The Wheel was implemented I seem to remember they even specifically targeted them sometimes for the joke.
Having one of the duo just step out would also be amusing. Esp if they go full chaos and roll twice.
Not sure myself, but the mods are probably either excluded from being banned by The Wheel™, or unbanned immediately afterwards, just to keep things running smoothly.
Weak mods create bad times! Bust a deal, face the wheel!
Iris van Rooij found AI slop in the wild (determining it as such by how it mangled a word’s definition) and went on to find multiple other cases. She’s written a blog post about this, titled “AI slop and the destruction of knowledge”.
choice quote from Elsevier’s response:
Q. Have authors consented to these hyperlinks in their scientific articles? Yes, it is included on the signed agreement between the author and Elsevier.
Q. If I were to publish my work with Elsevier, do I risk that hyperlinks to AI summaries will be added to my papers without my consent? Yes, because you will need to sign an agreement with Elsevier.
consent, everyone!
UK Asks People to Delete Emails In Order to Save Water During Drought
The part about data centers using too much water apparently comes down to old emails.
It gets worse, as the advisory doesn’t even mention that the emails/pictures to delete are in the cloud, so the people who are likely to listen to this kind of advice are also the people who are the least likely to understand why this is a bad idea, and will delete their local stuff. (And that is ignoring that opening your email/gallery to delete stuff costs more than keeping it in storage where it isn’t accessed.)
"HOW TO SAVE WATER AT HOME
- Install a rain butt [hehehe] to collect rainwater to use in the garden.
… [other advice removed] - Delete old emails and pictures as data centres require vast amounts of water to cool their systems."
- Install a rain butt [hehehe] to collect rainwater to use in the garden.
lol, lmao: as if any cloud service had any intention at all of actually deleting data instead of tombstoning it for arbitrary lengths of time. (And that’s the least stupid factor in this whole scheme; is this satire? Nobody seems to be able to tell me)
Every email you don’t delete is another dead fish, or another pasture unwatered. That promotional offer sent to your inbox that you ignored but did not dispose of means creeks will run dry. That evite for a party thrown by an acquaintance you don’t particularly like that you did not drop into the trash means a marathon runner will go thirsty as the nectar of life so required is absent, consumed instead by the result of your inbox neglect.
Looks like the bologna engine generated some baloney.
I have not tried it yet, but apparently there is an open source alternative for github called https://codeberg.org/. Might be useful.
It’ll probably earn a lot of users if and when Github goes down the shitter. They’ve publicly stood with marginalised users before, so they’re already in my good books.
It’ll probably earn a lot of users if and when Github goes down the shitter.
I’d argue GH is well on its way, probably jumped around the time Hacktoberfest morphed into a DDoS on maintainers. Or maybe more recently, when they handed people’s repos (and API keys lol) over to Copilot. Or maybe earlier, when they started calling their users “maintainers” instead of “developers”. Sometime in the last 6 years though.
There have been a number of contenders over the years - gitlab, gitea - but none of them have been able to brand/market well enough to really impact GH or to compete with the subsidized free storage and Actions credits, plus switching costs. Even Atlassian / BB is largely irrelevant.
names for genai people I know of so far: promptfans, promptfondlers, sloppers, autoplagues, and botlickers
any others out there?
cogsuckers
clanker
edit: this may be used to refer to the chatbots themselves, rather than those who fondle chatbots
clanker wanker
Ice cream head of artificial intelligence
promptfarmers, for the “researchers” trying to grow bigger and bigger models.
/r/singularity redditors that have gotten fed up with Sam Altman’s bs often use Scam Altman.
I’ve seen some name calling using drug analogies: model pushers, prompt pushers, just one more training run bro (for the researchers); just one more prompt (for the users), etc.
New Blood in the Machine about GPT-5’s dumpster fire launch: GPT-5 is a joke. Will it matter?
Not a sneer, but I recently saw Ari K’s AI generated video of Trump in his golden ballroom. It’s quite good, here is the channel: https://m.youtube.com/@AriKuschnir
Looking at his other videos, he is a talented storyteller. Most videos are about two minutes, with numerous short shots of a few seconds and a voice-over or music connecting the shots. So presumably he generates the shots, splices them together and puts the soundtrack over it. Most of the short stories are dreamlike. To the extent they have characters, it’s famous people (getting their comeuppance), so even though they look a bit different in each shot, it’s easy to keep track.
I think it’s interesting because by doing what can be done with the tools, it illustrates the limitations. In the hands of a good story teller you essentially get an illustration for a short radio play (and the radio play needs to be recorded separately, and you can’t show actors talking). Because of the bubble and investor bux, it can right now be done on a shoe string budget.
But that’s all! Are illustrated radio plays replacing feature films? No, so this remains a niche use case. And once the investor bux dries up, potentially an expensive one. Not something to build a billion dollar industry on.
Where are you getting “talented storyteller” from? The whole thing is some heavy-handed ham-fisted fever dream that I would expect from some liberal engagement farm. And “illustrated radio play?” What are you even on about.
The video looks like garbage and the rapid cuts are severely grating. The construction is lackluster and the content is garbage. It appears you have brought us a piece of the internet to throw into the garbage bin.
What I’m on about? I think the English term is “damning with faint praise”. If this is the best that can be done, which I am arguing, there isn’t much use to it.
The latest one is an outlier, in that it doesn’t have a voice-over, so it isn’t a radio play. Most of the other ones I have seen have a voice track that tells the story. They are also more dreamlike, which matches the prediction of what kind of story can be told from one of the comment threads here (from one of the pivot videos about VEO).
The latest one (and the only one to have gone viral) is actually interesting in that he is trying to tell a visual story, but with the medium he has chosen he can’t really have a novel character as protagonist, or dialogue, which is why it is limited to a very simple story.
I’m interested in why it is so limited, because I think that tells a lot of the limitations of the technology as such.
Yeah. I think there’s definitely something interesting here, but it’s mostly in how badly compromised the final product ends up being in order to support the AI tools.
How much energy was used to produce that video?
I’m sorry, I believe that there are legitimate artistic uses of neural networks (and they’re never about cutting budget), but this is just fascist aesthetics repurposed to serve anti-Trump messaging. Do not like.
Ozy Brennan tries to explain why “rationalism” spawns so many cults.
One of the reasons they give is “a dangerous sense of grandiosity”.
the actual process of saving the world is not very glamorous. It involves filling out paperwork, making small tweaks to code, running A/B tests on Twitter posts.
Yep, you heard it right. Shitposting and inconsequential code are the proper way to save the world.
But doctor, I am L7 twitter manager Pagliacci
oldskool OSI appmanager is oldskool
(…sorry)
I’m gonna need this one explained, boss
(in networking it’s common terminology to refer to “Lx” by numerical reference, and broadly understood to be in reference to this)
Aaaaa gotcha. It’s probably obvious but in my case I meant L7 manager as in “level 7 manager”, a high tier managerial position at twitter, probably. I don’t know what exact tiering system twitter uses but I know other companies might use “Lx” to designate a level.
I figured, but I couldn’t just let a terrible pun slip me by!
Overall more interesting than I expected. On the Leverage Research cult:
Routine tasks, such as deciding whose turn it was to pick up the groceries, required working around other people’s beliefs in demons, magic, and other paranormal phenomena. Eventually these beliefs collided with preexisting social conflict, and Leverage broke apart into factions that fought with each other internally through occult rituals.
They sure don’t make rationalism like they used to.
JFC
Agency and taking ideas seriously aren’t bad. Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work and only hypochondriacs worried about covid; rationalists were some of the first people to warn about the threat of artificial intelligence.
First off, anyone not entirely into MAGA/Qanon agreed that masks probably helped more than hurt. Saying rats were outliers is ludicrous.
Second, rats don’t take real threats of GenAI seriously - infosphere pollution, surveillance, autopropaganda - they just care about the magical future Sky Robot.
Unfortunately, in the spring of 2020, the CDC was discouraging people from wearing masks, and was saying masking would do more harm than good:
U.S. health authorities had discouraged healthy Americans from wearing facial coverings for weeks, saying they were likely to do more harm than good in the fight against the coronavirus — but now, as researchers have learned more about how the highly contagious virus spreads, officials have changed their recommendations.
U.S. health authorities have long maintained that face masks should be reserved only for medical professionals and patients suffering from COVID-19, the deadly disease caused by the coronavirus. The CDC had based this recommendation on the fact that such coverings offer little protection for wearers, and the need to conserve the country’s alarmingly sparse supplies of personal protective equipment.
I pretty clearly remember the mainstream media and various liberal talking heads telling people not to mask up back then - mostly because the US was completely unprepared for a pandemic, and they thought they had to discourage people from buying masks to make sure hospitals would have enough.
Meanwhile, the right-wing prepper types were breaking out the N95 masks they’d stockpiled for a pandemic, warning each other COVID was much more contagious and lethal than the government wanted to admit, passing around conspiracy theories about millions of deaths in China covered up by the CCP, and patting themselves on the back for stockpiling masks before the government took them off the shelf.
Then some analyst told Trump that letting COVID spread unchecked would hurt blue states worse than red states, so he had Fox News start anti-masking talking points, and all those conservative foot soldiers put away their masks and became super spreaders for Jesus.
But yeah. During that period from like January to March 2020, the political division around COVID was basically the opposite of what it became, and I can easily believe some “rationalists” were calling bullshit on the CDC suddenly telling people not to buy masks.
That’s how I remember it too. Also the context about conserving N95 masks always feels like it gets lost. Like, predictably so and I think there’s definitely room to criticize the CDC’s messaging and handling there, but the actual facts here aren’t as absurd as the current fight would imply. The argument was:
- With the small droplet size, most basic fabric masks offer very limited protection, if any.
- The masks that are effective, like N95 masks, are only available in very limited quantities.
- If everyone panic-buys N95 the way they did toilet paper it will mean that the people who are least able to avoid exposure i.e. doctors and medical frontliners are at best going to wildly overpay and at worst won’t be able to keep supplied.
- Therefore, most people shouldn’t worry about masking at this stage, and focus on other measures like social distancing and staying the fuck home.
I think later research cast some doubt on point 1, but 2-4 are still pretty solid given the circumstances that we (collectively) found ourselves in.
Meanwhile, the right-wing prepper types were breaking out the N95 masks they’d stockpiled for a pandemic
This included Scott ssc btw. Who also claimed that stopping smoking helped against covid. Not that he had any proof (the medical science at the time even falsely (it came out later) claimed smoking helped against covid). But only the CDC gets judged, not the ingroup.
And other Scott blamed people who sneer for making covid worse. (While at sneerclub we were going, take this seriously and wear a mask).
So annoying that Rationalists are trying to spin this into a win for themselves. (They also were not early; their warnings matched the warnings of the WHO. I looked into the timelines last time this was talked about.)
“The common people pray for anime memes, healthy vtubers, and a wikipedia article that never ends,” Ser Jorah told her. “It is no matter to them if the high lords play their game of tweets, so long as they are left in peace.” He gave a shrug. "They never are.”
- George R. R. Martin
Tante fires off about web search:
There used to be this deal between Google (and other search engines) and the Web: You get to index our stuff, show ads next to them but you link our work. AI Overview and Perplexity and all these systems cancel that deal.
And maybe - for a while - search will also need to die a bit? Make the whole web uncrawlable. Refuse any bots. As an act of resistance to the tech sector as a whole.
On a personal sidenote, part of me suspects webrings and web directories will see a boost in popularity in the coming years - with web search in the shitter and AI crawlers being a major threat, they’re likely your safest and most reliable method of bringing human traffic to your personal site/blog.
I’ve often called slop “signal-shaped noise”. I think the damage already done by slop pissed all over the reservoirs of knowledge, art and culture is irreversible and long-lasting. This is the only thing generative “AI” is good at, making spam that’s hard to detect.
It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email; no more and no less. I remember how it was a small revolution, in the arms race against spammers, when statistical methods came up; everywhere we took the load off a straining SpamAssassin with rspamd (in the years before gmail devoured us all). I would argue “A Plan for Spam” launched Paul Graham’s notoriety, much more than the Lisp web stores he was so proud of. Filtering emails by keywords was no longer enough, and now you could train your computer to gradually recognise emails that looked off, for whatever definition of “off” worked for your specific inbox.
Now we have the richest people building the most expensive, energy-intensive superclusters to use the same statistical methods the other way around, to generate spam that looks like not-spam, and is therefore immune to all strategies we developed. That same blob-like malleability of spam filters makes the new spam generators able to fit their output to whatever niche they want to pollute; the noise can be shaped like any signal.
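For anyone who never poked at one, the Graham-style statistical filter being inverted here is simple enough to sketch in a few lines. This is a toy naive-Bayes scorer over made-up two-message corpora, not anything production-grade:

```python
import math
from collections import Counter

# Toy "A Plan for Spam"-style filter: score a message by how much
# each word shifts the log-odds toward the spam corpus.
spam_docs = ["buy cheap pills now", "cheap offer buy now"]
ham_docs = ["meeting notes attached", "lunch tomorrow maybe"]

spam_words = Counter(w for d in spam_docs for w in d.split())
ham_words = Counter(w for d in ham_docs for w in d.split())
spam_total = sum(spam_words.values())
ham_total = sum(ham_words.values())

def spam_log_odds(message: str) -> float:
    """Sum of per-word log-odds, with add-one smoothing for unseen words."""
    score = 0.0
    for w in message.split():
        p_spam = (spam_words[w] + 1) / (spam_total + 2)
        p_ham = (ham_words[w] + 1) / (ham_total + 2)
        score += math.log(p_spam / p_ham)
    return score

print(spam_log_odds("buy cheap pills"))   # positive: leans spam
print(spam_log_odds("meeting tomorrow"))  # negative: leans ham
```

The inversion described above amounts to running this idea in reverse: instead of scoring text against the corpus statistics, you generate text until it scores like ham.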
I wonder what PG is saying about gen-“AI” these days? let’s check:
“AI is the exact opposite of a solution in search of a problem,” he wrote on X. “It’s the solution to far more problems than its developers even knew existed … AI is turning out to be the missing piece in a large number of important, almost-completed puzzles.”
He shared no examples, but […] Who would have thought that A Plan for Spam was, all along, a plan for spam.
It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email.
This is a really good observation, and while I had lowkey noticed it (one of those feeling things), I never verbalized it in any way. Good point imho. Also in how it bypasses and wrecks the old anti-spam protections. It represents a fundamental flipping of sides by the tech industry: while before it was anti-spam, it is now pro-spam. A big betrayal of consumers/users/humanity.
Signal-shaped noise reminds me of a Wiener filter.
Aside: when I took my signals processing course, the professor kept drawing diagrams that were eerily phallic. Those were the most memorable parts of the course
Not a sneer but a question: Do we have any good idea on what the actual cost of running AI video generators are? They’re among the worst internet polluters out there, in my opinion, and I’d love it if they’re too expensive to use post-bubble but I’m worried they’re cheaper than you’d think.
I know like half the facts I would need to estimate it… if you know the GPU VRAM required for the video generation, and how long it takes, then assuming no latency, you could get a ballpark number looking at Nvidia GPU specs on power usage. For instance, if a short clip of video generation needs 90 GB VRAM, then maybe they are using an RTX 6000 Pro… https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/ , take the amount of time it takes in off hours which shouldn’t have a queue time… and you can guesstimate a number of watt-hours. Like if it takes 20 minutes to generate, then at 300-600 watts of power usage that would be 100-200 watt-hours. I can find an estimate of $.33 per kWh (https://www.energysage.com/local-data/electricity-cost/ca/san-francisco-county/san-francisco/ ), so it would only be costing $.03 to $.06.
IDK how much GPU-time you actually need though, I’m just wildly guessing. Like if they use many server grade GPUs in parallel, that would multiply the cost up even if it only takes them minutes per video generation.
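To make the guesswork above easy to poke at, here is the same back-of-envelope math as a few lines of Python. Every input (20 minutes, 300-600 W, $0.33/kWh) is an assumption carried over from the comment, not a measured figure:

```python
# Back-of-envelope electricity cost per generated clip.
# All inputs are guesses from the discussion above.
minutes = 20                # assumed generation time per clip
power_watts = (300, 600)    # assumed GPU draw range, in watts
price_per_kwh = 0.33        # cited San Francisco electricity rate

for watts in power_watts:
    kwh = watts * minutes / 60 / 1000   # watt-minutes -> kWh
    print(f"{watts} W for {minutes} min = {kwh:.2f} kWh -> ${kwh * price_per_kwh:.3f}")
```

Which reproduces the $.03 to $.06 range: 0.10-0.20 kWh per clip at the assumed draw. Swap in different wattages or a per-clip GPU count to see how quickly the number moves.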
This does leave out the fixed cost (amortized per video generated) of training the model itself, right? Which pro-genAI people would say you only have to pay once, but we know everything online gets scraped repeatedly now, so there will be constant retraining. (I am mixing video with text here, so lots of big unknowns.)
If they got a lot of usage out of a model this constant cost would contribute little to the cost of each model in the long run… but considering they currently replace/retrain models every 6 months to 1 year, yeah this cost should be factored in as well.
Also, training compute grows quadratically with model size, because it is the product of training data (which grows linearly with model size) and the model size itself.
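That quadratic claim can be sanity-checked with the common C ≈ 6·N·D FLOPs rule of thumb: if training tokens D are kept proportional to parameter count N (Chinchilla-style compute-optimal training uses roughly D ≈ 20·N), compute grows as N². A minimal sketch, where both the 6ND rule and the 20-tokens-per-parameter ratio are stated assumptions, not measurements:

```python
# Rule of thumb: training compute C ~ 6 * N * D FLOPs, with tokens D ∝ N,
# so C grows quadratically in parameter count N.
def train_flops(n_params: float, tokens_per_param: float = 20.0) -> float:
    d_tokens = tokens_per_param * n_params   # D grows linearly with N
    return 6.0 * n_params * d_tokens         # C = 6 * N * D ∝ N^2

small = train_flops(1e9)    # a 1B-parameter model
big = train_flops(1e10)     # 10x the parameters...
print(big / small)          # ...roughly 100x the training compute
```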
Well that’s certainly depressing. Having to come to terms with living post-gen AI even after the bubble bursts isn’t going to be easy.
Keep in mind I was wildly guessing with a lot of numbers… like I’m sure 90 GB VRAM is enough for decent quality pictures generated in minutes, but I think you need a lot more compute to generate video at a reasonable speed? I wouldn’t be surprised if my estimate is off by a few orders of magnitude. $.30 is probably enough that people can’t spam lazily generated images, and a true cost of $3.00 would keep it in the range of people who genuinely want/need the slop… but yeah, I don’t think it is all going cleanly away once the bubble pops or fizzles.