

The Atlantic has a take on this too. Sam Altman is Losing His Grip on Humanity.


Somehow I had missed the boat on Donald Boat and now I have so many questions. Absolutely wild read.


It does seem more and more like the most relevant parallel is radicalization, particularly the concerns about algorithmic radicalization and stochastic terrorism we got back in the early 2010s. The machine system feeds the user back what they’ve put into it, validating that input and pushing the user into more extreme positions. When it happens through a community (“classical” radicalization) the fact that the community needs to persist serves to mediate or at least slow the destructive elements of the spiral. Your Nazi book club/street gang stops meeting if people go to prison, lose their jobs/homes, etc. Online communities reduce this friction and allow the spiral to accelerate to a great degree, but the group can still start eating itself if it accepts the wrong level of unhingedness and toxicity.
Algorithmic/stochastic radicalization, where the user moves through a succession of media environments and (usually online) communities, can allow things to accelerate even more because the user no longer actually has to maintain long-term social ties to remain engaged in the spiral. Rather than increasingly destructive ideas echoing around a single social space, the user can chase them across communities, with naive content algorithms providing a solid nudge in the right direction (pun wholly intended). However, the spiral is still dependent on the ability of the relevant media figures and communities to persist, even if the individual users no longer need a persistent connection to them. If the market doesn’t have space for a creator then their role in that network drops out. Getting violent or destructive content deplatformed also helps slow down the spiral by adding friction back into the process of jumping to the next level of radicalism. Past a certain point you find yourself back in the world of needing to maintain a community, because the ideology has gotten so rotten that there’s no profit in entertaining it. Past that you end up back with in-person or otherwise high-friction, high-trust groups, because the openness of a low-friction online community compromises internal security in ways that can’t be allowed when you’re literally doing crimes.
Chatbot-induced radicalization combines the extreme low friction of online interactions with extremely high-value validation and a complete lack of social restrictions. You don’t have to retain a baseline connection to reality to maintain a relationship with a chatbot. You don’t have to make connections and put in the work to find a chatbot that will validate your worst impulses the way you do to join a militia. Your central cause doesn’t have to be something that motivates anyone outside yourself. Your local KKK chapter probably has more on its agenda than hating your ex-wife (not that it doesn’t make the list, of course), but your chatbot instance will happily give you an even stronger echo chamber no matter how narrow the focus. And unlike the stigma attached to the kinds of hate groups and cults that would normally fill this role for people, the entire weight of the trillion-dollar tech industry seems to be invested in promoting these chatbots as reliable and trustworthy – even more so than the experts and institutions that are supposed to provide an anchor against this kind of descent. That’s the most dangerous part of our Very Good Friends’ projects on the matter. That’s how you get relatively normal people to act like they’re talking to God and He’s telling them everything they don’t want to admit they want to hear.


Fixed-position office chairs? What goddamn vibe-brained rat-pilled techbro started convincing people to forgo the single best redeeming factor of office chairs?


Since the advent of ChatGPT in November 2022, the number of monthly submissions to the arXiv preprint repository has risen by more than 50% and the number of articles rejected each month has risen fivefold to more than 2,400 (see ‘Rejection rates climb’).
If I’m interpreting this right then the growth in the number of rejections is wildly outpacing the growth in submissions, which means not only are we getting a tsunami of slop but that the bad papers are actively chasing away good ones.
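To sanity-check that interpretation, here is a back-of-envelope sketch in Python using hypothetical round numbers implied by the quote (a fivefold rise to ~2,400 rejections/month implies ~480/month before; the exact baseline isn't given in the article, so treat these as assumptions):

```python
# Assumed figures reverse-engineered from the quote, not from the article itself:
# rejections rose "fivefold to more than 2,400" per month, submissions rose ~50%.
rejections_after = 2400
rejections_before = rejections_after / 5   # implied pre-ChatGPT baseline: ~480/month

submissions_growth = 0.50                              # +50% monthly submissions
rejections_growth = rejections_after / rejections_before - 1   # +400% rejections

# Rejection growth outpaces submission growth by a factor of 8 under these
# assumptions, so the rejection *rate* itself has climbed, not just the volume.
print(rejections_growth / submissions_growth)
```

Even granting generous error bars on the baseline, rejections growing 400% against submissions growing 50% means the share of submissions being rejected has risen substantially, which is consistent with the "slop crowding out good papers" reading.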


diamondoid. None of the derivatives I can come up with sound anywhere near as dumb as the actual word.


In economic terms it’s less rent seeking and more rent creation. Like, taking advantage of public sidewalk space may not be a rent in the strictest sense given that the revenue model is still people paying for the service, but the ability to provide that service is absolutely predicated on taking over and monopolizing this public resource to the maximal degree possible.
By historical allegory, harkening back to the original destruction of the Commons, we’re looking at Enclosure 2: Frisco Drift.
Let’s also not lose sight of the fact that those sidewalks aren’t a natural formation, and that it’s the city government who ultimately takes on the burden of their construction and maintenance. This kind of neo-enclosure of public resources is then another kind of invisible subsidy.


“even safer” in this case means some combination of two things:
The new organization is more ideologically aligned with the transhumanist doom cult that apparently managed to eat the brains of the people with money to burn.
The new organization, largely as a result of this, is capable of sinking an unending amount of capital into buying compute time and Nvidia chips but, thanks to its commitments to safety, is even less inclined to actually deliver anything.


Microsoft is really putting the “git” in GitHub thanks to copilot.


I found the comment about models producing very old-fashioned “18th century style” proofs interesting. Not surprising in retrospect, since older proofs are going to be reproduced more often across the training data than newer ones, but it’s still worth noting, and indicative of the reproduction these things are doing.


I would go so far as to try to find a suitably precocious undergrad to run the test: someone capable of guiding and nudging the model the way OpenAI’s team did, but not of determining on their own that the conjecture in question is false. OpenAI’s results here needed a fair bit of cajoling and guidance, and without that I can only assume the model would give the same kind of non-answer regardless of whether the question is in fact solvable.


The point about heavy artillery is actually pretty salient, though a more thorough examination would also note that “Lethal Autonomous Weapons Systems” is a category that includes goddamn land mines. Of course this would serve to ground the discussion in reality and is thus far less interesting to people who start organizations like the Future of Life Institute.


I mean, I’d like to think that if someone was going to pull the paid-protestor bit they wouldn’t fail quite so embarrassingly, but then I think about our current crop of people in power and how fucking cooked they are, and I can absolutely see them thinking “what does a mass protest cost, like $500?”


The grass was literally right there, guys.


This passage from the above-linked lw post from the “stone age billionaire” guy about how counter-protestors tried to prevent them from speaking is really telling:
Then my friend Ben realized that this is a giant game of “I’m not touching you” for adults. Which is the stupidest dang thing IMO, but is pretty symmetrical. A few of us just stood close together in a line, and we moved the speeches to the other side of that line. The counter-protestors would have to walk through us to block that speech, and we just didn’t move. When they tried to go around us we shifted to be in front of them. And they couldn’t actually touch us because that was against the rules, so this worked?
I thought we all understood at this point that baiting your enemies into violence was one of the operating principles behind a lot of protests like this. Maybe not as a primary goal (unless you’re the Westboro Bastard Church trying to get ammunition for lawsuits against the host city for failing to adequately protect you from the consequences of your own actions) but historically speaking violent repression isn’t exactly a failure state for these events. One of the biggest victories for civil disobedience was putting the violent absurdity of segregation on full display by provoking massive crackdowns on people for sitting in a restaurant, for example. Making the implicit violence of injustice explicit changes the emotional valence and makes it harder for John Q Public to justify actively supporting it. If you don’t have enough mass support to implicitly threaten to do something (i.e. look at all these people who will cause problems if not recognized), then arguably being repressed is an even more significant goal, because showing that “about two-dozen kooks believe something” isn’t exactly going to mobilize social change on its own, and it’s not like billionaires care about solidarity with the hoi polloi.
But considering the absolute bafflement on display about counter-protestors being willing to rudely inconvenience them, it really feels like they understood that sometimes people who believe things will do this thing called a “protest,” where they get together and chant slogans and wave signs and have a grand old time, but they had no coherent idea of why and never really thought to ask.


Tell Luna you’ve been a bad boy/girl/etc and need to be punished and see what she does.


I was taught to take off every Zig, not install them! Clearly it was a more innocent time.


I wonder if this is going to hold out long enough for some obnoxious AI-first language to be created, designed to have as obnoxiously picky a compiler as possible in order to turn runtime errors that the model can’t cope with into compile failures, which it can silently retry until they’re ‘fixed’.
You know, it would be interesting if the “AI blog” kept illustrating his descent into madness and hallucinated that he, like, leaves his partner for “her,” etc., because that’s how these stories go, even in the hopeful case that he recovers before doing any more serious damage.