

Paywall = clickbait. If they want your money they can say anything just to get you to click.


I’ve listened to Ruth Porat speak before and nothing about this article matches how she talks. It feels fake or taken wildly out of context. As a general rule she doesn’t say much that isn’t already publicly released, and this doesn’t match any of the statements Google has released recently.
For the uninitiated, she’s the kind of person who would say “you bring up a great point” before explaining why you’re wrong. So it feels disingenuous not to include the full sentence in the quote, and not linking to the source video is sus.
Edit: Thanks, commenter 🫡 They did link the source, at least, but my other points stand.


Related PSA: Grok is the top-rated AI app in the Play Store, and we can fix that


How many people? What percentage of users?
Ew, this article is an ad for another company. I feel icky when people try to monetize my basic human rights.
Here’s a not-for-profit alternative article: https://oit.utk.edu/security/learning-library/article-archive/privacy-onboard-ai-google-gemini/


Dear Sergey,
if you object to the specific word “genocide”, then perhaps we should start calling it the “mass murder and starvation that has killed 5-10% of all Palestinian civilians”. That doesn’t sound much better, though, does it?
One of the key things that some in the “UN is anti-semitic” camp like to claim is that the UNHRC includes many nations that are non-democratic and have known human rights abuses.
But that’s somewhat cherry-picked. Yes, the nations proposing measures against Israel do seem like a biased set. But most of the Western democracies ultimately vote for those measures, with only the US rejecting them.


I know the author will probably never read this, but on the off chance they or someone else working on open source accessibility reads it:
Thank you. So much ❤️ Your hard work and dedication keep me going when times get rough.
And thanks for the rant. Nobody should have to suffer in silence, and that includes you. So any time you want to rant at us leeches about working in open source, please don’t hesitate to put us in our place.
I hope that we find a way to make the internet a more positive and celebratory place for people like you who do hard work the rest of us don’t have the time or energy for.


Just FYI, the console can be more usable than a GUI for many disabled people. Text-to-speech software relies on there being text, and mice require fine motor movements.


I’d encourage you to research this space and learn more about it.
As it is, the statement “Markov chains are still the basis of inference” doesn’t make sense, because Markov chains are a separate thing. You might be thinking of Markov decision processes, which are used in training RL agents, but that’s also unrelated, because these models are not RL agents; they’re supervised learning models. And even if they were RL agents, the MDP describes the training environment, not the model itself, so it isn’t really used for inference.
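In case it helps anyone reading along, here’s a toy sketch of what a textbook Markov chain actually looks like (the states and probabilities are made up, purely for illustration): the next state depends only on the current state, via a fixed transition table.

```python
import random

# Toy Markov chain (illustrative only): the next state depends solely on the
# current state, via a fixed transition table.
transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state):
    states, weights = zip(*transitions[state])
    return random.choices(states, weights=weights)[0]

state = "sunny"
walk = [state]
for _ in range(10):
    state = step(state)
    walk.append(state)
print(" -> ".join(walk))
```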
I mean this just as an invitation to learn more, not as pushback for raising concerns. Many in the research community would be more than happy to welcome you into it. The world needs more people who are skeptical of AI doing research in this field.


I see a lot of misunderstandings in the comments 🫤
This is a pretty important finding for researchers, and it’s not obvious by any means. It isn’t showing a problem with LLMs’ abilities in general. The issue they discovered is specific to so-called “reasoning models” that iterate on their answer before replying. It might indicate that the training process isn’t sufficient for true reasoning.
Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
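To make that concrete, here’s a rough sketch of what outcome-only rewarding looks like (the names and setup are mine, just to illustrate the idea, not any lab’s actual training code):

```python
# Illustrative sketch of an outcome-only reward: the intermediate "reasoning"
# text is ignored entirely; only the final answer is checked.
def outcome_reward(reasoning_steps, final_answer, reference):
    return 1.0 if final_answer.strip() == reference.strip() else 0.0

# A sloppy chain of thought that still lands on the right answer gets full reward.
print(outcome_reward(["2 + 2 = 5", "wait, the answer is 4"], "4", "4"))  # 1.0
```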


Beautiful! I’ll definitely give this a go


python -m http.server is still my media server of choice. It’s never let me down.
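For anyone who wants to try it, something like this does the trick (the port and directory are just examples; --directory needs Python 3.7+):

```
# serve a folder over HTTP on port 8000
python -m http.server 8000 --directory ~/Videos
```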


It only seems to make a difference when the rich ones complain.


Obvious ragebait article
I read the article, and it’s way less bad than the title made it sound. They just set company chats to disappear after some number of days and told employees to not “comment before you have all the facts.” This has been the policy of every company I’ve worked at, including university IT and Amazon.
The title made it sound like they were deleting specifically chats related to open court cases, which is like level 10 ultra-illegal.


Link to the actual paper: https://www.nature.com/articles/s41586-025-08839-w
The repro and verification will take time. Months or even years. Don’t trust anyone who says it’s definitely real or definitely bunk. Time will tell.


Woah, this is great!
A lot of these things ring true from my experience in the US government as well. There is a lot of waste from contracting and a lot of fear of the unknown.


The problem is supporting ad networks.
Edit: /s because apparently it wasn’t obvious. Anonymous is obviously better.


I don’t see why either side is trying to disown the dude. He turned himself in. That’s so brave, and the right thing to do regardless of affiliation.