So, you might have heard about > or ≥. But are you ready for...
≩ U+2269 GREATER-THAN BUT NOT EQUAL TO
⋛ U+22DB GREATER-THAN EQUAL TO OR LESS-THAN (just remember that ⋛ is not equal to ⋚)
⪌ U+2A8C GREATER-THAN ABOVE DOUBLE-LINE EQUAL ABOVE LESS-THAN (compare to ⪋, ⪒, and ⪑)
And of course, let's not forget the classic ⪔ U+2A94 GREATER-THAN ABOVE SLANTED EQUAL ABOVE LESS-THAN ABOVE SLANTED EQUAL (don't mix that one up with ⪓)
My favorite fact about recycling is that 100% recycled plastic is often worthless, so instead, you add 5% of recycled material to a vat and make 10,000 shampoo bottles out of that, and then you take 500 of these bottles and label them "100% recycled".
Which is not wrong and yet terribly wrong at the same time
October is Cybersecurity Awareness Month! Please be aware of cybersecurity. If you encounter cybersecurity, DO NOT APPROACH IT. Back away slowly. Protect children and pets. Make noises to scare it away.
There are many claims that for some products, LLMs now write upward of 90% of the code.
No one publishes stats of the percentage of online articles that are written by LLMs, but I suspect the number is similar.
I'm nearly certain this is true for the average post that crops up on the internet.
For newspaper articles, I don't think it's 90%, but it's high. IIRC, I've seen obviously LLM-generated content on Newsweek, etc. And some of it isn't obvious.
@jerry Yeah, but I'd make a distinction between "hey ChatGPT, write an article about <x>" and "hey ChatGPT, can you make some style suggestions for what I've written".
Of course, there are some shades of gray in between.
Social media gave everyone a printing press, then we discovered that most people use printing presses the same way they use bathroom stalls: to write inflammatory things they'd never say to someone's face...
The more embedded programming I do, the more amazed I am that anything works at all.
Even in a single-core, single-process bare-metal environment, you obviously get the standard range of reentrancy issues. Hardware interrupts - including nested interrupts - can fire at any time.
There are some big "huh" moments around stuff like DMA. If you request a DMA transfer, it's handled by a separate piece of silicon and your CPU doesn't necessarily know what transpired. The main memory is updated by the DMA controller, but the CPU's cache is not. The CPU might end up with a stale cache even if you haven't touched that memory region before: after all, it does speculative prefetching! Worse, if you did touch the region, the CPU may flush old pending writes after the DMA transfer, clobbering your data.
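The post doesn't include code, but the DMA pitfall can be sketched roughly like this. The cache-op names below are host-side stand-ins of my own; real firmware would use platform calls (e.g. CMSIS's SCB_CleanDCache_by_Addr / SCB_InvalidateDCache_by_Addr on a Cortex-M7):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Host-side stand-ins for platform cache maintenance; they do nothing here
   so the sketch stays compilable outside of actual firmware. */
static void cache_clean(volatile void *buf, size_t len)      { (void)buf; (void)len; }
static void cache_invalidate(volatile void *buf, size_t len) { (void)buf; (void)len; }

static volatile uint8_t rx_buf[64];

/* Stand-in for a DMA controller filling the buffer behind the CPU's back. */
static void dma_start_read(volatile uint8_t *buf, size_t len) {
    memset((void *)buf, 0xAB, len);
}

void dma_read_safely(void) {
    /* 1. Flush any pending CPU writes to the buffer region first, so a
          later cache-line eviction can't clobber the freshly DMA'd data. */
    cache_clean(rx_buf, sizeof rx_buf);

    dma_start_read(rx_buf, sizeof rx_buf);
    /* ... wait for the DMA-complete flag or interrupt ... */

    /* 2. Discard any stale cache lines - including speculatively
          prefetched ones - before the CPU reads the buffer. */
    cache_invalidate(rx_buf, sizeof rx_buf);
}
```

The ordering is the whole point: clean before the transfer, invalidate after it, or the CPU may read (or write back) data that no longer matches main memory.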
But sometimes, the problems are simpler. I had a program that ran at full clock speed when compiled with -O3, but at something like 1 MHz when not. Yep, "huh".
There was a function that allowed clock speed to be toggled for power save purposes, roughly this:
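The snippet itself didn't survive into this post; here is a minimal reconstruction. Register names follow AVR Dx-series conventions and 0xD8 is the documented CCP unlock signature, but the registers are replaced with plain variables so the sketch compiles on a hosted system:

```c
#include <stdint.h>

/* On the real part, CCP and OSCHFCTRLA are memory-mapped, write-protected
   registers; plain variables stand in for them here. */
static volatile uint8_t CCP;
static volatile uint8_t OSCHFCTRLA;

#define CCP_IOREG 0xD8  /* unlock signature for protected I/O registers */

/* Toggle clock speed for power saving. The write to OSCHFCTRLA must land
   within four CPU cycles of the CCP unlock. With -O3 and a compile-time
   constant argument, this inlines to essentially two instructions; without
   it, the call overhead plus any branch computing the value can silently
   blow the four-cycle budget. */
static inline void set_clock_speed(uint8_t freq_sel) {
    CCP = CCP_IOREG;        /* step 1: unlock protected clock registers */
    OSCHFCTRLA = freq_sel;  /* step 2: select oscillator frequency      */
}
```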
This was a two-step task: unlock clock operations by writing to a memory-mapped register called CCP, then manipulate OSCHFCTRLA to set clock speed. Easy enough.
Fine print not included in online examples: the sequence must take at most four CPU cycles. For reasons.
With -O3, the code was simply inlined, the parameter always being a compile-time constant. So, essentially just two insns.
Without -O3? Well, you had a conditional to calculate the value to write to OSCHFCTRLA based on the function parameter. Boom.
I like bug bounties and I put a fair amount of effort into bootstrapping the one at Google back in the day, but I think the problem runs deeper than AI.
First, most people who make a living doing bug bounties don't go after $10,000+ bugs. Very few researchers can crank out top-notch finds month after month. A much better strategy on these platforms is to go after low-hanging fruit and rake in $500 to $1,000 bugs every day.
Companies respond accordingly! Because most of the traffic is low-value vulns from less skilled researchers, you don't want to throw your best analysts at it. It's increasingly common to outsource triage and bug-filing for bug bounty programs.
But if the person doing the triage isn't highly paid and familiar with the systems in question, there is a strong incentive to err on the side of caution. If you incorrectly close something serious as a non-issue, you risk the researcher making a stink. Conversely, if the triager files a non-issue with the product team, they'll probably fix it anyway, and the only cost is some wasted time.
The result is that in most programs, there's no penalty for slop. And researchers exploit this with spray-and-pray tactics. Why not?
I think one issue here is platforms that make it easy to window-shop for bug bounty programs. They have plenty of advantages, but they create a race to the bottom because the least diligent vendors set the bar for participation for all.
A slightly unhinged calculator fact: in the golden era of electronic calculators, some Japanese shopkeepers were reluctant to trust the newfangled tool, so Sharp made a line of combination calculator / abacus devices.
How to tell that you're valued as a customer in 2025: if you need to wait 45 minutes to be connected to a representative, you know they're not using an LLM
So, if you're an old person, you probably remember the Crown Sterling "Time AI" presentation at RSA. It happened six years ago and made people swear "never again" to the conference - before going back the next year and getting upset about the next thing.
Anyway - I was putting some finishing touches on the article about complex numbers and came across this relevant paper (attached pics).
And while I don't want to dox the author, I did look up their bio: the author "has focused on cryptography and data security. He founded Crown Sterling, a company that has developed innovative quantum-resistant encryption methods based on Quasi-Prime numbers"
So, I think we're officially a non-infosec publication, baby!
Makes me wonder - are there other relatively basic introductory articles that folks are interested in? I can write about math, chemistry, computers. I'm not the smartest person out there, but I have a keyboard and a blog
Here's the thing. When a browser, an app, or a website encourages you to turn on an awesome feature, it's almost always because a lawyer who understands the feature said "whoa, we're gonna be in real trouble if we don't have consent".
I think that Wikipedia is one of the best things that happened on the internet, but it isn't above criticism. Even if it's coming from your political enemies.
There are three issues at play. First, the Wikimedia Foundation. Their finances and activities are really hard to defend. They have too much money and they're spending it on sketchy stuff. The best argument for donating is that the excesses are OK if it keeps the server bills paid. But man...
The second issue is the Wikipedia editing culture. The editing standards work in >99% of all cases, but on some hot-button issues, the encyclopedia does have an editorial stance and weighs the sources to get the desired outcome. It's OK, but they need to own it. An editorial stance is not a crime.
The third issue is lone crackpots. In some niches, you have an editor with novel theories about UFOs, a beef with a minor celebrity, or a passion for Marxist apologetics. It sucks. The community could be doing better, but a big part of the problem is just that there aren't enough people watching and pushing back.
@dave On the second point, to pick a timely example: consider "gravity is a thing" vs "the Gulf of Mexico should still be called the Gulf of Mexico".
In both cases, the usual Wikipedia response is that the encyclopedia has no opinion and is just reporting on the consensus. But the latter is an editorial policy decision, justified post-hoc by adding a section that defers to the all-important International Hydrographic Organization as the naming authority.
In contrast, Encyclopedia Britannica just posted a note saying "we're not renaming it because our audience wouldn't like it and we don't think the US has the authority". I think that's a better approach.
The thing about social media is that it's easy to get a lot of followers if you post enough memes, but the engagement is stretched so thin that it no longer means anything.
Your privacy is very important to us. This is why we're sharing your data with our 278 advertising partners, and our partners' 4,728 partners, and their partners' 87,392 partners, UNDER THE FOLLOWING TERMS
So I'm not a computer person, but it feels like there should be a way to design Mastodon so that if you post a link here, it doesn't get fetched SEVEN THOUSAND TIMES
C pro trick! Let's say you have code like this, and you want to comment out a block of code - let's say, lines 5 to 7 - with some nested comments. A pain, right?
I keep coming back to this, but hug your content creator today.
The internet has a bystander problem. We discover insightful content on the web, we assume the author already received the spoils - and we move on.
To offer a personal anecdote, I'm the author of afl-fuzz. It's been used by tens of thousands of folks - for hobby, for work, to elevate academic careers. I fielded hundreds of bug reports and feature requests - and perhaps two or three personal "thank you" notes.
Today, I'm running lcamtuf.substack.com. Some articles get 50k+ views. It works the same way: the folks keen to point out errors or post contrarian takes on HN far outnumber the thank-you notes.
I'm not fishing for compliments for myself. It's just that, the next time you come across a useful OSS project or an interesting blog, drop the author a note. No one else does.
"Meta's Threads Now Has More Daily US Users Than Musk's X"
Users: "we did it, Patrick! We saved the city!"
You traded a smallish platform run by an alt-right memelord for a platform operated by a privacy-hostile conglomerate that controls how 3.5 billion people get their news.
Whatever people think ails the internet, Facebook is far more to blame than Twitter ever was.
Twitter is and was a pretty niche platform, with DAU comparable to Quora and LinkedIn. It's just that a bunch of journos were really addicted to it.