Reminds me of a generator for Irish knots that I wrote decades ago in METAFONT. It produced woven band segments with "random" faults that a TeX macro could then use to put a frame around the page at shipout time.
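The shipout trick described above can be sketched in modern LaTeX. This is only an illustration, not the original code: the comment's version used plain TeX and METAFONT-generated band segments, while the sketch below assumes the shipout hooks of the LaTeX 2020-10-01 kernel and substitutes a plain TikZ rectangle for the woven bands.

```latex
% Minimal sketch: draw a frame on every page at shipout time.
% Assumes a LaTeX kernel from 2020-10-01 or later (shipout hooks);
% the original comment used plain TeX macros and METAFONT output.
\documentclass{article}
\usepackage{tikz} % stand-in for the METAFONT-generated band segments

% The shipout/background hook places material behind the page content,
% with the origin at the top-left corner of the paper.
\AddToHook{shipout/background}{%
  \begin{tikzpicture}[overlay]
    \draw[line width=2pt]
      (1cm,-1cm) rectangle (\paperwidth-1cm,-\paperheight+1cm);
  \end{tikzpicture}%
}

\begin{document}
Page content goes here; the frame is added automatically at shipout.
\end{document}
```

A real Celtic-knot frame would replace the `\draw` with the pre-generated band segments, selecting faulty segments "at random" per page, as the comment describes.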
“It’s impossible to convey emotions through text, and this helps the reader understand your intent,”
No, it's not. It may be impossible for you, because you're an uneducated spawn who hasn't read more than half a book in your life, and that only because some English teacher once demanded it.
The article is actually well written, so you really should give it a go, but the answer it gives is, in short: "no".
Slightly longer:
...
But the new fonts—and the odd assortment of paraphernalia that came before them—assume that dyslexia is a visual problem rooted in imprecise letter recognition. That’s a myth, explains Joanne Pierson, a speech-language pathologist at the University of Michigan. “Contrary to popular belief, the core problem in dyslexia is not reversing letters (although it can be an indicator),” she writes. The difficulty lies in identifying the discrete units of sound that make up words and “matching those individual sounds to the letters and combinations of letters in order to read and spell.”
In other words, dyslexia is a language-based processing difference, not a vision problem, despite the popular and enduring misconceptions. “Even when carefully explained, soundly discredited, or decisively dispatched, these and similar dyslexia myths and their vision-based suppositions seem to rise from the dead—like the villain-who-just-won’t-die trope in a B movie,” the International Dyslexia Association forcefully asserts.
...
I just don't like it. If we're going to mandate a specific font on government products, make it an accessible one, not just an aesthetic one. Like, one that's designed for heightened legibility. Further, don't make it a platform-dependent font. They didn't pick Helvetica, for instance.
Which brings us back to the fact that you need a human with expertise/experience: someone who can recognize that the LLM isn't performing as well as a human creator, just (potentially) faster and more recklessly, and who can fix the issues with the output to make it actually usable/functional. That just feels like correcting someone else's homework for them while they get the credit for the result.
Every rational person understands that tools like LLMs are useful for certain specialists and professionals to do a B+ job of some mundane work. They do not replace that expertise. Tragically, for many, a B+ that includes a few things that are just untrue, and that is sometimes actually a D+, is fine as long as it means you can avoid paying someone.
I don't know if this is relevant, but after reading the article, I asked Grok (yes, I know, xAI, but I find it better to experiment on) and it also hallucinated until it was forced to search and correct itself. So, basically, an AI that doesn't have constant access to the internet will hallucinate answers to any question, and it should not be trusted if there are no sources you can check to see whether it's saying anything coherent.
Of course, people who have used and know about AI are already aware of this; I just find the article's technical explanation interesting and wanted to share my own interpretation.
I always assumed it was a ligature of English “at” and knew nothing of its long European history, predating English itself. Its relation to “at” is apparently coincidental, and that coincidence is probably why it ended up on everyone’s typewriters and later keyboards.
“&” came from Latin, from a ligature of “et” (“and”). At various times & places, it was considered a letter of the Latin & English alphabets. See also Et cetera.