I’m a data analyst and the primary authority on the data model of a particular source system. Most questions about figures from that system that can’t be answered directly and easily in the frontend end up with me.
I had a manager show me how some new LLM they were developing (to which I had contributed some information about the data model) could quickly answer some questions that I usually have to answer manually, as part of a pitch to make me switch to his department so I could apply my expertise to improving this fancy AI instead of answering questions manually.
He entered a prompt and got a figure that I knew wasn’t correct, so I queried my data model for the same info and got a significantly different answer. Given how much said manager had leaned on my expertise in the first place, he couldn’t very well challenge my results, and he got all sheepish about how the AI was still in development and all.
I don’t know how that model arrived at that figure. I don’t know if it generated and ran a query against the data I’d provided. I don’t know if it just invented the number. I don’t know how the devs would figure out the error and how to fix it. But I do know how to explain my own queries, how to investigate errors and (usually) how to find a solution.
Anyone who relies on a random text generator - no matter how complex the generation method that makes it sound human - to generate facts is dangerously inept.
I don’t know how the devs would figure out the error and how to fix it.
This is like the biggest factor that people don’t get when thinking of these models in the context of software. “Oh, it got it wrong, but the developers will fix it in an update.” Nope. They can fix traditional software mistakes, but not LLM output and machine learning things. They can throw more training data at it (which sometimes just changes what it gets wrong) and hope for the best; they can do a better job of curating the context window to give the model the best shot at outputting the right stuff (e.g. the guy who got Opus to generate a slow, crappy, buggy compiler had to traditionally write a filter to find and show only the ‘relevant’ compiler output back to the model); they can try to generate code to do what you want and have you review the code and correct issues. But debugging and fixing the model itself… that’s just not a thing at all.
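To make that context-curation point concrete, here’s a minimal, hypothetical sketch of the kind of filter described there (the function name and regex are my own assumptions, not the actual tool from that compiler experiment). The wrapper is ordinary, deterministic code that decides what the model gets to see - that part you can debug; the model itself you cannot.

```python
import re

def filter_compiler_output(raw_output: str, max_lines: int = 40) -> str:
    """Keep only the compiler lines likely to matter (errors, warnings,
    missing-symbol complaints) before they are pasted back into the
    model's context window."""
    relevant = []
    for line in raw_output.splitlines():
        if re.search(r"\b(error|warning|undefined|expected)\b", line, re.IGNORECASE):
            relevant.append(line.strip())
    # Cap the size so the context window isn't flooded with noise.
    return "\n".join(relevant[:max_lines])

if __name__ == "__main__":
    sample = (
        "gcc -c main.c\n"
        "main.c:12:5: warning: unused variable 'tmp'\n"
        "main.c:27:1: error: expected ';' before 'return'\n"
        "note: candidate functions listed below\n"
    )
    print(filter_compiler_output(sample))
```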
I was in a meeting where a sales executive was bragging about the ‘AI sales agent’ they were working on, but admitting frustration with the developers and a bit confused about why the software developers weren’t making progress when those same developers had always made decent progress before - and they should be able to do this even faster because they have AI tools to help them… It eternally seemed to be in a state that almost worked but not quite, no matter what model or iteration they went to, no matter how much budget they allocated; when it came down to the specific facts and figures it would always screw up.
I cannot understand how these executives can wade in the LLM pool for so long and still believe in capabilities beyond what anyone has experienced.
I cannot understand how these executives can wade in the LLM pool for so long and still believe in capabilities beyond what anyone has experienced.
They leave the actual work to the boots on the ground so they don’t see how shitty the output is. They listen to marketing about how great it is and mandate everyone use it and then any feedback is filtered through all the brownnosers that report to them.
It eternally seemed to be in a state that almost worked but not quite, no matter what model or iteration they went to, no matter how much budget they allocated; when it came down to the specific facts and figures it would always screw up.
This is probably the biggest misunderstanding since “Project Managers think three developers can produce a baby in three months”: just throw more time and money at AI model “development” for better results. It supposes predictable, deterministic behaviour that can be corrected, but LLMs aren’t deterministic by design, since that wouldn’t sound human anymore.
Sure, when you’re a developer dedicated to advancing the underlying technology, you may actually produce better results in time, but if you’re just the consumer, you may get a quick turnaround for an alright result (and for some purposes, “alright” may be enough) but eventually you’ll plateau at the limitations of the model.
Of course, executives universally seem to struggle with the concept of upper limits, such as sustainable growth or productivity.
Apparently that reddit post itself was generated with AI. Using AI to bash AI is an interesting flex.
How did people find out it was AI generated? Seems natural to me. Scary.
Have any evidence of that? The only thing I saw was commenters in that thread (who were obvious AI-bros) claiming it must be AI generated because “it just wouldn’t happen”…
I-want-to-believe.jpg
I guarantee you this is how several, if not most, Fortune 500 companies currently operate. The 50k DOW is not just propped up by the circlejerk spending on imaginary RAM. There are bullshit reports being generated and presented every day.
I patiently wait. There is a diligent bureaucrat sitting somewhere going through fiscal reports line by line. It won’t add up… receipts will be requested… bubble goes pop
When you delegate, to a person, a tool or a process, you check the result. You make sure that the delegated tasks get done correctly and that the results are what is expected.
Finding out only by luck, after months, that this was not the case shows incompetence. Look for the incompetent.
Yeah. Trust is also a thing, like if you delegate to a person that you’ve seen getting the job done multiple times before, you won’t check as closely.
But this person asked to verify and was told not to. Insane.
100%
Hallucinations are widely known, this is a collective failure of the whole chain of leadership.
Problem being that whoever is checking the result in this case had to do the work anyway, and in such a case… why bother with an LLM that can’t be trusted to pull the data anyway?
I suppose they could take the facts and figures that a human pulled and have an LLM verbose it up for people who for whatever reason want needlessly verbose BS. Or maybe an LLM can do a review of the human generated report to help identify potential awkward writing or inconsistencies. But delegating work that you have to do anyway to double check the work seems pointless.
Like someone here said, “trust is also a thing”. Once you check a few times that the process is right and the results are right, you don’t need to check more than occasionally. Unfortunately, that’s not what happened in this story.
Leopard meets face.
Tbf, at this point the corporate economy is made up anyway, so as long as investors are gambling their endless generational wealth, does it matter?
This is how I’m starting to see it too. Stock market is just the gambling statistics of the ownership class. Line goes down and we’re supposed to pretend it’s harder to grow food and build houses all of a sudden.
There’s a difference. If I go and gamble away my life savings, then I’m on the street. If they gamble away their investments, the government will say ‘poor thing’ and give them money to keep the economy ok.
Ah yes, what a surprise. The random word generator gave you random numbers that aren’t actually real.
Surely this is just fraud, right? Seeing as they have a board of directors, they probably have shareholders? I feel they should at least all get fired, if not prosecuted. This lack of competency is just criminal to me.
Are you suggesting we hold people responsible?
Ask Bernie Madoff. Scamming rich people is the one and only instance where even rich people are held accountable.
In the current world, probably the one going to jail is the one reporting it. So I don’t expect much no.
My broseph in Christ, what did you think an LLM was?
Bro, just give us a few trillion dollars, bro. I swear bro. It’ll be AGI this time next year, bro. We’re so close, bro. I just need some money, bro. Some money and some god-damned faith, bro.
User: Hi big corp AI(LLM), do this task
Big Corp AI: Here is output
User: Hi big corp your AI’s output is not up to standard I guess it’s a waste of…
Big Corp: use this agent which ensures correct output (for more energy)
User: it still doesn’t work…guess I was wrong all along let me retry…
And the loop continues until they get a few trillion dollars
You can make something AI-based that does this, but it’s not cheap or easy. You have to make agents that handle data retrieval and programmatically make the LLM choose the right agent. We set one up at work; it took months. If it can’t find the data with high certainty, it tells you to ask the analytics dept.
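For anyone curious what that looks like in practice, here is a very stripped-down sketch of the general shape (all names and the keyword-overlap scoring are placeholders of my own, not the actual system described above): deterministic retrieval agents, a routing step, and a refusal path when confidence is too low.

```python
import re
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Agent:
    name: str
    keywords: set[str]           # crude routing signal, just for this sketch
    fetch: Callable[[str], str]  # deterministic data retrieval, not the LLM

def route(question: str, agents: list[Agent], min_score: float = 0.5) -> str:
    """Pick the agent whose keywords best match the question.
    If no agent clears the confidence bar, refuse and point to a human."""
    words = set(re.findall(r"\w+", question.lower()))
    best: Optional[Agent] = None
    best_score = 0.0
    for agent in agents:
        score = len(words & agent.keywords) / max(len(agent.keywords), 1)
        if score > best_score:
            best, best_score = agent, score
    if best is None or best_score < min_score:
        return "Can't find this data with high certainty - please ask the analytics dept."
    return best.fetch(question)

# Toy usage; real fetch functions would query the actual source systems.
agents = [
    Agent("sales",     {"sales", "revenue", "orders"},  lambda q: "sales figures: ..."),
    Agent("inventory", {"stock", "inventory", "parts"}, lambda q: "inventory counts: ..."),
]
print(route("What were last month's sales orders?", agents))   # routed to the sales agent
print(route("Summarise our hiring pipeline", agents))          # falls back to the analytics dept
```

The reliable parts are the boring deterministic ones wrapped around the model, which is also why it takes months rather than an afternoon.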
Large Lying Model?
To everyone I’ve talked to about AI, I’ve suggested a test. Take a subject that they know they are an expert at. Then ask AI questions that they already know the answers to. See what percentage AI gets right, if any. Often they find that plausible-sounding answers are produced; however, if you know the subject, you know that it isn’t quite fact that is produced. A recovery from an injury might be listed as 3 weeks when it is on average 6-8, or similar. Someone who did not already know the correct information could be damaged by the “guessed” response of AI. AI can have uses, but it needs to be heavily scrutinized before passing on anything it generates. If you are good at something, that usually means you have to waste time in order to use AI.
I had a very simple script. All it does is trigger an action on a monthly schedule.
I passed the script to Copilot to review.
It caught some typos. It also said the logic of the script was flawed and it wouldn’t work as intended.
I didn’t need it to check the logic of the script. I knew the logic was sound because it was a port of a script I was already using. I asked because I was curious about what it would say.
After restating the prompt several times, I was able to get it to confirm that the logic was not flawed, but the process did not inspire any confidence in Copilot’s abilities.
Happy cake day, and this absolutely. I figured out its game the first time I asked it for a spec for an automotive project I was working on. I asked it the torque specs for some head bolts and it gave me the wrong answer. But not just the wrong number, the wrong procedure altogether. Modern engines have torque-to-yield specs, meaning essentially you torque them to a number and then add additional rotation to permanently distort the threads to lock it in. This car was absolutely not that, and when I explained back to it the error it had made IT DID IT AGAIN. It sounded very plausible, but someone following those directions would have likely ruined the engine.
So, yeah, test it and see how dumb it really is.
Do the same to any person online, most blogs by experts, or journalists.
Even apparently easy to find data, like the specs of a car. Sucking and lying is not exclusive to LLMs.
Literally nobody suggested it was.
It was implicit in the test suggestion
This is why I hate search engines promoting AI results when you are researching something. They confidently give incorrect responses. I asked one LLM for its sources while using DuckDuckGo, and it just told me that there were no sources and the information was based on broad knowledge. At one point I challenged the AI that it was wrong, but it insisted it wasn’t. It turned out it was citing a years-old source written by a different bot long ago. But on the other hand, most of you are probably familiar with the occasions where the AI is incorrect, you challenge it, and it relents - though it will relent just as sycophantically even when you yourself are actually the one who is incorrect. This is Schrödinger’s AI.
I’ve said it time and time again: AIs aren’t trained to produce correct answers, but seemingly correct answers. That’s an important distinction and exactly what makes AIs so dangerous to use. You will typically ask the AI about something you yourself are not an expert on, so you can’t easily verify the answer. But it seems plausible so you assume it to be correct.
Thankfully, AI is bad at maths for exactly this reason. You don’t have to be an expert on a very specific topic to be able to verify a proof and - spoiler alert - most of the proofs ChatGPT 5 has given me are plain incorrect, despite OpenSlop’s claims that it is vastly superior to previous models.
I’ve been through the cycle of the AI companies repeatedly saying “now it’s perfect”, only to admit it’s complete trash when they release the next iteration and claim “yeah, it was broken, we admit, but now it’s perfect”, so many times now…
Problem being, there’s a massive marketing effort to gaslight everyone, so if I point it out in any vaguely significant context, I’m just not keeping up and have only dealt with the shitty ChatGPT 5.1, not the more perfect 5.2. Of course, in my company they are all about the Anthropic models, so it is instead Opus 4.5 versus 4.6 now. Even proving the limitations by trying to work with 4.6 gives Anthropic money, and at best I earn an “oh, those are probably going to be fixed in 4.7 or 5 or whatever”.
Outsiders are used to traditional software that has mistakes, but those are straightforward to address, so close-but-imperfect software can hit the mark in updates. LLMs not working that way doesn’t make sense to them. They use the same version number scheme, after all, so expectations should be similar.
most of the proofs ChatGPT 5 has given me are plain incorrect, despite OpenSlop’s claims that it is vastly superior to previous models
Both of those can be true.
I mean yeah, but they specifically mentioned its amazing performance in tasks requiring reasoning
My own advice for people starting to use AI is to use it for things you know very well. Using it for things you do not know well will always be problematic.
The problem is that we’ve had a culture of people who don’t know things very well control the purse strings relevant to those things.
So we have executives who don’t know their work or customers at all and just try to bullshit while their people frantically try to repair the damage the executive does to preserve their jobs. Then they see bullshit generating platforms and see a kindred spirit, and set a goal of replacing those dumb employees with a more “executive” like entity that also can generate reports and code directly. No talking back, no explaining that the request needs clarification, that the data doesn’t support their decision, just a “yes, and…” result agreeing with whatever dumbass request they thought would be correct and simple.
Finally, no one talking back to them and making their life difficult and casting doubt on their competency. With the biggest billionaires telling them this is the right way to go, as long as they keep sending money their way.
The problem is that we’ve had a culture of people who don’t know things very well control the purse strings relevant to those things.
I mean that has been the case for a long time, AI may enhance the effect of it, but human stupidity is nothing new.
So we have executives who don’t know their work or customers at all and just try to bullshit while their people frantically try to repair the damage the executive does to preserve their jobs. Then they see bullshit generating platforms and see a kindred spirit, and set a goal of replacing those dumb employees with a more “executive” like entity that also can generate reports and code directly. No talking back, no explaining that the request needs clarification, that the data doesn’t support their decision, just a “yes, and…” result agreeing with whatever dumbass request they thought would be correct and simple.
Once again, yes men have also been a historic phenomenon, and yes AI might speed this up, but it is nothing new per se.
AI is a tool, not a perfect one (heck, most of the time barely functional), but it is a tool, and in order to use it you need to understand what it can do and what it can’t do.
I think if you’re aware of the environmental impact, learn how to use it responsibly and avoid many of its pitfalls, together with a critical mindset, it can be usable for some cases.
It can be useful, sure, and yes, the myopic, self-centered lying executive is nothing new, but there are big groups now thinking they can remove whatever semblance of a check on executive decisions might be there.
And on top of that, the people who don’t know things very well generated lots of the material the LLMs were trained on in the first place.
Can’t really blame the models for realizing much of human knowledge is bullshit and acting accordingly.
The problem is, every time you use it, you become more passive. More passive means less alert to problems.
Look at all the accidents involving “safety attendants” in self-driving cars. Every minute they let AI take the wheel, they become more complacent. Maaaybe I’ll sneak a peek at my phone. Well, haven’t gotten into an accident in a month, I’ll watch a video. In the corner of my vision. Hah, that was good, gotta leave a commen — BANG!
AIs aren’t trained to produce correct answers, but seemingly correct answers
I prefer to say “algorithmically common” instead of “seemingly correct” but otherwise agree with you.
I use “mathematical approximations of correct answers”
But that’s wrong. It’s not trained on correct answers. It’s trained on whatever happens to be out there in the world.
It’s mathematical approximations of words that are likely to be found near that question.
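A deliberately silly toy along those lines (nothing like a real model, purely to illustrate the point): the “answer” is whatever continuation was most common near the prompt in the training text, with no notion of whether it happens to be true.

```python
from collections import Counter

# Tiny pretend "training corpus". The wrong figure appears more often than
# the right one - frequency, not truth, is what gets picked up.
corpus = [
    "the torque spec is 90 nm",
    "the torque spec is 90 nm",
    "the torque spec is 90 nm",
    "the torque spec is 65 nm",
]

def most_likely_continuation(prompt: str) -> str:
    """Return the most frequent continuation of the prompt in the corpus."""
    continuations = Counter(
        line[len(prompt):].strip()
        for line in corpus
        if line.startswith(prompt)
    )
    answer, _count = continuations.most_common(1)[0]
    return answer

print(most_likely_continuation("the torque spec is"))  # "90 nm" - popular, not verified
```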
Even worse is that, over time, the seemingly correct answers will drift further away from actually correct answers. In the best case, it’s because people expect the wrong answers, as that’s all they’ve been exposed to. Worse cases would be the answers skewing toward whatever specific end the AI maker wants people to believe.
They are designed to convince people. That’s all they do. True, or false, real or fake, doesn’t matter, as long as it’s convincing. They’re like the ultimate, idealized sociopath and con artist. We are being conned by a software designed to con people.
I use it to summarize stuff sometimes, and I honestly spend almost as much time checking it’s accurate as I would have spent if I had just read and summarized it myself.
It is useful for ‘What does this contain?’ so I can see if I need to read something. Or rewording something I have made a pig’s ear out of.
I wouldn’t trust it for anything important.
The most important thing to do if you do use AI is to not ask leading questions. Keep them simple and direct
It is useful for ‘What does this contain?’ so I can see if I need to read something. Or rewording something I have made a pig’s ear out of.
Skimming and scanning texts is a skill that achieves the same goal more quickly than using an unreliable bullshit generator.
Depending on the material, the LLM can be faster. I have used an LLM to extract viable search terms to then go and read the material myself.
I never trust the summary, but it frequently gives me clues as to what keywords could take me to the right area of a source material: Internet articles that stretch brief content into a tedious mess, documentation that is 99% something I already know but where I need something buried in the 1%.
Was searching for a certain type of utility and traditional Internet searches were flooded with shitware that wasn’t meeting the criteria I wanted; the LLM successfully zeroed in on just the perfect GitHub project.
Then, as a reminder to never trust the results, I queried how to make it do a certain thing and it mentioned a command option with a dumb-sounding name that would have done the opposite of what I asked for if it had worked - and not only would it have been the opposite, no such option even existed.
Lol. Your advice: learn to read, noob
My work is technically dense and I read all day. It’s sometimes nice when I’m mentally exhausted to see if it’s worth the effort to dig deeper in a 10 second upload. That’s all I’m getting at.
I just got around to watching some of the ads that the big AI companies aired during the Super Bowl. Each time I was thinking, “wow, if this is true, this person is an idiot and is in for a world of trouble”.
Like, there was one where a young farmer was supposedly taking over the family farm from her grandfather or something. She said something like “I uploaded all our data to ChatGPT and now I do what it tells me to do.” If that’s the case, wow. That farm is going to fail.
Another one was some guy who ran some kind of a machinist’s shop, and was claiming that the bookkeeping and inventory control the shop used was really old fashioned. So, he had ChatGPT create him a whole bunch of new part numbers to make online ordering easier. Again, wow. You’re trusting this key part of your business to a machine that just randomly makes stuff up?
Plausible confabulation machines.
What they can do is generate code that is totally deterministic, and then defer to those results. The fact these people aren’t doing that just makes them negligent.
I raised this as a concern at the corporate role I work in when an AI tool that was being distributed and encouraged for usage showed two hallucinated data points that were cited in a large group setting. I happened to know my area well, the data was not just marginally wrong but way off, and I was able to quickly check the figures. I corrected it in the room after verifying on my laptop and the reaction in the room was sort of a harmless whoops. The rest of the presentation continued without a seeming acknowledgement that the rest of the figures should be checked.
When I approached the head of the team that constructed the tool after the meeting and shared the inaccuracies and my concerns, he told me that he’d rather have more data fluency through the ease of the tool and that inaccuracies were acceptable because of the convenience and widespread usage.
I suspect stories like this are happening across my industry. Meanwhile, the company put out a press release about our AI efforts (literally using Gemini’s Gem tool and custom ChatGPTs seeded with Google Drive) as something investors should be very excited about.
When I approached the head of the team that constructed the tool after the meeting and shared the inaccuracies and my concerns, he told me that he’d rather have more data fluency through the ease of the tool and that inaccuracies were acceptable because of the convenience and widespread usage.
“I prefer more data that’s completely made up over less data that is actually accurate.”
This tells you everything you need to know about your company’s marketing and data analysis department and the whole corporate leadership.
Potemkin leadership.
Honestly this is not a new problem and is a further expression of the larger problem.
“Leadership” becomes removed from the day-to-day operations that run the organization, and by nature the “cream” that rises tends to be sycophantic. Our internal biases at work, so it’s no fault of the individual.
Humanity is their own worst enemy lol
It is not a new problem and that has been the case for a long time. But it’s a good visualization of it.
Everyone in a company has their own goals, from the lowly actual worker who just wants to pay the bills and spend as little effort on it as possible, to departments which want to justify their useless existence, to leadership who mainly wants to look good towards the investors to get a nice bonus.
That some companies end up actually making products that ship and that people want to use is more of an unintended side effect than the intended purpose of anyone’s work.
That makes no sense. The inaccuracies are even less acceptable with widespread use!
You’re thinking like a person who values accurate information more than feeling some kind of ‘cool’ and ‘trendy’, because now you can vibe code and we are a forward-thinking company that embraces new paradigms and synergizes our expectations with the potential reality our market-disrupting innovations could bring.
… sorry, I lapsed back into corpo / had a stroke.
🤫
It’s technological astrology. We’re doomed.
You need to know the words to properly wake the machine spirit
The board room is more concerned with the presentation than the data, because presentations make sales.
What a lot of people fail to understand is that for the C-Suite, the product isn’t what’s being manufactured, or the service being sold. The product is the stock, and anything that makes the number go up in the short term is good.
Lots of them have fiduciary duties, meaning they’re legally prohibited from doing anything that doesn’t maximize the value of the stock from moment to moment.
Someone please show me the criminal lawsuit against the CEO who made the moral decision and the stock went down! I’m so sick of the term fiduciary duty being used as a bullshit shield for bad behavior. When Tesla stock tanked because Musk threw a Nazi salute, where were the fiduciary duty people!?
But that’s over false or misleading statements. I’m not saying you can lie, just that you don’t have to throw orphans in the meat grinder.
Further, as you hinted, the long term is not their problem. They get a bump, cash in a few million dollars’ worth of RSUs, and either saddle the next guy with the fallout or, if they haven’t left yet, go “whoopsie, but I can blame the LLM and I was just following best practices in the industry at the time”. Either way they have enough to not even pretend to work another day of their life, even ignoring previous grifts, and they’ll go on and do the same thing to some other company when they bail or the company falls over.
At the moment, nothing will be done. There’s no way the current SEC chair will give a fuck about this sort of stuff.
But assuming a competent chair ever gets in charge, I expect there to be a shit show of lawsuits. It really doesn’t matter that “the LLM did it”; lying on those mandatory reports can lead to big fines.
Lots of them have fiduciary duties, meaning they’re legally prohibited from doing anything that doesn’t maximize the value of the stock from moment to moment.
Overall, I agree with you that stock price is their motivation, but the notion of shareholder supremacy binding their hands and preventing them from doing things that they want to otherwise do is incorrect. For one, they aren’t actually mandated to do this by law, and secondarily, even if they were – which to reiterate, they aren’t – just about any action they take on any single issue can be portrayed as them attempting to maximize company value.
https://pluralistic.net/2024/09/18/falsifiability/#figleaves-not-rubrics
No, not illegal, but they can be sued by shareholders for failing to maximize value.
Sure, but since it’s an unfalsifiable proposition, good luck proving it in court for any specific action.
Apparently, it does happen: https://tempusfugitlaw.com/real-life-breach-of-fiduciary-duty-case-examples-outcomes/
Particularly of note is the decision around AA’s ESG investments.
I think this is mixing things up a bit. At least some of the cases there were fraud based.
Not really, no. This is mostly a myth. Unless the executives are deliberately causing the company to lose money, they really can’t be sued based on this fiduciary duty to shareholders. They have to act in the shareholders’ best interest, but “shareholder interest” is entirely up to interpretation. For example, it’s perfectly fine to say, “we’re going to lose money over the next five years because we believe it will ensure maximum profits over the long term.” In order to sue a CEO for failing to protect shareholders, they would have to be doing something deliberately and undeniably against shareholder interest. Like if they embezzle money into their own bank account, or if they hold a Joker-style literal money burning.
If it were that easy to sue executives for violating their fiduciary duty to shareholders, golden parachutes and inflated executive compensation packages wouldn’t exist. But good luck suing a CEO because he’s paid too much. He can just claim in court that his compensation will ensure the company attracts the best talent to perform the best they can.
Executives are given wide latitude in how they define the best financial interest of shareholders. Shareholders ultimately do have the ability to remove executives from their positions. This is supposed to be the default way of dealing with incompetent executives. As shareholders already possess the ability to fire a CEO at any time, there is a very high bar to clear before shareholders can also sue executives. It’s generally assumed if they really are doing that bad a job, you should just fire them.
Yes, that’s correct, it’s not an issue of legal liability, it’s an issue of their interests converging. the CEO holds stock, he is a stock holder, and the execs are stock holders, they don’t need any motivation to put the stocks first, they know where their interests converge, and precious few executives make more money in salary than they take away in stocks, in practicality, every corporation that offers stocks is stock focused, this is why we had daily and weekly meetings in retail stores on the absolute bottom of the ladder to talk primarily about stock prices, and why the main information displayed on price guns is the sale price/cost/current quarter sales/last quarter sales/and last year to date quarter sales, and the sales numbers daily/monthly/quarterly are what you see posted around the office, it’s always about beating last year to date numbers, and last quarter’s numbers, and what always drove me fucking nuts is that the store made TWENTY TWO MILLION FUCKING DOLLARS in profit, but “your store is failing because you didn’t make twenty two million and one penny.” they don’t care that we were making money hand over fist, because that’s not the game. that game is dead. don’t worry. they’ll still cut payroll, and you can’t like… spend that money or keep that money, but it doesn’t matter. it only matters if it makes the stocks move. it’s stocks all the way down. because that’s where the interests converge. also as a side note, golden parachutes are an internal security measure against hostile take over, it means if someone does successfully raid your business and performs a hostile takeover, they have to pay your executive staff when they fire them and loot the company more money than the company could be looted for. it’s never actually intended to be paid out.
My Brother in Christ. Paragraphs. Periods. Capitalization.
it’s why capitalism is over. they do not care about making a profit at all. they only care about the stocks. there is only one outcome to this approach, and that’s dissolving the company slowly until it fails, because you’re willing to saw your legs off for a small spike in quarterly earnings. You eventually run out of legs to saw off.
Coming from science to industry taught me one thing: numbers (and rationality as a whole) serve only one goal. And the goal is to persuade the opponents: colleagues, investors, regulators.
In this broken sense, your head of the team is right: hallucinations are acceptable if supervisors believe the output.
Sounds like the people who are realistic about AI are going to end up having a huge advantage over people who use it naively.
Like with statistics, there are a lot of tools out there that can handle them perfectly accurately, you just don’t want an LLM doing “analysis” because the NN isn’t encoded for that. Consider how often our own NNs get addicted to gambling while not being fully specialized for processing language. An LLM might not get caught up in a gambler’s fallacy, but that’s more on account of being too simple than being smarter.
I wonder if this will break the trust in MBAs because LLMs are deceptively incompetent and from the sound of this comment and other things I’ve seen, that deception works well enough that their ego around being involved in the tool’s development clashes with the experts telling them it’s not as useful as it seems.
You should have asked what would happen if the figures were wrong, let them make an excuse, and then let them eat shit later. AI is taking our jobs. Never interrupt an enemy making a mistake.
I work in a regulated sector and our higher-ups are pushing AI so much. And their response to AI hallucinations is to just put a banner on all internal AI tools to cross-verify and have some quarterly stupid “trainings”, but almost everyone I know never checks and verifies the output. And I know of at least 2 instances where, because AI hallucinated some numbers, we sent out extra money to a third party.
If they have to verify the results every time, what is the point?
have some quarterly stupid “trainings”
Feeling this in my bones. An executive just sent out a plan for ‘fixing’ the fact that the AI tools they are paying for us to use are getting roasted for sucking: they are giving the vendor more money to provide 200 hours of mandatory training for us to take. That’s more training than they have required for anything before, and using LLM tools isn’t exactly a difficult problem.
Self solving problem!
My workplace (finance company) bought out an investments company for a steal because they were having legal troubles, managed to pin it on a few individuals, then fired the individuals under scrutiny.
Our leadership thought the income and amount of assets they controlled was worth the risk.
This new group has been the biggest pain in the ass. Complete refusal to actually fold into the company culture, standards, even IT coverage. Kept trying to sidestep even basic stuff like returning old laptops after upgrades.
When I was still tech support, I had two particularly fun interactions with them. One was when it was discovered that one of their top earners got fired for shady shit, then they discovered a month later that he had set his mailbox to autoreply to every email pointing his former clients to his personal email. Then, they hired back this guy and he lasted a whole day before they caught him trying to steal as much private company info as he could grab. The other incident was when I got a call from this poor intern they hired, then dumped the responsibility for this awful home grown mess of Microsoft Access, Excel, and Word docs all linked over ODBC on this kid. Our side of IT refused to support it and kept asking them to meet with project management and our internal developers to get it brought up into this century. They refused to let us help them.
In the back half of last year, our circus of an Infosec Department finally locked down access to unapproved LLMs and AI tools. Officially we had been restricted to one specific one by written policy, signed by all employees, for over a year but it took someone getting caught by their coworker putting private info into a free public chatbot for them to enforce it.
Guess what sub-company is hundreds of thousands of dollars into a shadow IT project that has gone through literally none of the proper channels to start using an explicitly disallowed LLM to process private customer data?
My last job was with a very large west coast tech giant (its name is a homonym with an equally-large food services company). The mandatory information security training was a series of animated shorts featuring talking bears which you could fast-forward through and still get credit for completing. Not surprisingly, we had major data thefts every few months – or more accurately we admitted to major data thefts that often.
It reminds me of when the internet exploded in the 90s and everyone “needed” a website. Even my corner gas station had a web presence for some reason. Then with smartphones everyone needed their own app. Now with AI everyone MUST use AI everywhere! If you don’t you are a fool and going to get left behind! Do you know what you actually need it for? Not really but some article you read said you could fire 50% of your staff if you do.
I would quite honestly prefer every place to have their own web site instead of the ginormous amount of places that have facebook pages.