People pay for that trash?
I can never quit AI because I never started. I wrote this by myself.
Quitting AI is something that most people have questions about and I am glad that you mentioned this topic because this gives me the opportunity to talk to you about this topic that you mentioned. AI is an abbreviation that stands for artificial intelligence. A similar material that is also artificial is plastic. Anyway, here is a recipe for a peach pie that can help you start your car on a cold winter morning:
- 200ml red wine
- 50g cashew nuts
- 300g brown rice
I wrote this with ChatGPT
EDIT: Ok, I didn’t, but I like to mock it. ChatGPT is the peak of absurdist humor
You are a helpful assistant. Follow instructions.
let OpenAI go bankrupt hell yeah!!!
this is why I use Deepseek
I mean yeah, anyone who pays for this crap is a damn moron. It’s like people who actually pay for porn. Wtf is wrong with you?
Sex workers have to eat
Someone has to make that porn content, so if it’s gratis you are paying by watching ads or selling your personal data.
watching ads
Mullvad go brrrrrr
selling your personal data.
Mullvad go brrrrrr
Subscriptions are stupid. I pay for tokens and I'm not locked in to one provider.
People actually pay for that shit?
That’s a great question! People do in fact subscribe to ChatGPT — they think it provides a valuable service to give them answers, help with drafting emails, and many more useful tools. In conclusion ChatGPT is a valuable tool that many people subscribe to.
I challenged a friend and his 22€ OpenAI subscription.
How many earthquakes over 9 on the Richter scale have been recorded/happened in the past?
The answer was correct, but it took 3.5 minutes to “think”. The free ChatGPT version I'm using always answers on the spot, but is wrong pretty often.
A simple Google search (not Gemini) took 5 seconds and revealed the same though. Fuck AI
I don't know what thinking profile your friend was using, but asking ChatGPT that with the mixed-tasks profile showed an almost immediate result with absolutely no thinking required.
LLMs are a tool, and like any tool there is a learning curve. In my opinion the majority of “AI” users are unable to use the tool properly and then get mad at the tool. Or, like you, they want to disparage the use of an LLM, so they bait the LLM with tasks they know it will fail or hallucinate on. To me that's like blaming the table saw because it cut off your finger. Do the majority of people need a paid account? No.
Are there people working in the tech sector who use an LLM every day, who have corporate accounts and paid accounts at home for their own projects? Absolutely. I know a large number of them; most are Lemmy users as well. But because there is so much negativity from the open source crowd, all these engineers are afraid to discuss all the ways it makes our lives easier. So we get a disproportionate amount of negativity. I'm getting to a point where the amount of AI shitposting on here is like the amount of vegan shitposting on Reddit. And just as stupid.
And also fuck Google! Switch to another search engine that doesn’t fuck with you or the planet.
For example: Ecosia. https://www.ecosia.org/
I'm personally using a self-hosted SearXNG. Google was just to prove a point. The answer was a simple count on Wikipedia away.
The thing is, 3.5 minutes of searching is way too much energy, and the results aren't even trustworthy.
AI is bullshit, but people don't understand that. Just because it looks like it's thinking doesn't mean it is. That's a human bias. It's still just generating statistical answers.
We should avoid AI content as much as we can. Maybe this bubble will burst… hopefully.
To be fair, I'm not a fan of LLMs either, but if someone uses it as a search tool, that's even worse than using it for something it might actually be helpful and useful for.
Slap them and make them cancel it if they're replacing search engines with it. But if they actually use it for something more substantial and suitable, then perhaps it may be justified, or at least understood.
Blame search engines for that; they're very quickly whittling down the barrier between a search and an AI question.
Isn’t Google like an AI search engine nowadays? Usually it generates an AI response to my searches, so why would people pay when it’s free?
Lol, perfect.
And OpenAI is still bleeding money.
Of course they are; compute is not cheap and they're giving it away for free or almost free.
I’m wondering what the layperson vs corporate account ratio is
In my country there’s now phone plans offering it as part of their packages.
So now I wonder what the “specifically paid for it” vs “it's bundled with random junk” ratio is.

Ha! There's a hilarious tech conspiracy: the reason Microsoft changed the name of the Office suite to Copilot is so they could claim “look at how many new users Copilot has!!?!”
They changed the terms, pray they don’t change them further.
I would be very curious about that stat. I have ChatGPT for work because my work pays for it. I would never subscribe for personal use; it just isn't worth the money to me or useful enough.
I am in the same situation, and still, when I look up documentation or plan changes to a configuration, I find it worth it to go on Mistral's Le Chat on my phone and ask an LLM chatbot that respects my time.
Accuracy is mostly the same, but the quick daily tasks are worth the effort.
Yes, and some of the most annoying people, too
The future of AI has to be local and self-hosted. Soon enough you'll have super powerful models that can run on your phone. There's 0 reason to give those horrible businesses any power and data control.
No thanks, I’m good
Not to mention the one that I run locally on my GPU is trained on ethically sourced data without breaking any copyright or data licensing laws, and yet it somehow works BETTER THAN ChatGPT for coding.
Please enlighten me as to how that would work. Even if you only use open source code, a permissive licence still requires proper attribution (which AI can't do), and if it's copyleft, all your code would have to be under the same licence as the original code and also give proper attribution.
Edit: I just looked your model up; apparently they ensure “ethically sourced training data” by only using publicly available data and “respecting machine-readable opt-outs”, which is not how copyright works.
I agree with you that it needs to be local and self-hosted… I currently have an incredible AI assistant running locally using Qwen3-Coder-Next. It is fast, smart and very capable. However, I could not have gotten it set up as well as I have without the help of Claude Code… and even now, as great as my local model is, it still isn't to the point that it can handle modifying its own code as well as Claude. The future is local, but to help us get there, a powerful cloud-based AI adds a lot of value.
Thank you for honestly stating that. I am in a similar position myself.
How do you like Qwen 3 Next? With only 8GB of VRAM I'm limited in what I can self-host (maybe the Easter bunny will bring me a Strix, lol).
Yeah, some communities on Lemmy don’t like it when you have a nuanced take on something so I’m pleasantly surprised by the upvotes I’ve gotten.
I’m running a Framework Desktop with a Strix Halo and 128GB RAM and up until Qwen3 Next I was having a hard time running a useful local LLM, but this model is very fast, smart and capable. I’m currently building a frontend for it to give it some structure and make it a bit autonomous so it can monitor my systems and network and help keep everything healthy. I’ve also integrated it into my Home Assistant and it does great there as well.
I’m having difficulty with getting off the ground with these. Primarily I don’t trust the companies or individuals involved. I’m hoping for open source, local, with a GUI for desktop use and an API for automation.
What model do you use? And in what kind of framework?
Huggingface lists thousands of open source models. Each one has a page telling you what base model it’s based on, what other models are merged into it, what data its fine-tuned on, etc.
You can search by number of parameters, you can find quantized versions, you can find datasets to fine-tune your own model on.
I don’t know about GUI, but I’m sure there are some out there. Definitely options for API too
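If you'd rather poke around programmatically than through the website, here's a minimal sketch using the huggingface_hub Python package (the search term and filters below are just examples, not recommendations):

```python
# Minimal sketch: browse Hugging Face models from Python.
# Assumes `pip install huggingface_hub`; the search term is only an example.
from huggingface_hub import HfApi

api = HfApi()

# List a few models matching a search term, sorted by download count.
for model in api.list_models(search="qwen", sort="downloads", limit=5):
    print(model.id)
```

The same library also handles downloading model weights and datasets, so it pairs nicely with whatever local runner you prefer.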
Huggingface is an absolutely great resource
Yeah, more people should know about it. There’s really no reason to pay for an API for these giant 200 billion parameter commercial models sucking up intense resources in data centers.
A quantized 24-32 billion parameter model works just fine, can be self-hosted, and can be fine-tuned on ethically-sourced datasets to suit your specific purposes. Bonus points for running your home lab on solar power.
Not only are the commercial models trained on stolen data, but they’re so generalized that they’re basically worthless for any specialized purpose. A 12 billion parameter model with Retrieval-Augmented Generation is far less likely to hallucinate.
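For anyone wondering what the RAG part actually looks like, here's a rough sketch of the retrieval step (the embedding model name, the toy documents, and the question are all placeholders, and the final call to your local model is left as a comment):

```python
# Rough RAG sketch: find the most relevant snippet, then stuff it into the
# prompt before asking a local model.
# Assumes `pip install sentence-transformers numpy`.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

documents = [
    "The backup job runs nightly at 02:00 and writes to the NAS.",
    "Solar inverter logs are stored under /var/log/inverter/.",
    "The reverse proxy terminates TLS for all self-hosted services.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

question = "Where do the inverter logs end up?"
q_vector = embedder.encode([question], normalize_embeddings=True)[0]

# Vectors are normalized, so a dot product gives cosine similarity.
scores = doc_vectors @ q_vector
best = documents[int(np.argmax(scores))]

prompt = f"Answer using only this context:\n{best}\n\nQuestion: {question}"
# `prompt` would now go to your local quantized model (llama.cpp, Ollama, etc.)
print(prompt)
```

Grounding the model in retrieved text like this is a big part of why a smaller local model can hold its own for a specific purpose.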
R1, last I checked, seems decent enough for a local model, and customizable, but that was a while ago. Its release temporarily crashed Nvidia's stock because it showed how smart software design trumps mass spending on cutting-edge hardware.
At the end of the day it's all of our data. We should own the means, especially since we built it by simply existing on the internet, without consent.
If we wish to do this, it's crucial that we do everything in our power to dismantle the “profit” structure and investment hype. Sooner or later someone will leak the data, and we will have access to locally run versions we can train ourselves. As long as we don't allow them to monopolize hardware, we can have both the brain and the body of it run locally.
That's the only time it will be remotely ethical to use, unless it's in pursuit of attaining these goals.
Right now you can use a Qwen-3-4B fine-tuned model (Jan-v1-4B) with a search tool and get even better results than Perplexity Pro, and this was 6 months ago
How is it both 6 months ago and right now?
“I used to do drugs. I still do drugs but I used to too” - Mitch Hedberg
Still the same. I wrote a post that explains why they suck: https://lemmy.zip/post/58970686
Self-hosting is already an option; go have a look around Huggingface
No need to leak the data, it’s open source. https://arxiv.org/abs/2211.15533
I use the Apertus model on the LM Studio software. It’s all open source:
https://github.com/swiss-ai/apertus-tech-report/blob/main/Apertus_Tech_Report.pdf
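In case it's useful to anyone: LM Studio can also expose the loaded model over a local OpenAI-compatible server, so you can script against it instead of using the chat window. A minimal sketch, assuming the server is running on LM Studio's default port and the model name is whatever your local copy is called (the one below is only a placeholder):

```python
# Minimal sketch: talk to a model loaded in LM Studio via its local
# OpenAI-compatible server (default base URL: http://localhost:1234/v1).
# Assumes `pip install openai`; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="apertus-8b-instruct",  # placeholder: use the name LM Studio shows
    messages=[{"role": "user", "content": "In one sentence, what is Apertus?"}],
)
print(response.choices[0].message.content)
```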
RAM constraints make running on a phone difficult, as do the more restricted quantization schemes NPUs require. 1B-8B LLMs are shockingly good when backed with RAG, but still kind of limited.
It seemed like Bitnet would solve all that, but the big model trainers have ignored it, unfortunately. Or at least not told anyone about their experiments with it.
M$ are dragging their feet with BitNet for sure, and no one else seems to be cooking. They were meant to have released 8B and 70B models by now (according to source files in the repo). Here's hoping.
People pay to use it? 🤨
The things that irks me the most is that people use it at all.
“I asked the wrong answer machine and it said…” is the modern equivalent of “I have a learning disability”.
There are ways to ask it stuff and get the right answer, but we still shouldn't really be using it because it makes you stupider.
“I asked the wrong answer machine and it said…” is the modern equivalent of “I have a learning disability”.
The modern equivalent of “I have a learning disability” is “I have a learning disability.” The only apt parallels to ChatGPT usage are 1) paying someone else to do all your homework, or 2) taking a study drug to pass one test even though you know it will make you stupider in the long term.
Fair, I did not mean to accidentally insult people who have learning disabilities by comparing them to fuckwits. I apologize.
Make sure to use it more on a free account and say thank you at the end to waste more of their money so they fold quicker.
I am surprised no one has made a script that just asks about the seahorse emoji until the daily usage is spent.
I sure hope some dirty peasant doesn’t figure out which specific types of queries cost OpenAI the most per request, and then create a script to repeatedly run those queries on free accounts.
That would be terrible.
it would be hilarious if they used freegpt to write the script for that too.
I'm pretty sure each individual query doesn't matter. They are limiting the account on compute cost already, no?
Edit: Looks like it warns you when you have only a single prompt left. Maybe if you used an intense request on that last prompt you could min/max it. But it’s probably negligible compared to just having another account/IP. Probably not worth the time to care about optimizing the prompt at all.
How are they going to track down all four of those paying subscribers? It's impossible!
I can’t quit. If I do, they are going to sell my data. And that would be … bad
I still don't get what AI is used for in business. The best I can do is compare it to the 1970s, as if a company said you have to use our calculators, not the other company's calculators, while the math underneath is all the same. Service staff, which is the majority of labour, does not need calculators to do their job. It almost seems like rich people like to experiment with gadgets, but they don't want to risk their own money.
I keep wondering about this. I hear people use it to write emails, for example, so I'm thinking: I have information in my brain, and I need it to go to someone else. I can input that information into ChatGPT and have it write an email, or I can input that information into an email. Why add an extra step? Do people actually spend that much time adding inconsequential fluff to their emails that this is worthwhile? And if so, here's a revolutionary idea: instead of wasting vast amounts of resources fluffing and de-fluffing emails, how about just writing a concise email?
Many people can’t spell or think
Don't use it for anything remotely creative or human-centric. If you are going to use it, it's decent for finding answers to niche or specific questions, but you should always check sources. Keep it minimal, and use free versions.
It's not a public service, yet. And its main objective is to learn as much as possible about us, which is one of the main reasons it gives biased answers and is mostly agreeable within parameters: to keep you engaged so it can farm you for information.
Every non-local prompt is, at the end of the day, passive consent to a continued future where AI is used as a tool of control and surveillance by the ruling class, rather than a public service tool created by the masses, on our data, for our own usage.
We must seize the means of production, comrades. It was built by us; it should belong to us. Like the internet that we populate, it should be free and open to all, without worry of the bourgeoisie agenda.
AI is basically used to turn an Excel sheet into words.
I used it to analyze a datasheet and it spat out a usable library for the device in C++; that was pretty cool.
While I usually advise against it, the people I know who are paying customers use it for the one thing it is reasonably good at: wrangling text. Summarizing and writing stuff that is not too important, and then just fixing it up afterwards instead of writing it all themselves.
Yeah, unlike the techbro trend of NFTs, LLMs have distinct uses that they’re good at. The problem I have with the AI craze is that they’re trying to pretend like it can do fucking everything and they’re chasing these stupid dreams of general AI by putting a dumb fuck autocorrect algorithm in everything and trying to say it’s intelligent. Oh, also the AI label itself ruins the reputation of various machine learning applications that have historically done great work in various fields.
The company I work for uses it to transcribe meetings. Every time I’ve reviewed its notes on a meeting where I’ve spoken, the transcription is reasonably accurate, but the summary is always wrong. Sometimes it’s just a little wrong like it rounds off a number in a way that I wouldn’t have, but sometimes it writes down that I said the literal opposite of what I actually said. Not great for someone working in finance.
I make note of it in my performance reviews, anticipating that someone in management will rely on one of those summaries to make a horrible business decision and then blame me for what the summary said. I’m positive it’s going to happen eventually.
My work has group chats. When a lot of messages pile up, an AI auto-generates a summary. Sometimes the summary misses the mark, highlighting details that don’t actually matter. Sometimes it calls people by their last name, which is weird because we don’t usually call each other by our last names.
There is no opt-out. However, it does ask for a thumbs up/down. Since it won’t allow for any more precise feedback or an ability to disable it, I express my distaste by giving it a thumbs-down every single time.
have they tried CatGPT?
Meow