

Perhaps. I have had to avoid participating in a number of the vegan Lemmy communities because of the aforementioned contentiousness. I’d be wary about participating in another.


What was once old is new again!


I mean good-ish in the lesser-evil sense. I don’t expect any of those to be 100% ethical, but there are some that are a lot worse than others.
Ethics are subjective. “Good-ish” to you may mean you’re fine if it’s trained on copyrighted works as long as it wasn’t done with electricity from diesel generators belching exhaust into the local Memphis atmosphere (I’m looking at you, Grok). Llama doesn’t do the diesel generator thing, but it’s a product of the Facebook corporation. So is that “good-ish” to you or not? I don’t know. That’s up to you.
It may not be fast, but your i3 laptop with 12GB of system RAM can absolutely run a local LLM. This is where that “performance/accuracy” question I raised comes in: you won’t be able to run the most capable large models like GPT-5 etc., but if your needs are light, light models exist. Give this a read.
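For a rough sense of what fits in 12GB, you can do the back-of-the-envelope math yourself. This is a sketch with assumed numbers (4-bit quantization, roughly 1 GB of runtime/KV-cache overhead), not a benchmark of any specific model or runtime:

```python
# Rough sketch: will a small quantized model fit in system RAM?
# The 4-bit quantization and ~1 GB overhead figures are illustrative
# assumptions, not measurements of any particular runtime.

def model_ram_gb(params_billion: float, bits_per_weight: int,
                 overhead_gb: float = 1.0) -> float:
    """Approximate resident memory for a quantized LLM:
    weights (params * bits / 8) plus a rough allowance for
    KV cache and runtime overhead."""
    weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# A 3B-parameter model quantized to 4 bits: about 2.5 GB.
print(round(model_ram_gb(3, 4), 1))  # 2.5
# Even a 7B model at 4 bits (~4.5 GB) leaves headroom in 12 GB.
print(round(model_ram_gb(7, 4), 1))  # 4.5
```

By this estimate, the laptop in question has room for small and even mid-size quantized models; speed, not memory, is the real constraint.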


The Executive Branch’s tariff plan gets around Congressional spending controls. I’m pretty sure funds from tariffs are income that doesn’t come from the taxpayer budgets Congress controls and approves.


As much as I enjoy the food, it is far too politically charged for me to want to weigh in via moderation. Many vegans would eat me alive (figuratively, as I contain meat) for supporting animal murder. Many carnivores and other omnivores would do the same, assuming that anyone who likes eating vegetarian or vegan occasionally or regularly is trying to take away their choice to eat meat.


Depends on your definition of “good-ish”. Do you mean:
Running one locally on your own hardware would likely reach “good-ish” with some sacrifices in performance/accuracy (unless you’ve got a lot of expensive hardware to run very large models). As far as ethical origins, there are a few small models trained on public domain/non-stolen content, but their functions are far more limited.


Someone see if “Micros|op” (with the pipe character) or “MicrosIop” (with a capital letter i) is also blocked.


As an omnivore who will happily eat vegetarian or vegan when the opportunity presents itself, I, too, am looking forward to lab-grown meat.


Instead of directly marching into the Russian Federation, I’d hope that nations in the area could clean out other Russian infestations, like Transnistria in Moldova, and South Ossetia and Abkhazia in Georgia.


Yes, imagining the entire war has changed because of a single week is certainly something we learned not to do in the first year of the conflict. I agree.
The entire war changed in a single day in Sept 2022, when Ukraine regained 6,000 sq km in Kharkiv. That was when Russia still had tanks and contract soldiers who spoke Russian, unlike today, when they speak Swazi or Korean with a heavy northern accent.


OpenAI said it had found a way to put safeguards into its technologies that would somehow prevent the systems from being used in ways that it does not want them to be.
When pressed for specifics on the nature of the safeguards, OpenAI’s Altman replied, “We’ve included the phrase ‘pretty please don’t use this for killing people or spying on Americans’ in our contract with the Department of Defense. With this language in place, we’re confident that our company’s values of respecting human life and the privacy of all Americans are protected.” /s


It really doesn’t make sense to lump rent and mortgage together, and I feel like Gen Z is hit hardest because they’d have the lowest rates of homeownership.
The real title is the title of the graph in the article: “Gen Zers Most Likely to Struggle with Housing Payments”.
The article lumps rent and mortgage together because including both covers all the ways someone can pay for housing. The “hit hardest” part is in there to communicate that, while Gen Z is getting its ass kicked the most on housing costs, it isn’t the only generation having trouble.


Guess what I’m saying is I’ve sort of dared AI to suck me in, and … I am unchanged.
I’m not sure this tests the point I was raising. In all of those cases, you knew at the beginning that you were dealing with AI. Yes, the man in our article did too, but what if you didn’t know it was AI when you started interacting with it? How would your interactions change? What “safeguards” would you not have up if, for example, it appeared to you as a Lemmy poster instead of in a dedicated AI interaction window?
I don’t think for a second there is any sort of emotional or intelligent entity on the other end.
Of course, because there isn’t when we are rational. I also assume you are a psychologically healthy person. There is a suggestion the man in the article may have had an underlying condition, but he wasn’t aware of it.
I think if more people experimented with generation settings like temperature and watched AI go to incoherent acid trips, it would feel more like a machine to them.
I completely agree. I’ve done some experiments of my own, training a small LLM from scratch (not fine-tuning an existing commercial model) using training data exclusively from a small set of public domain books I have read. I then had this LLM produce output. Since I had read the books, I could see pieces of where it got components of its responses. Cranking up the temperature would make it go off the rails, which was fun to see. Overfitting made it try to give me something close to what I asked for, but obviously fail. I really liked the whole exercise because it was a small enough set of data, with all of the levers and knobs exposed, for me to see how far it could go and, more importantly, how far it couldn’t.

When I went looking for ethical investing options I saw how bad it is. The base problem is the answer to “what is ethical?”. There isn’t an objective answer. Are GMO grain crops bad and should not be invested in? What if those GMOs stave off famine because they are the only crop that can grow in a specific famine prone region?
Worse, even if you find funds that do mostly match your personal ethics, the expense ratio or fund loads negate most or all of the financial returns while also carrying much higher risks.
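To see how much a fee drags on returns, compare two funds compounding the same hypothetical return; every number here is made up purely for illustration:

```python
# Illustrative sketch of expense-ratio drag; all figures are hypothetical.

def final_value(principal, annual_return, expense_ratio, years):
    """Compound a lump sum, subtracting the fund's expense ratio each year."""
    net = 1 + annual_return - expense_ratio
    return principal * net ** years

# Same assumed 7% gross return over 30 years, different fees:
cheap = final_value(10_000, 0.07, 0.0005, 30)   # broad index fund, 0.05% fee
pricey = final_value(10_000, 0.07, 0.015, 30)   # niche fund with a 1.5% fee
print(round(cheap), round(pricey))
```

Under these assumptions the high-fee fund ends up roughly a third smaller over 30 years, before even accounting for the higher risk of a narrow fund.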


They were not a thing like they are today
I disagree with your statement.
Do I need to point to obvious examples such as the US Declaration of Independence in 1776?
“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness”
And human rights have always been a thing only respected by democracies. But nowhere as much as in EU where it is a requirement.
Even ancient Rome legally protected a number of things that we would call “human rights” today. I think you’re conveniently cherry-picking conditions and a time period to make your statement true while ignoring history. You’re welcome to do that, but I believe that’s intellectually dishonest. You’re free to your opinion and your position, though, so I’ll leave you to it. Thank you for conversing up to now. I hope you have a great day.


I’m very very confused. You…don’t think the concept of human rights existed before 1939 (or 1945)?


There are some multi-user aspects to LORD (Legend of the Red Dragon). You can trade and communicate with other players through turn-based messages (like mail). Additionally, you can attack other players who are not staying at an inn, or be attacked yourself by other players (PvP). This is available in addition to the PvE content (leveling up to go after the Red Dragon).
Because it’s turn-based, you can attack during your turns and instantly see the outcome against the offline player. The computer plays their part in battle, so you can choose to try to finish the battle or try to flee if you are getting your ass handed to you. As a defending player you’re not there for the battle, so when you log in you see the transcript of what happened, along with your fate and that of the other named player. It’s surprisingly exciting even reading it after the battle!


I think it’s much more likely that some puritanical human checker saw it was adult-themed and just destroyed it because they disagreed with the content.


I read this story this morning and have been thinking back to it all day. This wasn’t just some idiot who was too stupid or too young to realize he was talking to a bot and did something like drink bleach because it told him to.
This was one of us.
He fit lots of behaviors I see here from me and my fellow Lemmy posters. He:
Doesn’t this guy sound like someone that would be a Lemmy poster to you too?
He started using LLMs (ChatGPT specifically) as a tool only to advance his hobby and work. When he first started it appears he understood it was just a tool, and didn’t think it was something sentient. Only later after hundreds of hours of exposure did this idea arise in him.
Was there some underlying psychological problem that the LLM exacerbated? Possibly. But at what level was his original underlying issue? Do we all have some low level condition that would make us equally susceptible? I know we’d like to think we don’t, but how do we know? This man certainly didn’t think he did, I’m sure.
Next I think about what it would take for me to go down this bad path without realizing it. At what point would I be talking to a chat bot, not realize it, and let what that chat bot said change or influence my thoughts when I’d have zero knowledge of it being just a fancy program? I consider myself moderately smart with good critical thinking skills, but I’m sure this man did too.
Then it occurred to me that I have to concede I have, at some point, already interacted with a bot, in years past on Reddit or even today on Lemmy, without having any idea it was a bot. Was that interaction a throwaway conversation about pop culture with no impact on my worldview, or was it a much deeper and more important political or philosophical conversation in which the bot introduced an idea or hallucinated evidence to support a point, and I didn’t catch it to challenge it? Am I already a few or many steps down the bad path of falling for the illusions of a bot? I certainly don’t think so, but neither did he.
How many of us are already on the same path as this guy and just as ignorant about the danger as the man in the article?
It’s not often Trump officials admit to falling for a scam and attacking another country.
Mom: If someone else is jumping off a bridge, would you too?
GOP: Yes