

even in finland burgerking employees had strange wibe to them, i dont like that place.


even in finland burgerking employees had strange wibe to them, i dont like that place.


well, i keep tons of tabs open AND use a lot of bookmarks


yeah, i think that is because it knows how research papers should look like and how references look like, but since it has no reasoning, it will just do whatever. I used gpt to diagnose my problem with internet getting cut off and it determined its because of drivers, which sounds reasonable. Then it suggested that i download the latest ones and it did link to correct website but it also tried to download stuff that doesnt exist. No idea how it determined the version numbers and such, maybe based on earlier patterns.
But it isnt making stuff up, its just outputting the best data it can based on what it has been trained with and what it can find. Its not lazyness but just doing what its doing. Just like code that isnt doing what you want it to do isnt doing it out of malice but because there is a mistake in the code.
maybe he just didnt get the point of the story or something. I dont think you can act well if you cant get into the character


but they do have access to internet? At least gpt can search based on the text it outputs when its processing the query


joo, olin itsekkin tuossa vaiheessa jonkun aikaa sitten. nykyään en vain jaksa välittää enää, mutta olisin silti valmis välittämään jos ei tarvitsisi yksin olla sen asian kanssa ja voisi jotain konkreettista tehdä asioiden hyväksi


en siis kohdistanut sitä sinulle henkilökohtaisesti mitenkään :D enemmän tarkoitin yleisesti mitä tuosta voi seurata.
Ja viestin tarkoitus oli että miksi ei todellakaan pitäisi hyväksyä tätä ja että mitä siitä voi seurata jos hyväksyy


if the hallucinations are result of something actually happening in the background, that would be quite interesting. It would also be very bad for rest of us since it might mean the billionaires who own the damn things would be in position to get even worse deathgrip on our world. If they ever manage to create agi, the worst thing that could happen isnt that it breaks free and enslaves humanity but that it doesnt and it helps the billionaires enslave us further and make sure we cant ever even think about fighting back.
But i think the hallucinations are based on incorrect information in the training data, they did train it from stuff from reddit too. Any and everything will be considered true, but if 99% of the data says one thing and 1% says another, then i think it will reference that 99% more often but it cant know that the 1% is wrong, can even real humans know it for certain? And since it cant evaluate anything, there might be situations where that 1% of data might be more relevant due to some nebulous mechanism on how it processes data.
llms have been made to act extremely helpful and subservient, so if they actually could “think” wouldnt they factcheck themselves first before saying something? I have sometimes just asked “are you sure?” and the llm starts “profusely apologizing” for providing incorrect information or otherwise correcting itself.
Though i wonder how it would answer if it truely had no initialization querys, as they have same hidden instructions on every query you make on how to “behave” and what not to say.

can’t wait to be eaten alive by swarm of mecha insects


eli nuo käyttää jotain amerikkalaista järjestelmää automatisointiin, käytät itse eri palveluja kuten discord, google ja ties mitä jotka kerää sinusta tietoa. Se järjestelmä todennäköisesti käyttää hyväksi kaikkea sitä tietoa mitä sinusta on urkittu. -> annat vaikka discordille kasvokuvan tunnistautumista varten ja se menee sieltä ties minne tietokantaan missä sitä verrataan että näytätkö joltain terroristilta mistä niillä on kuva -> näytät 67% tai jotain. Tai sinusta on joissain muissa tiedoissa että harrastat jotain muuta kuin työnhakua 100% ja et ole selannut pelkästään siihen liittyviä nettisivuja.
Jokatapauksessa, kaikki sinusta kerätty tieto voi tuon myötä muuttua hyvin konkreettiseksi uhaksi.
Puhumattakaan siitä että tuo systeemi luovuttaa myös kaikki sinun henkilökohtaiset tiedot amerikkaan myös täältä päästä.


no, its incapable of making choices because there is nothing there to make the choices. Its just fancy way of interacting with the data it has been trained with. Though i suppose if there was a way to let llm function “live” instead of only by responding to queries, it could be possible to at least test if it could act on its own, but i dont think it can -> we would know by now because it would be step closer to agi, which is basically the holy grail for these kind of things. And equally possible to get, i think.
You can literally make the llm say and do anything with right kind of query, this is also why its impossible to make them safe. Even though you can’t directly ask for something forbidden, with some creativity you can bybass the initializations the corpos have put in. Its not possible for them to account for every single thing and if they try they will run out of token space.
The whole “ai” term is just corporations perpetuating a lie because it sounds impressive and thus makes people want to give them more money for their bullshit.
too bad there isnt a third option, like founding a new party or several.


there is no ai, only largelanguagemodel that has been trained on data. The data it has been trained suggests this is the best idea. llm cant evaluate the data its trained on so anything you put in will be equally valid. I give it that its really impressive how they can output the training results in such coherent way that can be kind of “conversed” with, but there is no will or intelligence behind it.
This is also why corporations insisting on putting them everywhere is quite horrible security issue -> you can jailbreak any llm and tell them to do anything. So this has enabled all kinds of stupid vulnerabilities that exploit this. Now you can even send someone malicious google calendar invites that makes gemini do bad shit to your systems its connected to.


well, except piefed also has some quite bad issues: https://sopuli.xyz/post/40286456 this post discusses them


ffs, why cant there be anything nice in this world… everything has to have some downside or looming problem. How can i even recommend lemmy to people if there is a problem of " yea, its main developers are crazy tankies, but maybe its going to be okay?"
Though by now i hope there are enough interested developers in lemmy in general that hopefully it can be branched if things get too bad.


when you are aware of the things company is doing and still continue buying from them. Though if the company has monopoly and you are dependent of the product, then its a bit different.


yeah, a bit too extreme take from me. I’m just so annoyed about people who apathetically keep supporting things that make our world worse or that are produced from suffering of others.


I havent seen any evidence or mentions of it
this is what you get when you elect piss hitler. Next is annexation of greenland and after that likely ww3 if people continue to be apathetic.