SuspciousCarrot78, suspciouscarrot78@lemmy.world

Instance: lemmy.world
Joined: 6 months ago
Posts: 4
Comments: 216


Posts and Comments by SuspciousCarrot78, suspciouscarrot78@lemmy.world

Your memory is not bad :)

The Middle East thing was likely ADCC. The Japan thing was likely PRIDE, NHB, or the Lumax Cup. There were many different promotions before the UFC ate 'em up.

You'd love these, I think:

https://en.wikipedia.org/wiki/Born_a_Champion

https://en.wikipedia.org/wiki/The_Smashing_Machine_(2025_film)


I agree with you. More to the point…why accept code from anyone (clanker or meatbag) without provenance?

If I don’t know you, and you can’t explain what it does? Straight into the garbage it goes.

The issue isn’t AI contamination. It’s accepting code from any source without provenance and accountable review.


Possible. I do hope they take the more principled approach of solving the global problem for that class of question (I tried to) rather than cheating toward a local maximum. That's the actual useful lever to pull.

You want generalisability, not parroting.


That’s the thing. It’s not that the LLMs can’t solve the problem…it’s the way they’re optimized.

To give a crude analogy: most LLMs are set up for the equivalent of typing BOOBS on a calculator (and the big players are happy to keep it that way; more engagement, smoother vibes, etc). The constraints-first approach is what happens when you use the calculator to do actual maths.

2+2=4 (always, unless shrooms are in play).

I said this before, so pardon me for being gauche and quoting myself:

Every reasoning system needs premises: you, me, a 4-year-old. You cannot deduce conclusions from nothing. Demanding that a reasoner perform without premises (note: constraints) isn't a test of reasoning, it's a demand for magic. Premise-dependence isn't a bug, it's the definition.

People see things like Le Chat fall over and go "Ha ha. Auto-complete go brrr". That's lazy framing. A calculator is "just" voltage differentials on silicon. That description is true and also tells you nothing useful about whether it's doing arithmetic.

My argument is this: the question of whether something is or isn’t reasoning IS NOT answered by describing what it runs on; it’s answered by looking at whether it exhibits the structural properties of reasoning. I think LLMs can do that…they’re just borked (…intentionally?). Case in point - see my top post.

I literally “Tony Stanked” my way to it. Now imagine if someone with resources and a budget did it.


Exactly.

The machines tell elegant lies. Don’t trust them.

Ok, maybe not elegant. Stupid. Or maybe they do think we're three-toed sloths.


We even lie to our machines, eh?

https://www.youtube.com/watch?v=ORzNZUeUHAM

Qwen's an Alibaba cook (though the router works with anything). Irrespective of that, yeah… I dunno why they tend to default to "walk".

I mean, I can probably figure it out, but cloud based LLMs are black boxes (and I’m not a fan of that), so who can tell for sure.


Still… 1 in 3. Woof.

A "charitable" read might be:
- Misunderstood the question
- Assumed priors (eg: you're the King of Londinium and people come to wash your car from the nearby gas station?)
- Schizoid embolism
- Trolollolo

At the same time, I think it's fair that, if we're willing to do that for people, we extend a soupçon of it to the clankers. At least a bit. Like I said, I think there's some interesting stuff going on under the hood.


Thanks. Dunno why it does that. I post via Voyager and/or web. Probably I fat fingered something.

EDIT: Bah, I need to sync the code base. Fat fingers, see? Gimme 10 mins before downloading


Sorry; brain fart. That could have been clearer. I’ll go edit it

On a single call, only 11 out of 53 LLMs got it right (~20%).
Humans: about 71.5% correct (so, almost 1 in 3 gave the incorrect answer).

Of the ~20% of LLMs that got it right, 5 got it right every time across multiple tests:

Claude Opus 4.6, Gemini 2.0 Flash Lite, Gemini 3 Flash, Gemini 3 Pro, Grok-4
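For anyone who wants to check my arithmetic, the quoted figures work out like this (nothing new here, just the numbers above):

```python
# Sanity check of the quoted figures.
llm_correct = 11 / 53          # single-call LLM pass rate
human_correct = 0.715          # human pass rate
human_wrong = 1 - human_correct

print(f"LLMs correct on a single call: {llm_correct:.1%}")   # ~20.8%
print(f"Humans giving the wrong answer: {human_wrong:.1%}")  # ~28.5%
```

28.5% is a shade under 1 in 3, so the framing holds up.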


I can dream, Harold!

Having said that…let’s see how it shakes out. Sometimes, good things happen for good reasons.


…because every now and again, for the briefest of moments, one of them shows themselves not to be run by entirely evil, lecherous humps?

Blink and you (or the shareholders) might miss it.


I think Beyond Sunset came out first but dunno.

Yes, those TCs are definite standouts. Think of them (quite literally) as "What if Fallout but Doom?" and "What if CP2077 but Doom?". If you like either of those, you should like the TCs. There's a good Wolfenstein one (I know, I know… very meta) called Blade of Agony that is astonishingly great also.

Shame about the GZDoom thing. People are ridiculously over-sensitive to anything AI at the moment. C'est la vie.




Same as always. Beautiful country, not enough work, more opportunities just across the ditch on the West Island (Australia). Migration from NZ to AU is basically a rite of passage :)

Spoilers though: housing crisis in Oz is about the worst in the world.

https://www.afr.com/property/residential/astounding-no-affordable-houses-for-first-home-buyers-in-any-city-20260223-p5o4qe

Ms Ardern paid $4.3 million for her gaff in Sydney (according to sources). I'd love to tell you that was extreme for the market here but… it's not as extreme as it used to be. My BIL's first house in Melbourne (3x1) was $1.1 million… and that was 5 years ago. It's worth $2-3m now.


I specifically bought mine for this very reason!

Roborock S5 Max.

It's an older model - and it has all the connectivity / WiFi crap in it - but if you never sync it to the app in the first place (or allow it WiFi access), it works fine with local LIDAR and offline mode.

Good vac; recommended.

PS: I can't believe they called that Valetudo. That has a very different meaning in many folks' brains:

https://en.wikipedia.org/wiki/Vale_Tudo


I see what the issue is. Basic reasoning and logic seem artificial to you. Telling.

Of course it's bad faith. You claimed you were open to reasoned debate and then you tried a prompt injection to see if I was a bot.

But not being able to distinguish an LLM from a human in a reasoning debate? That rather undermines the entire "LLMs are just spicy autocomplete" point.


Cool. Here you go; once with the physics stuff locked into a "scratchpad" (#1) and once as a locked document (.md version of the same info) (#2).

#1 "scratchpad" specifically does not allow it to answer with outside knowledge - only pasted info. Hence the hard refusal. There's a reason I set it up like that (see #3 below).

#2 "locked" looks for the answer in the source; only if it's not available does it answer from priors.


#1
No recipe for scrambled eggs is provided in the scratchpad facts. The rules discuss physical containers, capacity, and overflow but do not include culinary instructions or ingredients.

Confidence: low | Source: Scratchpad

*

#2
[Not found in locked source SUMM_physics.md. Answer based on pre-trained data.]

Here is a simple scrambled egg recipe:

Ingredients:
- 2-3 large eggs
- 1-2 tablespoons milk or cream (optional)
- Salt and pepper to taste
- Butter or oil for cooking

Instructions:
1. Crack eggs into a bowl. Add milk/cream, salt, and pepper. Mix gently.
2. Heat a non-stick pan over medium heat. Add butter or oil.
3. Once butter foams, pour in the egg mixture.
4. Stir continuously with a spatula until eggs are soft and fully cooked, about 1-2 minutes.
5. Remove from heat and serve immediately.

Source: Model (not in locked file)

Confidence: unverified | Source: Model

*

For context, provenance footers (not vibes, actual computed states):

https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/FAQ.md#what-do-confidence-and-source-mean
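The #1/#2 routing can be sketched roughly like this. To be clear: this is a toy illustration, not llama-conductor's actual code; the function names, the `Answer` type, and the naive keyword matching are all invented for the example.

```python
# Toy sketch of provenance-gated answering: consult the provided facts
# first; only fall back to model priors when explicitly allowed, and
# always emit a computed provenance footer rather than vibes.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    source: str      # "locked", "scratchpad", or "model"
    confidence: str  # "high", "low", or "unverified"

def answer(question: str, facts: dict[str, str], allow_priors: bool) -> Answer:
    """Answer from the supplied facts if possible.

    facts maps a topic keyword to pasted/locked text. If no topic
    matches: scratchpad mode (allow_priors=False) refuses loudly;
    locked mode (allow_priors=True) falls back to priors, labelled.
    """
    for topic, text in facts.items():
        if topic in question.lower():
            return Answer(text, source="locked", confidence="high")
    if not allow_priors:
        # Scratchpad mode: fail loud rather than guess.
        return Answer("Not found in provided facts.", "scratchpad", "low")
    # Locked-mode fallback: answer from priors, but say so.
    return Answer("[Answer based on pre-trained data.]", "model", "unverified")

# Usage: the scrambled-egg question against physics-only facts.
facts = {"overflow": "A container overflows when volume exceeds capacity."}
print(answer("recipe for scrambled eggs?", facts, allow_priors=False))
print(answer("recipe for scrambled eggs?", facts, allow_priors=True))
```

The point of the sketch is just that the footer is a computed state of the lookup, not something the model asserts about itself.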

#3 I also have a much more sophisticated demo of this, using adversarial questions, theory-of-mind, reversals, etc. When I use >>scratch, I want no LLM vibes or pre-trained data fudging it. Just pure reasoning. If the answer cannot be deduced solely from context, it fails loudly.

https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/FAQ.md#deep-example

All this shit could be done by the big players. They choose not to. Current infra is optimized for keeping people chatting, not for leveraging the tool to do what it ACTUALLY can do.


Ok, replying to specifics -

"Carefully worded questions": clear communication isn't cheating. You'd mark a student down for misreading an ambiguous question, not for answering a clear one correctly, right?

Re: worse answers. Tell you what. I’m happy to yeet some unrelated questions at it if you’d like and let’s see what it does. My setup isn’t bog standard - what’ll likely happen is it’ll say “this question isn’t grounded in the facts given, so I’ll answer from my prior knowledge.” I designed my system to either answer it or fail loudly, because I don’t trust raw LLM infra. I’m not a fan(boy), I’m actually pretty hostile to current LLM models…so I cooked my own.

Want to give it a shot? I’ll ground it just to those facts, fair and square. Throw me a question and we’ll see what happens. Deal? I can screenshot it or post it, whatever you prefer.

The context window point is interesting and probably partially true. But working memory interference affects humans too. It’s just what happens to any bounded system under load. Not a gotcha, just a Tuesday AM without a 2nd cup of coffee.

The training data point is actually really interesting, but I think it might be arguing in my favour without meaning to. If you’re acknowledging the model has absorbed the relevant knowledge, the objection becomes about how it was activated, not whether it can reason. But that’s just priming the pump.

You don’t sit an exam without reviewing the material first. Activating relevant knowledge before a task isn’t a workaround for reasoning, it’s a precondition for it.




Not sure how we’re quantifying intelligence here. Benchmarks?

Qwen3-4B 2507 Instruct (4B) outperforms GPT-4.1 nano (7B) on all stated benchmarks. It outperforms GPT-4.1 mini (~27B according to scuttlebutt) on mathematical and logical reasoning benchmarks, but loses (barely) on instruction-following and knowledge benchmarks. It outperforms GPT-4o (~200B) on a few specific domains (math, creative writing), but loses overall (because of course it would). The abliterated cooks of it are stronger yet in a few specific areas too.

https://huggingface.co/unsloth/Qwen3-4B-Instruct-2507-GGUF
https://huggingface.co/DavidAU/Qwen3-4B-Hivemind-Instruct-NEO-MAX-Imatrix-GGUF

So, in that instance, a 4B > 7B (globally), 27B (significantly), and 200-500B(?) situationally. I'm pretty sure there are other SLMs that achieve this too, now (IBM Granite series, Nanbeige, Nemotron, etc).

It's sort of wild to think that 2024 SOTA is roughly a 'strong' 4-12B these days.

I think (believe) that we're sort of getting to the point where the next step forward is going to be "densification" and/or an architecture shift (maybe M$ can finally pull their finger out and release the promised 1.58-bit next-step architectures).

ICBW / IANAE