afk_strats



afk_strats ,

Before reading your comment, I knew IN MY SOUL that this was from the shittier part of Florida

afk_strats ,

surprised_pikachu.jpg

afk_strats ,

Reposting for the 3rd time

TLDR: I took the plunge on an OLED TV in 2021 as a primary monitor and it's been incredible

I've been using an LG C1 48" OLED TV as my sole monitor for my full-time job, my photography, and gaming since the start of 2021. I think it's at around 8500 hours of screen time now (roughly 3000 when I first posted this, 4500 the second time). It averages over 10 hours of on time per weekday.

It typically stays around 90 brightness (up from 40). That was all I needed given the size and how close it sits to my face, but I now have a bright office. All of the burn-in protection features are on (auto dimming, burn-in protection, pixel rotation), but I have the Mac (previously Windows) set to never sleep for work reasons.

Burn-in has not been a thing. Sometimes, I leave it on with a spreadsheet open or a photo being edited overnight because I'm dumb. High-brightness, high-contrast areas might leave a spot visible in certain greys, but by then the TV will ask me to "refresh pixels" and it'll be gone when I next turn the TV on. The taskbar has not burned in.

Update in 2026 at 8500+ hours: there is minor graininess in midtone, flat grays. Not distracting or even a risk for photo-sensitive work, but I can find it if I know to look for it.

Experience for work, reading, dev: 8/10

Pros: screen real estate. One 48" monitor is roughly four 1080p 22" monitors tiled. The ergonomics are great. Text readability is very good, especially in dark mode.

Cons: sharing my full screen is annoying for others because it's so big. The video camera has to be placed a bit higher than ideal, so I'm at a slightly too-high angle for video conferences.

This is categorically a better working monitor than my previous cheap dual-4K setup, but text sharpness is not as good as a high-end LCD with retina-like density because 1) the pixel density is lower and 2) the subpixel configuration on OLED is not as good for text rendering. This has never been an issue for my working life.

Experience with photo and video editing: 10/10

Outside of dedicated professional monitors which are extremely expensive, there is no better option for color reproduction and contrast. From what I've seen in the consumer sector, maybe Apple monitors are at this level but the price is 4 or 5x.

Gaming: 10/10

2160p at 120 Hz HDR with ~3 ms input lag, perfect contrast, and extremely good color reproduction.

  • FPSs feel really good.
  • Anything dark/horror pops.
  • A lot of real estate for RTSs.
  • Maybe flight sim would have benefited from a dual-monitor setup?

I've never had anything but a good gaming experience. I did have a 144 Hz monitor before and going to 120 IS marginally noticeable for me, but I don't think it's detrimental at the level I play (suck).

Reviewers had mentioned that it's good for consoles too, though I never bothered.

Movies and TV: 10/10
4K HDR is better than theaters' picture quality in a dark room. Everything I've thrown on it has been great.

Final notes/recommendations
This is my third LG OLED and I've seen the picture quality dramatically increase over the years. Burn-in used to be a real issue and grays were trashed on my first OLED after about 1000 hours.

Unfortunately, I have to turn the TV on from the remote every time. It does automatically turn off from no signal after the computer's screen sleep timer, which is a good feature.
There are open source programs which get around this.

This TV has never been connected to the Internet... I've learned my lesson with previous LG TVs. They spy, they get ads, they have horrendous privacy policies, and they have updates which kill performance or features... Just don't. Get a streaming box.

You need space for it, width and depth wise.
The price is high (around 1k USD on sale), but not compared with gaming monitors, and especially not compared with two gaming monitors.

Pixel rotation is noticeable when the entire screen shifts over a pixel or two. It will also mess with you if you have reference pixels at the edge of the screen. This can be turned off.

Burn-in protection is also noticeable on mostly static images. I wiggle my window if it gets in my way. This can also be turned off.

afk_strats ,
  1. You can look at manufacturers' info pages and see what they support. Intel integrated chips usually list the capabilities, and you'll want to double-check with your mini PC or motherboard manufacturer to make sure they support it too. I think any i5 or better from the past 5 years with integrated graphics should be able to play/decode 4K media (someone correct me if this sounds crazy). For sure my Core Ultra 265 can. As far as codec support, I'm not familiar with the compatibilities, but I'm sure everything CAN be played on recent-ish hardware; see the quick sanity checks after this list. Encoding is out of my wheelhouse.

  2. I've used HDMI 2.1 HDR 4K120 on Linux with Nvidia, AMD, and integrated Intel. AMD will be the best experience, especially on cards from the past 5 years. Nvidia, with proprietary drivers, on 3000 series or newer should be good for a few more years. I heard 2000 series will be dropped from support soon-ish. Intel HDMI 2.1 is a pain on Linux and I've only been able to get HDR 4K120 using a special DP-to-HDMI cable.
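If it helps, these are the kinds of quick checks I'd run on a Linux box before trusting a spec sheet. Just a sketch: it assumes libva-utils and ffmpeg are installed (package names vary by distro), and vainfo only covers the VA-API path (Intel/AMD), not Nvidia's NVDEC.

```
# List the codec profiles the GPU claims to decode in hardware via VA-API.
# For 4K media you generally want to see HEVC Main10, VP9, and/or AV1 entries.
vainfo

# List the hardware-acceleration backends this ffmpeg build was compiled with
# (vaapi, vdpau, cuda, etc.)
ffmpeg -hide_banner -hwaccels

# Confirm the mode the display is actually running at (X11; on Wayland,
# check your compositor's display settings instead).
xrandr | grep '\*'
```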

afk_strats ,

Notepad++ works fine under Wine on Mac and Linux. After being away from it for a while, I realized I don't need it anymore. I would often use the column edit mode and recorded macros, but I just bash script those now. I guess I'm a different person now?!?
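As an example of what I mean (the filenames here are made up, and there are a dozen equally good ways to do this), the column-edit and recorded-macro use cases map pretty directly onto awk/sed one-liners:

```
# "Column edit": rewrite the 2nd whitespace-separated column on every line,
# e.g. prefix each hostname with https://
awk '{ $2 = "https://" $2; print }' hosts.txt > hosts.out

# "Recorded macro": apply the same edit to every line,
# e.g. wrap each line in quotes and add a trailing comma
sed 's/.*/"&",/' names.txt > names.out
```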

afk_strats ,

Proud

afk_strats ,

First of all, this is an incredible shot!
Second, I think your edit is solid. I'm posting my edit of your pic to show you my interpretation of what I would post. I'm not implying that I somehow corrected your work; rather, I went with my gut for what looks good to me.

The biggest change is the crop and I chose it because I wanted to highlight the fine textures captured and make sure they're viewable even on small phone screens.

I pumped contrast and saturation to my liking. I pushed blacks up and whites down to make sure I wasn't clipping.
In doing so, I started seeing a lot of color noise, so I corrected that. There was also chromatic aberration/fringing on the wings, which I removed.

I also added sharpening, and a slight vignette to draw the eye to the center.

Lightroom settings (easily reproducible in Darktable):

Exposure -0.20
Highlights -34
Shadows -18
Whites -4
Blacks +41
Temp -4/100 (towards cool)
Tint -3/100 (towards green)
Vibrance +27
Saturation +37

Tonal Contrast 
Texture +10 (fine)
Clarity +15 (mid)
Dehaze   +30 (coarse)

Sharpening +33 
Color Noise Reduction +32

Vignette -29
Removed Chromatic Aberration


afk_strats ,

Looks great. Saw you at c/birding and gave you an upvote

afk_strats ,

Everything reminds me of her

afk_strats ,

For a glimpse into why Mike of Redlettermedia is being strangled:

https://youtu.be/OfJRm0WssOE

The joke is that Rogue One, a newer Star Wars movie, is a nostalgia shotgun blast with no substance.

The same channel went to great lengths to shred the Star Wars prequels (Eps I, II and III) under the name of Plinkett Reviews.

afk_strats ,

You should check out the LibRedirect Firefox addon. It does exactly what you're describing. You can set up multiple redirect destinations for all kinds of sites, and it's easy to turn on/off.

afk_strats ,

This is shocking to people who live in the suburbs. People in big cities are used to being around people and understand that they are "neighbors" to all people around them. Suburbanites are terrified of strangers and cities because they can't fathom not driving a 4-ton SUV to a parking lot as a precursor to anything in life.

Alright, y'all were right, fuck Proton. This was the last straw for me.

For context, in my password manager I had tried formatting some of my entries so that they would contain the usual username and password, but instead of creating whole new entries for the security questions for the same account, I just added additional fields in the same entry in order to keep things a little more tidy. ...

Sleazy Proton
afk_strats ,

Howdy. For the benefit of users such as myself, can you please clarify which "Proton" you're referring to?

afk_strats ,

Bummer.

afk_strats ,

Great spin, Bloomberg. You were very careful to only talk about "potential" and missing revenue targets when the real problem is that a bunch of grifters pretended they were on the absolute verge of AGI when, in fact, they were/are building advanced bullshit machines.

I will eat my words when a model can come up with an original thought

afk_strats ,

This framing still sucks. Google is blocking apps THEY don't approve on YOUR phone.

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task – MIT Media Lab ( www.media.mit.edu )

Edit: There's a blog post on the failings of the study and the communication around it: Clickbait Neuroscience: Lessons from “Your Brain on ChatGPT” – Mind Brain Education ...

afk_strats ,

That's a great observation!...

afk_strats ,

Broken systems elevate psychopath leaders into positions of wealth and power, and people who want those things exploit the fastest path there by getting degrees that put them on that track.

By this MBA logic, do we close CompSci for the poor code coming out of Microsoft, close law schools because social rights are being lost, engineering schools because infrastructure doesn't meet current needs?

My point is to blame the CEOs and their shitty behaviour, not the schools that, to my knowledge, try to teach reasonable policy, law, ethics, HR, etc.

Disclaimer: not an MBA

afk_strats ,

Here are some of the schools I know set the pace for Business education in the US. Feels like social responsibility is more than an afterthought.

Again, I'm not defending "the MBAs" running companies. I'm defending the schools.

https://www.hbs.edu/mba/academic-experience/curriculum

https://www.gsb.stanford.edu/programs/mba/why-stanford-mba

https://mitsloan.mit.edu/values/our-values

afk_strats ,

I think organizing labor is a useful skill. I just think doing it to the sole benefit of "shareholder value" is what's killing us. Is that liberal of me? I can't imagine a society where work isn't done by people and work needs some form of organization.

afk_strats ,

I also love bazzite but had to move to something else because I needed to mess with GPU installation. I looked around and landed on CachyOS. It has a lot of the same focus (performance, out of the box gaming, fast setup, Plasma) but it also lets you sudo mess with everything. Obviously, you'll be liable to break things more easily but it was a worthy change for me and my needs

afk_strats ,

ROCm on my 7900xt is solid.
ROCm on my MI50s (Vega) is a NIGHTMARE

afk_strats ,

You use Alpine as a daily driver? With a GUI? How does it do with hardware support?

afk_strats ,

This is basically meaningless. You can already run gpt-oss-120b across consumer-grade machines. In fact, I've done it with open source software under a proper open source licence, offline, at my house. It's called llama.cpp and it is one of the most popular projects on GitHub. It's the basis of Ollama, which Facebook coopted, and is the engine for LM Studio, a popular LLM app.

The only thing you need is around 64 GB of free RAM and you can serve gpt-oss-120b as an OpenAI-like API endpoint. VRAM is preferred, but llama.cpp can run in system RAM or on top of multiple different GPU addressing technologies. It has a built-in server which allows it to pool resources from multiple machines....
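For anyone curious, this is roughly what that looks like. A sketch, not my exact setup: the GGUF filename is a placeholder, and the right context size and -ngl value depend entirely on your hardware.

```
# Serve a GGUF quant of gpt-oss-120b with llama.cpp's built-in server.
# -ngl controls how many layers are offloaded to the GPU; whatever doesn't
# fit in VRAM stays in system RAM.
./llama-server \
  -m ./models/gpt-oss-120b-Q4_K_M.gguf \
  -c 8192 \
  -ngl 99 \
  --host 127.0.0.1 \
  --port 8080

# The server speaks an OpenAI-compatible API, so any client that expects
# that format can point at it:
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Summarize this document for me"}]}'
```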

I bet you could even do it over a series of high-ram phones in a network.

So I ask: is this novel, or is it an advertisement packaged as a press release?

afk_strats ,

I think you're missing the point or not understanding.

Let me see if I can clarify

What you're talking about is just running a model on consumer hardware with a GUI

The article talks about running models on consumer hardware. I am making the point that this is not a new concept. The GUI is optional but, as I mentioned, llama.cpp and other open source tools provide an OpenAI-compatible API just like the product described in the article.

We've been running models for a decade like that.

No. LLMs, as we know them, aren't that old; they were harder to run and required some coding knowledge and environment setup until 3-ish years ago, give or take, which is when these more polished tools started coming out.

Llama is just a simplified framework for end users using LLMs.

Ollama matches that description. Llama is a model family from Facebook. Llama.cpp, which is what I was talking about, is an inference and quantization tool suite made for efficient deployment on a variety of hardware including consumer hardware.

The article is essentially describing a map reduce system over a number of machines for model workloads, meaning it's batching the token work, distributing it up amongst a cluster, then combining the results into a coherent response.

Map reduce, in very simplified terms, means spreading compute work out to highly parallelized compute workers. This is, conceptually, how all LLMs are run at scale. You can't map reduce or parallelize LLMs any more than they already are. The article doesn't imply map reduce other than talking about using multiple computers.

They aren't talking about just running models as you're describing.

They don't talk about how the models are run in the article. But I know a tiny bit about how they're run. LLMs require very simple and consistent math computations on extremely large matrices of numbers. The bottleneck is almost always data transfer, not compute. Basically, every LLM deployment tool already tries to use as much parallelism as possible while reducing data transfer as much as possible.

The article talks about gpt-oss-120b, so we aren't talking about novel approaches to how the data is laid out or how the models are used. We're talking about transformer models and how they're huge and require a lot of data transfer. So, the preference is to keep your model on the fastest-transfer part of your machine. On consumer hardware, which was the key point of the article, you are best off keeping your model in your GPU's memory. If you can't, you'll run into bottlenecks with PCIe, RAM, and network transfer speed. But consumers don't have GPUs with 63+ GB of VRAM, which is how big gpt-oss-120b is, so they MUST contend with these speed bottlenecks. This article doesn't address that. That's what I'm talking about.
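To put very rough numbers on that (illustrative only, treating the model as if every weight has to be read once per generated token and ignoring MoE sparsity, KV cache, batching, and so on):

```
# Upper bounds on decode speed when you're purely memory-bandwidth limited:
#   tokens/sec <= bandwidth (GB/s) / bytes read per token (GB)
awk 'BEGIN {
  model_gb = 63    # quantized gpt-oss-120b weights
  printf "all in VRAM   (~900 GB/s): %5.1f tok/s\n", 900 / model_gb
  printf "system RAM     (~80 GB/s): %5.1f tok/s\n",  80 / model_gb
  printf "over PCIe 4.0  (~32 GB/s): %5.1f tok/s\n",  32 / model_gb
}'
```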

afk_strats ,

I still think AI is mostly a toy and a corporate inflation device. There are valid use cases but I don't think that's the majority of the bubble

  • For my personal use, I used it to learn how models work from a compute perspective. I've been interested and involved with natural language processing and sentiment analysis since before LLMs became a thing. Modern models are an evolution of that.
  • A small, consumer-grade model like gpt-oss-20b is around 13 GB and can run on a single mid-grade consumer GPU and maybe some RAM. It's capable of parsing and summarizing text, troubleshooting computer issues, and some basic coding or code review for personal use. I built some bash and Home Assistant automations for myself using these models as crutches. Also, there is software that can index text locally to help you have conversations with large documents. I use this with the documentation for my music keyboard, which is a nightmare to program, and with complex APIs.
  • A mid-size model like Nemotron3 30B is around 20 GB, can run on a larger consumer card (like my 7900xtx with 24 GB of VRAM, or two 5060 Tis with 16 GB of VRAM each), and will have vaguely the same usability as the small commercial models, like Gemini Flash or Claude Haiku. These can write better, more complex code. I also use these to help me organize personal notes. I dump everything in my brain to text and have the model give it structure.
  • A large model like GLM4.7 is around 150 GB and can do all the things ChatGPT or Gemini Pro can do, given web access and a pretty wrapper. This requires big RAM and some patience, or a lot of VRAM. There is software designed to run these larger models in RAM faster, namely ik_llama, but at this scale you're throwing money at AI.

I played around with image creation and there isn't anything there other than a toy for me. I take pictures with a camera.

afk_strats ,

It's open weight. I haven't been able to find the code or data to reproduce the model. It's licensed as MIT with an additional provision that the model name be displayed prominently in UIs.

afk_strats ,

Submitted in 2018. Does anyone know of any working implementations?

afk_strats ,

Pruning techniques have been tested and seem at least good at keeping transformer MoE models coherent.
https://doi.org/10.48550/arXiv.2510.13999

There are several working examples of REAP-pruned models on HuggingFace, and that method seems very good.

The OP paper suggests a technique which starts with arbitrarily structured experts pruned during training. I'm not 100% understanding it, but I still don't think I've seen this exact technique, which might be even more efficient.

afk_strats ,
  • Cream Theater
  • System of a Town
  • Go:jira
afk_strats ,

This isn't a bad theory. Just to add some color: one of these "low-end" cards goes for $7,500 USD right now. I certainly want one, but not for a penny over 2k.

2025, My Year of The Linux Desktop

As was actually rare at the time, I was born into a household which had a personal computer. For as long as I can remember, computers have fascinated me. They still do. But that fascination came with an increasingly adversarial relationship with Windows and a distrust of Apple. That changed in 2025, my first full year living with Linux as my ...

afk_strats OP ,

No. Not really. That was a brain fart. I use LibreOffice and OnlyOffice Desktop

afk_strats OP , (edited )

Maybe I do turn on too many things...

Edit:
To be clear, I edit thousands of raw photos per year and do so in bursts of hundreds. I kind of know what I want, so it's wholly possible that I used the wrong plugins. I know that was something I struggled with when I picked it up. There are 5 ways to do the same thing, the devs had a preference, the docs didn't tell me, and it wasn't clear what I was "supposed" to use by just using the application. Now... I could have probably gone and read changelogs and release notes, but that wasn't the way I was thinking at the time...

afk_strats OP ,

I have casually explored RawTherapee, which is probably viable, but I haven't spent as much time with it as I have in darktable.

afk_strats OP ,

For my job, I'm in the unfortunate position of having to use Teams, Zoom, AND Slack on a daily basis on my work-provided MacBook. They all suck in some way. I use Signal, Discord, and rarely Zoom on CachyOS with the same hardware (KVM switch) and it doesn't feel that different.

afk_strats OP ,

frfr

afk_strats OP ,

True. And make sure you post a sincere wall of text to a meme community.

afk_strats OP ,

Ok. This convinced me to give it another serious try

afk_strats OP ,

Totally. I wouldn't even have the conversation with someone just because. I think what I meant was: for a user like her, doing mostly web and some office tasks, Linux is perfectly suitable.

afk_strats ,

Purrito

afk_strats ,

Some of those transitions were 🔥🔥🔥

afk_strats ,

Source?

afk_strats ,

This entire channel is gold. Subbed

afk_strats ,

Thank you for posting these. Great stuff