Burn-in is the one big worry with OLED monitors. But evidence that it shouldn't be a deal breaker for gamers is approaching critical mass thanks to another long-term assessment released today. ...
TLDR: I took the plunge on an OLED TV in 2021 as a primary monitor and it's been incredible
I've been using an LG C1 48" OLED TV as my sole monitor for my full-time job, my photography, and gaming since the start of 2021. I think it's at around ~~3000~~ ~~4500~~ 8500 hours of screen time. It averages over 10 hours of on time per weekday.
It typically stays around ~~40~~ 90 brightness because ~~that's all I need~~ I now have a bright office; it's also fairly close to my face given the size. All of the burn-in protection features are on (auto dimming, burn-in protection, pixel rotation), but I have ~~Windows~~ my Mac set to never sleep for work reasons.
Burn-in has not been a thing. Sometimes I leave it on with a spreadsheet open or a photo being edited overnight because I'm dumb. High-brightness, high-contrast areas might leave a spot visible in certain grays, but by then the TV will ask me to "refresh pixels" and it'll be gone when I next turn the TV on. The taskbar has not burned in.
Update in 2026 at 8500+ hours: there is minor graininess in midtone, flat grays. It's not distracting or even a risk for sensitive photo work, but I can find it if I know where to look.
Experience for work, reading, dev: 8/10
Pros: screen real estate. One 48" monitor is roughly four 22" 1080p monitors tiled. The ergonomics are great. Text readability is very good, especially in dark mode.
Cons: sharing my full screen is annoying for others because it's so big. The camera has to be placed a bit higher than ideal, so I'm at a slightly-too-high angle in video conferences.
This is categorically a better working monitor than my previous cheap dual-4K setup, but text sharpness is not as good as a high-end LCD with retina-like density because of 1) the pixel density and 2) the subpixel layout on OLED, which is worse for text rendering. This has never been an issue for my working life.
Experience with photo and video editing: 10/10
Outside of dedicated professional monitors, which are extremely expensive, there is no better option for color reproduction and contrast. From what I've seen in the consumer sector, Apple monitors are maybe at this level, but at 4 or 5x the price.
Gaming: 10/10
2160p 120Hz HDR with ~3ms input lag, perfect contrast, and extremely good color reproduction.
FPSs feel really good.
Anything dark/horror pops
A lot of real estate for RTSs
Maybe flight sim would have benefited from a dual-monitor setup?
I've never had anything but a good gaming experience. I did have a 144Hz monitor before, and going to 120 IS marginally noticeable for me, but I don't think it's detrimental at the level I play (I suck)
Reviewers mentioned that it's good for consoles too, though I never bothered
Movies and TV: 10/10
4K HDR is better than theaters' picture quality in a dark room. Everything I've thrown on it has been great.
Final notes/recommendations
This is my third LG OLED and I've seen the picture quality dramatically increase over the years. Burn-in used to be a real issue and grays were trashed on my first OLED after about 1000 hours.
Unfortunately, I have to turn the TV on with the remote every time. It does automatically turn off from no signal after the computer's screen-sleep timer, which is a good feature.
There are open source programs which get around this.
This TV has never been connected to the Internet... I've learned my lesson with previous LG TVs. They spy, they get ads, they have horrendous privacy policies, and they have updates which kill performance or features... Just don't. Get a streaming box.
You need space for it, width and depth wise.
The price is high (around $1k USD on sale) but not compared with gaming monitors, and especially not compared with two gaming monitors.
Pixel rotation is noticeable when the entire screen shifts over a pixel or two. It also will mess with you if you have reference pixels at the edge of the screen. This can be turned off.
Burn-in protection is also noticeable on mostly static images. I wiggle my window if it gets in my way. This can also be turned off.
I have a TV which says it supports 4K 144Hz, and right now I run an old laptop as a media server/desktop on it, which can only handle 1080p. I want to switch to some NUC/mini-PC that runs a Linux desktop in 4K and plays media flawlessly. ...
You can look at manufacturers' info pages and see what they support. Intel integrated chips usually list their capabilities, and you'll want to double-check with your mini PC or motherboard manufacturer to make sure they support it too. I think any i5 or better from the past 5 years with integrated graphics should be able to play/decode 4K media (someone correct me if this sounds crazy). My Core Ultra 265 certainly can. As far as codec support, I'm not familiar with the compatibilities, but I'm sure everything CAN be played on recent-ish hardware. Encoding is out of my wheelhouse.
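On Linux specifically, one quick way to see what an iGPU can hardware-decode is `vainfo` from the libva-utils package; this is a generic sketch, not specific to any one chip:

```shell
# vainfo lists the VA-API profiles the GPU can hardware-decode;
# filter for the codecs that matter for 4K media. The fallback
# message fires if vainfo isn't installed or nothing matches.
vainfo 2>/dev/null | grep -iE 'hevc|av1|vp9' || echo "no vainfo output or no matching profiles"
```

Entries ending in `VAEntrypointVLD` mean hardware decode is available for that profile.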
I've used HDMI 2.1 HDR 4K120 on Linux with Nvidia, AMD, and integrated Intel. AMD is the best experience, especially on cards from the past 5 years. Nvidia, with proprietary drivers, on 3000 series or newer should be good for a few more years; I heard 2000 series will be dropped from support soonish. Intel HDMI 2.1 is a pain on Linux, and I've only been able to get HDR 4K120 using a special DP-to-HDMI cable.
Notepad++ works fine under Wine on Mac and Linux. After being away from it for a while, I realized I don't need it anymore. I would often use column-edit mode and recorded macros, but I just bash-script those now. I guess I'm a different person now?!?
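For what it's worth, the bash replacements are tiny. A hypothetical stand-in for Notepad++'s column-edit mode, rewriting one "column" of every line with awk:

```shell
# upper-case the second field of every line, the kind of column edit
# I used to do in Notepad++'s column mode
printf 'alice,dev\nbob,ops\n' | awk -F, 'BEGIN{OFS=","} {$2=toupper($2); print}'
# prints:
# alice,DEV
# bob,OPS
```

Recorded macros mostly turn into one-liners like this inside a script.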
First of all, this is an incredible shot!
Second, I think your edit is solid. I am posting my edit of your pic to show you my interpretation of what I would post. I'm not implying that I somehow corrected your work; rather, I went with my gut for what looks good to me.
The biggest change is the crop, which I chose because I wanted to highlight the fine textures captured and make sure they're viewable even on small phone screens.
I pumped contrast and saturation to my liking. I pushed blacks up and whites down to make sure I wasn't clipping.
In doing so, I started seeing a lot of color noise, so I corrected that. There was also chromatic aberration/fringing on the wings, which I removed.
I also added sharpening, and a slight vignette to draw the eye to the center.
Lightroom settings (easily reproducible in Darktable):
You should check out the LibRedirect Firefox add-on. It does exactly what you're describing. You can set up multiple redirect destinations for all kinds of sites, and it's easy to turn on/off
This is shocking to people who live in the suburbs. People in big cities are used to being around people and understand that they are "neighbors" to all people around them. Suburbanites are terrified of strangers and cities because they can't fathom not driving a 4-ton SUV to a parking lot as a precursor to anything in life.
For context, in my password manager I had tried formatting some of my entries so that each would contain the usual username and password, but instead of creating whole new entries for the security questions for the same account, I just added additional fields in the same entry to keep things a little tidier. ...
Great spin, Bloomberg. You were very careful to only talk about "potential" and missing revenue targets when the real problem is that a bunch of grifters pretended they were on the absolute verge of AGI when, in fact, they were/are building advanced bullshit machines.
I will eat my words when a model can come up with an original thought
Edit: There's a blog post on the failings of the study and the communication around it: Clickbait Neuroscience: Lessons from “Your Brain on ChatGPT” – Mind Brain Education ...
Broken systems elevate psychopath leaders into positions of wealth and power, and people who want those things exploit the fastest path there by getting degrees that put you on that track.
By this MBA logic, do we close CompSci programs for the poor code coming out of Microsoft, close law schools because social rights are being lost, or engineering schools because infrastructure doesn't meet current needs?
My point is to blame the CEOs and their shitty behaviour, not the schools that, to my knowledge, try to teach reasonable policy, law, ethics, HR, etc.
I think organizing labor is a useful skill. I just think doing it to the sole benefit of "shareholder value" is what's killing us. Is that liberal of me? I can't imagine a society where work isn't done by people and work needs some form of organization.
The number of issues I've been having with Bazzite, due to its immutability and my lack of experience, has finally reached the point that I'm no longer willing to deal with. ...
I also love Bazzite but had to move to something else because I needed to mess with GPU installation. I looked around and landed on CachyOS. It has a lot of the same focus (performance, out-of-the-box gaming, fast setup, Plasma), but it also lets you sudo-mess with everything. Obviously, you'll be liable to break things more easily, but it was a worthy change for me and my needs
I'm curious if anyone has had much luck leveraging older AMD hardware to use ROCm. I have a 6700 XT that I've just begun inquiring about, and it seems it falls outside of official support. ...
This is basically meaningless. You can already run gpt-oss-120b across consumer-grade machines. In fact, I've done it with open source software with a proper open source licence, offline, at my house. It's called llama.cpp, and it is one of the most popular projects on GitHub. It's the basis of ollama, which co-opted it, and is the engine for LM Studio, a popular LLM app.
The only thing you need is around 64 gigs of free RAM, and you can serve gpt-oss-120b as an OpenAI-like API endpoint. VRAM is preferred, but llama.cpp can run in system RAM or on top of multiple different GPU-addressing technologies. It has a built-in server which allows it to pool resources from multiple machines....
I bet you could even do it over a series of high-ram phones in a network.
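To be concrete, here's roughly what serving looks like with llama.cpp's built-in server; the model filename and port are placeholders, not anything from the article:

```shell
# serve a local GGUF model with an OpenAI-compatible API
llama-server -m ./gpt-oss-120b.gguf --host 127.0.0.1 --port 8080

# then query it like any OpenAI-style endpoint
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "hello"}]}'
```

Any client that speaks the OpenAI chat API can point at that URL.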
So I ask: is this novel, or is it an advertisement packaged as a press release?
I think you're missing the point or not understanding.
Let me see if I can clarify
What you're talking about is just running a model on consumer hardware with a GUI
The article talks about running models on consumer hardware. I am making the point that this is not a new concept. The GUI is optional but, as I mentioned, llama.cpp and other open source tools provide an OpenAI-compatible API just like the product described in the article.
We've been running models for a decade like that.
No. LLMs, as we know them, aren't that old, and they were harder to run and required some coding knowledge and environment setup until 3-ish years ago, give or take, when these more polished tools started coming out.
Llama is just a simplified framework for end users using LLMs.
Ollama matches that description. Llama is a model family from Facebook. Llama.cpp, which is what I was talking about, is an inference and quantization tool suite made for efficient deployment on a variety of hardware, including consumer hardware.
The article is essentially describing a map-reduce system over a number of machines for model workloads, meaning it's batching the token work, distributing it amongst a cluster, then combining the results into a coherent response.
Map reduce, in very simplified terms, means spreading compute work out to highly parallelized compute workers. This is, conceptually, how all LLMs are run at scale. You can't map-reduce or parallelize LLMs any more than they already are. The article doesn't imply map reduce other than talking about using multiple computers.
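A toy sketch of the map-reduce idea, just to illustrate the term (nothing to do with any particular LLM runtime):

```python
from functools import reduce

def map_chunk(chunk):
    # "map": each worker does independent work on its own chunk
    return sum(chunk)

def combine(a, b):
    # "reduce": merge partial results into one answer
    return a + b

chunks = [[1, 2], [3, 4], [5, 6]]           # could live on 3 machines
partials = [map_chunk(c) for c in chunks]   # the parallel part
print(reduce(combine, partials))            # -> 21
```

The whole point is that the map step has no cross-worker communication, which is exactly what a transformer's token-by-token decode can't avoid.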
They aren't talking about just running models as you're describing.
They don't talk about how the models are run in the article. But I know a tiny bit about how they're run. LLMs require very simple and consistent math computations on extremely large matrices of numbers. The bottleneck is almost always data transfer, not compute. Basically, every LLM deployment tool already tries to use as much parallelism as possible while reducing data transfer as much as possible.
The article talks about gpt-oss-120b, so we aren't talking about novel approaches to how the data is laid out or how the models are used. We're talking about transformer models and how they're huge and require a lot of data transfer. So the preference is to keep your model on the fastest-transfer part of your machine. On consumer hardware, which was the key point of the article, you are best off keeping your model in your GPU's memory. If you can't, you'll run into bottlenecks with PCIe, RAM, and network transfer speeds. But consumers don't have GPUs with 63+ GB of VRAM, which is how big gpt-oss-120b is, so they MUST contend with these speed bottlenecks. This article doesn't address that. That's what I'm talking about.
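A back-of-the-envelope way to see that bottleneck, with assumed bandwidth numbers rather than benchmarks: during single-user decode you re-read the weights for every token, so tokens/sec is capped at roughly bandwidth divided by bytes read per token (about the model size for a dense model; an MoE like gpt-oss-120b reads less per token since only a few experts activate):

```python
GB = 1e9

def max_tokens_per_sec(bytes_per_token: float, bandwidth: float) -> float:
    # upper bound for memory-bandwidth-bound decode
    return bandwidth / bytes_per_token

# 63 GB of weights on ~1 TB/s GPU VRAM vs ~60 GB/s dual-channel DDR5
print(round(max_tokens_per_sec(63 * GB, 1000 * GB), 1))  # -> 15.9
print(round(max_tokens_per_sec(63 * GB, 60 * GB), 1))    # -> 1.0
```

Same math, same model, ~16x difference just from where the weights live.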
I still think AI is mostly a toy and a corporate inflation device. There are valid use cases but I don't think that's the majority of the bubble
For my personal use, I used it to learn how models work from a compute perspective. I've been interested and involved with natural language processing and sentiment analysis since before LLMs became a thing. Modern models are an evolution of that.
A small, consumer-grade model like gpt-oss-20b is around 13GB and can run on a single mid-grade consumer GPU and maybe some RAM. It's capable of parsing and summarizing text, troubleshooting computer issues, and some basic coding or code review for personal use. I built some bash and Home Assistant automations for myself using these models as crutches. Also, there is software that can index text locally to help you have conversations with large documents. I use this with the documentation for my music keyboard, which is a nightmare to program, and with complex APIs.
A mid-size model like Nemotron3 30B is around 20GB, can run on a larger consumer card (like my 7900 XTX with 24GB of VRAM, or two 5060 Tis with 16GB of VRAM each), and will have vaguely the same usability as the small commercial models, like Gemini Flash or Claude Haiku. These can write better, more complex code. I also use these to help me organize personal notes. I dump everything in my brain to text and have the model give it structure.
A large model like GLM4.7 is around 150GB and can do all the things ChatGPT or Gemini Pro can do, given web access and a pretty wrapper. This requires a lot of RAM and some patience, or a lot of VRAM. There is software designed to run these larger models in RAM faster, namely ik_llama, but at this scale you're throwing money at AI.
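The sizes above roughly follow a rule of thumb (my approximation, not an official formula): on-disk size ≈ parameter count × bits per weight / 8, plus some overhead for embeddings and metadata:

```python
def approx_size_gb(params_billions: float, bits_per_weight: float) -> float:
    # billions of params * bits each, converted to gigabytes
    return params_billions * bits_per_weight / 8

print(approx_size_gb(117, 4.25))  # gpt-oss-120b at ~4-bit: ~62 GB
print(approx_size_gb(20, 4.25))   # a 20B model at ~4-bit: ~11 GB
```

That's why quantization matters so much: the same model at 8-bit would be nearly double the footprint.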
I played around with image creation and there isn't anything there other than a toy for me. I take pictures with a camera.
It's open weight. I haven't been able to find the code or data to reproduce the model. It's licensed as MIT with an additional provision that the model name be displayed prominently in UIs
There are several working examples of REAP-pruned models on HuggingFace, and that method seems very good.
The OP paper suggests a technique which starts with an arbitrary expert structure and prunes during training. I'm not 100% sure I understand it, but I still don't think I've seen this exact technique, which might be even more efficient
This isn't a bad theory. Just to add some color: one of these "low-end" cards goes for $7,500 USD right now. I certainly want one, but not for a penny over $2k
As was actually rare at the time, I was born into a household which had a personal computer. As long as I can remember, computers have fascinated me. They still do. But that fascination came with an increasingly adversarial relationship with Windows and a distrust of Apple. That changed in 2025, my first full year living with Linux as my ...
Edit:
To be clear, I edit thousands of raw photos per year and do so in bursts of hundreds. I kind of know what I want, so it's wholly possible that I used the wrong plugins. I know that was something I struggled with when I picked it up: there are 5 ways to do the same thing, the devs had a preference, the docs didn't tell me, and it wasn't clear what I was "supposed" to use just from using the application. Now... I could have probably gone and read changelogs and release notes, but that wasn't the way I was thinking at the time...
For my job, I'm in the unfortunate position of having to use Teams, Zoom, AND Slack on a daily basis on my work-provided MacBook. They all suck in some way. I use Signal, Discord, and rarely Zoom on CachyOS with the same hardware (KVM switch), and it doesn't feel that different
Totally. I wouldn't even have the conversation with someone just because. I think what I meant was, for a user like her, mostly web and some office tasks, Linux is perfectly suitable.
Call now.
TrumpRx Denounced as Corrupt Scheme to Line Pockets of Big Pharma—and Don Jr. | Common Dreams ( www.commondreams.org )
After 3,000 hours and two years another OLED gaming monitor burn-in assessment finds only minor panel damage ( www.pcgamer.com )
Linux 4K desktop/media server
Jungle
Notepad++ hijacked by state-sponsored hackers ( notepad-plus-plus.org )
What's an objectively terrible movie that you love anyway?
Looking for editing advice [OC]
Oly E-M1 @ 270mm, f/7.1, 1/400s, ISO-250. ...
Is it hot in here or what? 🥵
I may have gone a little far in some places.
Any open source browser extensions that add reminders or notes to certain websites?
Hi all, ...
Modern life
Alright, y'all were right, fuck Proton. This was the last straw for me.
A Guide to the Circular Deals Underpinning the AI Boom | A web of interlinked investments raises the risk of cascading losses if AI falls short of its potential. ( www.bloomberg.com )
Android won't kill sideloading after all, but new verification rules will make it harder ( www.techspot.com )
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task – MIT Media Lab ( www.media.mit.edu )
Spong Berb Adventures #6
Majority of CEOs report zero payoff from AI splurge ( www.theregister.com )
Bazzite but mutable?
ROCm on older generation AMD gpu
Guys, what's the best Linux distro to install on my PC?
cross-posted from: ...
Researchers figured out how to run a 120-billion parameter model across four regular desktop PCs ( actu.epfl.ch )
MiniMax M2.1 is open source: SOTA for real-world dev workflows and agentic systems ( huggingface.co )
The Lottery Ticket Hypothesis: finding sparse trainable NNs with 90% less params [2018] ( arxiv.org )
cross-posted from: https://lemmy.bestiver.se/post/844165 ...
Help is needed ( feddit.org )
NVIDIA is preparing to add native Linux support to GeForce NOW according to VideoCardz.com ( videocardz.com )
2025, My Year of The Linux Desktop
alhamdulillah he will be baked soon 🙏🙏
using a binder clip as a spring instead of 3d printing one ( youtu.be )
Very clever.
We're just friends!
Jingle Cats on VHS ( youtu.be )
I have been waiting with bated breath for the full upload of this video. It's more beautiful than I could imagine.
Nighttime exposure to light may raise cardiovascular risk by up to 50% ( news.harvard.edu )
Glimpses - my short photo essay. Quiet poetry of the everyday.
All the photos - https://mtk.bearblog.dev/glimpses/ ...