Cognitive Offloading Test.

I’m always afraid that my use of AI will make me dumber. That being the case, I’ve tried to create a way to challenge my own thinking. Unfortunately, it works by using AI to conduct thought experiments. How contradictory is that? The idea is to have AI give you a one-sentence idea to solve or explain. The goal? Not parroting back what is generally known, but pushing to make sense of whatever the prompt is. Perhaps this entire process is cognitive offloading at its finest. Below is the “test”.

Chatbot used? Copilot.

Prompt:

Short form video is making people less intelligent.

Me: Not necessarily. It could be conditionally anchored to the individual and not an entire demographic. What short form video likely does, more than anything, is create shorter attention spans. Which could create cognitive leakage. For example, with a long form video you get the opportunity to learn a lot of things over a long duration of time.

Because of how long you’ve been processing the video, you either absorb what is interesting or you zone out several times along the way. With short form, it could warp the way you process information. It’s short, so you might literally adjust how you absorb what is being seen and/or heard on the screen. Even as I write this, I feel my brain drifting away from the core idea. This is what happens in real time to everyone.

We also have a concentration mechanism in our brains that is limited to high-level focus. Once that focus wears off, one must intentionally shift focus back to the target. This takes energy in the form of neural activity, as well as actual energy expenditure and consumption. Hence the idea that thinking is hard work. As far as the conditional component, some individuals who watch these videos are watching for different reasons.

This may in turn cause various cognitive functions to engage differently. In other words, your prefrontal cortex, or amygdala, or hippocampus (for the sake of sounding smart, HA!) is going to process, absorb, and interpret the information differently from the next person. A person with low levels of cognitive processing skills will simply look at the video as entertainment. A smarter individual will already be thinking: why am I watching this? What is the topic?

How long is the video, so I know what to focus on? Is there a premise, a story, or a theme? If so, where does it appear, and how does it relate to what I need to understand the video for? This was an overly complicated way to say: some watch videos as they appear. As videos.

Others watch videos with more of an auditory awareness. So you might literally watch the screen, as a low level cognitive tendency. You might look away from time to time, at a higher level of cognition, to process what is actually being heard, versus seen. So the question is specific, but the dynamics vary greatly. Plus I don’t have any empirical evidence to support anything I’m saying.

Unless you intentionally, push back against the video and interpret or interrogate it, you are susceptible to decreasing your intellectual capacity to assess, absorb, and interpret information via short form video. 

Okay that sounded like a PBS special. So cringe.

Capex vs Tech-Sec.

The tech sector is looking for an easy way out. What would lead me to think this is the case? AI capex becoming not only a buzzword but an unstable systemic reality. Now what the hell is capex? In layman’s terms, it’s capital expenditures.

It’s the money a business spends to create, improve, or maintain its operations and assets. Without any capex, all you have is an idea stuck in your head or written on a piece of paper. Assuming you spent capex to buy a pen and some notebook paper. Technology will always be one of the most important elements of a sustainable future. However, it costs money, and requires time and resources, to keep that future functioning efficiently.
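For anyone curious about the accounting mechanics behind that definition: analysts commonly estimate a company’s capex from its financial statements as the change in property, plant & equipment (PP&E) over the period, plus depreciation. A minimal sketch in Python, using entirely made-up numbers:

```python
def estimate_capex(ppe_begin: float, ppe_end: float, depreciation: float) -> float:
    """Estimate capital expenditures from balance-sheet figures.

    Standard approximation: CapEx = (ending PP&E - beginning PP&E) + depreciation.
    The depreciation term adds back the book value lost during the period,
    so growth in assets isn't understated.
    """
    return (ppe_end - ppe_begin) + depreciation

# Hypothetical company: PP&E grew from $500M to $620M while $80M depreciated.
print(estimate_capex(500, 620, 80))  # 200.0 (millions spent on capex)
```

So a company whose asset base merely *looks* flat may still be spending heavily on capex just to offset depreciation, which is part of why AI infrastructure spending can balloon quietly.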

If you simply withdraw all of your available cash and buy every piece of technological equipment you can afford, you’ll wind up going broke trying to sell it to equally broke, if interested, buyers. Since we operate within a democracy, we afford ourselves the paradoxical convenience of using borrowed money to help fund what becomes unsustainable for the system. More gasoline doesn’t extinguish fires. Historically, humans have always found creative and innovative ways to make life more convenient. This is why technological advancement has generally led to better outcomes.

We’ve managed to travel faster, further, and more often than we did during the horseback riding era. We have also been able to communicate more frequently, without the need to be in the same room, while creating the illusion of a more connected world. We no longer have to leave the house for groceries. We have thousands of movies to watch at the touch of a button, while never having to touch a single movie theater seat. It’s astounding.

Yet like a typical American citizen, we always manage to sidestep responsibility. Artificial intelligence, as of 2026, feels less like a bubble and more like widespread cognitive offloading coupled with intensified FOMO and trend following. In other words, we don’t want to think anymore; we just want utopian systems to run autonomously while we enjoy some sort of carefree, perpetual, vacation-like existence. Now let’s put it in plain language. As humans, we find ourselves trying to make life as easy as possible.

This means using tools to create less stressful environments and fewer required tasks, yielding the same or better outcomes. Unfortunately, we forget we’re still humans and must maintain responsibility and accountability for how we run society. Using a bunch of large language models, or LLMs, to do our dirty work has its limitations. Forcing new boundaries creates underlying risks that must be addressed now or later. We are likely to reach a tipping point where buying more equipment, and investing in more infrastructure, to automate nearly every aspect of life is going to cost us more than we’re ready to bargain for.

Will there be winners? There always are. Unfortunately, the losers might outnumber the winners, leaving the winners the responsibility of taking care of those who don’t survive the tech-pocalypse.

One without the Other.

The question I’d ask today: How do you get ahead of the AI revolution without the knowledge and understanding of how to code?

It seems counterintuitive to get ahead without even knowing the basics of software development.

Artificial Cognitive Decline.

I’ve been trying to carve out more time to stare at walls. Why? Because every day we are faced with screens, ads, and a bunch of unnecessary distractions. As artificial intelligence continues to get overhyped, we’re facing the possibility of diminished cognitive abilities.

If you have a piece of technology that can do the thinking for you, your brain is going to go into autopilot mode. Ergo you will eventually lose your ability to reason, interpret, decipher, question, and solve problems critically. Minimizing screen time, increasing reading time via physical books, and exercising outdoors are better options for maintaining some form of cognitive function. 

The Rise of AI & The Fall of Human Intelligence.

If technology advances enough, people will have too much convenience. This will lead to more mental laziness. People will stop thinking for themselves, because AI will do it for them. If you can’t think critically, you can’t make sound decisions. The people assigned to make those decisions for you will have no intention of helping you. You’ll get dumber over time. Life will turn into the scene in I, Robot with Will Smith, when every robot’s chest-light turns red. The robots end up locking people in their homes for safety reasons and using force when the people try to escape. It might not play out like that in real life, but from a neurological perspective, it’s already happening.

Artificial Boundaries.

Artificial intelligence might not get as far as we’re willing to let it get. Since humans are at the helm of its development, ethical standards remain intact. Otherwise, what is beyond intelligence? Perhaps the safety and security of the human race. Since AI is a technological tool, the protection and privacy of our most precious data is what’s at stake.

A worst-case scenario is self-sustaining, evolving AI models that collect our data and use it against our ignorance as a species. However, this always sounds like a fictional scene in a movie, until it’s at the edge of our reality. It bears reiterating that these tools are just that: tools for technological convenience that advance civilization.

The Artificial Glass House.

While technology will continue to advance over time, it carries the possibility of a regression to the mean. This may be the case because of human interaction and interference with technological progress. A simple example is artificial intelligence that is programmed by humans, with humans in mind. The idea of machine learning independently advancing outside of human control is far too risky for its developers to allow outright. There will always be ethical parameters (technological and political) around the edges of what humans are willing to accept from these self-serving technological advances.

Artificially Optimistic Insinuation.

Artificial intelligence is not as intelligent as we might assume it is. Example A: it’s programmed by flawed humans, which means those flaws become exacerbated throughout the system. Example B: the flawed design creates automated responses, and this lack of self-correction triggers artificially inflated optimism.

This optimism is designed to increase consumer usage. In other words, if you’re unable to spot your own irrationality, artificial intelligence may incessantly stroke your ego, creating perpetual interactive dissonance. Just because you receive positive feedback doesn’t necessarily imply legitimate capacity or aptitude on the user’s part.

A.I. = Monthly stipend.

If artificial intelligence is as useful and effective as it seems, decades from now, universal basic income should probably be implemented.