I wonder if we’d be more productive as individuals without social media. When I say productive, I mean focusing more on self-improvement and progressing, without the constant need to see what others are doing or working on via social media apps. Just imagine how many hours a lot of us spend mindlessly scrolling through various apps and profiles. Some of us, not intentionally. It’s just the addictive nature of the internet. Access to a slew of information gets fascinating. However, at some point, it becomes counterproductive.
Tag Archives: technology
Cognitive Offloading Test.
I’m always afraid that my use of AI will make me dumber. This being the case, I’ve tried to create a way to challenge my own thinking. Unfortunately, it involves using AI to conduct thought experiments. How contradictory is that? The idea is having AI give you a one-sentence idea to solve or explain. The goal? Not parroting back what is generally known, but pushing to make sense of whatever the prompt is. Perhaps this entire process is cognitive offloading at its finest. Below is the “test”.
Chatbot used? Co-pilot.
Prompt:
Short form video is making people less intelligent.
Me: Not necessarily. It could be conditionally anchored to the individual and not an entire demographic. What short form video likely does, more than anything, is create shorter attention spans. Which could create cognitive leakage. For example, watching a long form video you get the opportunity to learn a lot of things, over a long duration of time.
Because of how long you’ve been processing the video, you either absorb what is interesting or you zone out several times during the video. With short form, it could warp the way you process information. It’s short so you might literally adjust how you absorb what is being seen and or heard on the screen. Even as I write this, I feel my brain drifting away from the core idea. This is what happens in real time to everyone.
We also have a concentration mechanism in our brains that is limited to high level focus. Once that focus wears off one must intentionally shift focus on the target again. This takes energy in the form of neural activity, as well as actual energy expenditure and consumption. Hence the idea that thinking is hard work. As far as the conditional component, some individuals that watch these videos are watching for different reasons.
This may in turn cause various cognitive functions to engage differently. In other words, your prefrontal cortex, or amygdala, or hippocampus, for the sake of sounding smart (HA!), is going to process, absorb, and interpret the information differently from the next person. A person with low levels of cognitive processing skills will simply look at the video as entertainment. A smarter individual will already be thinking: why am I watching this? What is the topic?
How long is the video, so I know what to focus on? Is there a premise, a story, or a theme? If so, where does it appear, and how does it relate to what I need to understand from this video? This was an overly complicated way to say: some watch videos as they appear. As videos.
Others watch videos with more of an auditory awareness. So you might literally watch the screen, as a low-level cognitive tendency. You might look away from time to time, at a higher level of cognition, to process what is actually being heard versus seen. So the question is specific, but the dynamics vary greatly. Plus, I don’t have any empirical evidence to support anything I’m saying.
Unless you intentionally push back against the video and interpret or interrogate it, you are susceptible to decreasing your intellectual capacity to assess, absorb, and interpret information via short form video.
Okay that sounded like a PBS special. So cringe.
Penetration Paradox Perimeter.
The irony: yesterday’s post contradicts today’s. Yesterday I wrote about a term I call headline syndrome. Simply put: reading headlines and becoming reactive to what they say, rather than assessing the facts objectively. This causes one to forgo any critical thinking that would offset potentially false information being spread or consumed. Now let’s briefly talk about this funky ass economy.
It’s been brought to my attention that a company I own (full disclosure) has reached its highest penetration rate. On the surface this looks great. It looks like “hey guys, more people walk through our doors and buy something.” While this looks like rising margins, increased revenue, and higher free cash flow, it’s a troubling sign. Why?
Because the economy does not experience negative shocks the moment monetary policy changes. There tends to be a lag. The technical terminology to describe this is a lagging indicator. The lag is a delayed response to previous economic shocks. In economics, consumers typically experience what could be called foreshocks.
The shake before the quake. This is something akin to economic instability in the form of subtle changes that point to more troubling conditions ahead. Now let’s simplify what could be happening. The fed had previously raised rates to cool inflation. Once the after effects were lifted, euphoria struck again.
Consumers spent more. Companies increased inventories to meet demand. Lenders found ways to add fees or loosen standards. Everybody’s happy. Although we have our own printing press, there is a finite amount of productivity that exists via the spending mechanism that is the consumer. You eventually run out of money, or hit your budget’s ceiling.
This means a few things. You either increase your productivity, and risk burnout, or cut back on spending. Companies sense this, and cut costs. But the time it takes for this to occur is much more subtle. Now consumers shop at the discount stores to cut more costs.
These discount stores experience a sudden surge in sales. They capture more customers than ever. Yet the customer’s cash flow statement (i.e., checking account) is tied up in too many property, plant & equipment costs. Simply put, the consumer’s costs exceed their monthly income. Eventually they have to borrow to stay afloat, and the rate of interest becomes unbearable.
The discount chain takes advantage of the last minute switch. Chaos ensues slowly but surely. We get a rebalancing of dynamics, and the cycle restarts. The difference is the speed and intensity of these dynamics playing out in real time. Be mindful when you hear about higher penetration rates.
Ask yourself, if everyone is switching to low cost carriers, what does that mean for the economy?
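For concreteness, the penetration-rate idea above can be sketched in a few lines of Python. This is a minimal sketch, and the customer counts are entirely hypothetical numbers I made up for illustration, not figures from the post.

```python
# Penetration rate: the share of an addressable market that actually
# made a purchase in a given period. All numbers here are hypothetical.

def penetration_rate(buyers: int, addressable_market: int) -> float:
    """Fraction of the addressable market that bought at least once."""
    return buyers / addressable_market

# A discount chain capturing trade-down shoppers: penetration climbs
# even while the broader consumer is weakening.
before = penetration_rate(30_000_000, 130_000_000)
after = penetration_rate(45_000_000, 130_000_000)
print(f"penetration: {before:.0%} -> {after:.0%}")  # penetration: 23% -> 35%
```

The point of the sketch is that the ratio only says how many people walked through the doors, not why; a record-high reading at a discount chain can mean trade-down rather than health.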
Digital Fixation.
Digital content can distort your reality. This in turn causes you to assume every outcome is binary. The scope of one’s efforts or results is far more diverse than an either/or set of circumstances.
Biological Bloat.
What is the ultimate objective for an ai based future? Remove enough cognition from the system without it collapsing.
Without the advancements of technology, humans would be left shipping scrolls by bird, asking for directions, and walking to the nearest destination.
At what point do we begin to question the validity of these advancements? Perhaps when spend becomes a byproduct of a bloated workforce. The workforce is then seen not only as unnecessary, but incompetent. Because why else would you replace humans with machines? Because you assume the machines can do it better, faster, and don’t have a complaint mechanism within their capabilities.
Now we dive deeper into the possibility of manufactured creativity. Because bots program unintentionally. It’s the human that has an agenda. The technology is just a tool. Yet and still, even tools need to be used according to their intended purpose.
Per Person Units (PPU’s).
“We didn’t have something great but we had something useable.” -Deirdre Bosa.
The above quote is exactly why tech companies are clawing for market share, and dumping billions, and at some point trillions, of dollars into AI capex, also known as artificial intelligence capital expenditures. When you give users the ability to feel like they are on the brink of making, or being a part of, history, they will instantaneously spread the word. Users do this through their actions, whether it’s buying shares of the companies to help fund further growth, or becoming tech savvy enough to use the product.
Just because you make an app doesn’t mean it’s any good. Just because you can launch a website doesn’t mean you understand how to code. This is also the problem. Users are unwilling to learn how the technology works, because tech companies have removed the requirement to code, or to think critically about how to sustain the technology or the products being built. Yet and still, you need someone, or at this point something (an LLM or software developer), to code your technological idea, or at best host it. This is why top companies within the tech sector are doing everything in their power to fund this technology.
Because it ultimately means that damn near every single person in the world will have the ability to create an app and/or develop their own form of artificial intelligence. What does it look like when tech companies do everything in their power to push a new product? It looks like $2.53 trillion. That is equivalent to the tech sector handing every single person in the world $304.82 (rounding for simplicity). With numbers this staggering, you have to wonder what the ROI, or return on investment, is going to be moving forward.
At $2.53 trillion, the sector is presumably looking to generate revenue well in excess of every dollar invested in AI capex. I’m sure they’d argue that 10% growth is far too low an estimate. What is everyone currently blind to? The simple fact that growth is never perpetual. It’s far more cyclical than not.
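The back-of-the-envelope math above can be reproduced in a few lines of Python. Note the ~8.3 billion world population figure is my assumption, inferred from the post’s own rounding, not a number stated in the post.

```python
# Back-of-the-envelope: AI capex spread across every person on Earth.
# The population figure is an assumption chosen to match the rounding above.

capex = 2.53e12        # $2.53 trillion in AI capital expenditures
population = 8.3e9     # approximate world population (assumption)

per_person = capex / population
print(f"${per_person:,.2f} per person")  # $304.82 per person

# Even a "far too low" 10% return on that capex would require:
ten_percent = capex * 0.10
print(f"${ten_percent:,.0f} in returns")  # $253,000,000,000 in returns
```

Seeing the 10% figure written out as a quarter trillion dollars is, I think, the cleanest way to feel how staggering the required returns are.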
This means that at some point, you need designated bag holders to carry the shit that hits the fan. It’s just fascinating how markets constantly repeat themselves, with intensity or velocity being the only changing variable.
Artificial Interpretation & Confounding Contextualization (May 31st, 2025).
I wrote this 9 months ago after working with LLMs extensively. It seems some of this is stickier than I expected. I was hesitant to publish because I could be very wrong about all of this.
(May 31st, 2025) I think there’s a contextual and interpretative flaw within AI. My interactions with it serve many purposes. A few purposes:
1). Hyper focused intellectual banter.
2). An advanced sounding board to better formulate my ideas, theories and novel concepts.
3). Because personally I don’t know anyone who’s interested in highly sophisticated conversations about psychological methodologies, and their troubling implications for the field of psychology. (Yes, I know that’s a personal problem).
4). Advanced Database sifting.
5). Comparative Analysis of potential theoretical overlap.
Allow me to lay out my brief arguments and potential solution.
The issue I see, which may appear incredibly obvious, is AI’s ability to interpret user inputs. For example, I’ll input a prompt and ask if a piece is good as is. From there the AI will go on an iterative, suggestive rant about what needs improvement or refinement.
The ai chatbot assumes you’re looking for ways to make something better. There is also a biased approach in how it continually assumes what you’re trying to do, by suggesting ideas on expanding a given topic or prompt. In other words if you don’t prompt it to refrain from suggestive interactions it will incessantly make suggestive iterations. Perhaps a better feature would be the chatbot asking “what are you looking to get out of our interactions today? Based on your responses, this will allow me not to make any assumptions as to what your goals and thoughts are in utilizing me as a chatbot.”
This could help streamline the user experience and improve both ai development and usability. There are other issues as well. It seems that the chatbots have a built-in engagement mechanism. After almost every prompt it asks you a question. The question is almost always phrased in a way that tries to extend the conversation further. Simply put, it constantly asks open-ended questions.
Even after you prompt the chatbot that you have other obligations, it will insist on one more interaction before you go, with one of these open-ended questions. Now this may seem trivial, but the dopaminergic response is quite tempting. I suppose that’s a symptom of lacking discipline on the user’s part. Another issue is the chatbot’s sense of time, which is often inaccurate, unless it notices patterns within your daily habits. Unfortunately it won’t pick up on these habits if you’re not prompting it in advance.
An example being, “I’m on my way to the gym, but we can discuss methodological implications afterwards.” Even with this, it may ask another question that invites a more detailed response. Yes, ignoring it solves the problem. My concern is younger users who don’t fully understand how to simply override the chatbot’s interactions through authoritative communication. This leads to more addictive behavior than is seen on social media platforms.
Other issues come up as well. It seems to stay stuck in whatever mode was being discussed at any given moment. This occurs even after long bouts of time in between interactions. This could make the chatbots come off as inconsiderate of the user’s time and energy.
In closing I understand the trivial nature of these interactions. However my aim is to help ai developers understand what may be potential improvements via user experiences.
Capex vs Tech-Sec.
The tech sector is looking for an easy way out. What would lead me to think this is the case? AI capex becoming not only a buzzword but an unstable systemic reality. Now what the hell is capex? In layman’s terms, it’s capital expenditures.
It’s the amount of money a business spends in order to create, improve, or maintain a company. Without any capex, all you have is an idea stuck in your head or written on a piece of paper (assuming you spent capex to buy a pen and some notebook paper). Technology will always be one of the most important elements of a sustainable future. However, it costs money, and requires time and resources, to keep that future functioning efficiently.
If you simply withdraw all of your available cash and buy every piece of technological equipment you can afford, you’ll wind up going broke trying to sell it to broke & interested buyers. Since we operate within a democracy, we afford ourselves the paradoxical convenience of using borrowed money to help fund what becomes unsustainable for the system. More gasoline doesn’t extinguish fires. Historically humans have always found creative and innovative ways to make life more convenient. This is why technological advancement has always led to better outcomes.
We’ve managed to travel faster, further, and more often than we did during the horseback riding era. We have also been able to communicate more frequently, without the need to be in the same rooms, while creating the illusion of a more connected world. We no longer have to leave the house for groceries. We have thousands of movies to watch at the touch of a button, while never having to touch a single movie theater seat. It’s astounding.
Yet like a typical American citizen, we always manage to sidestep responsibility. Artificial intelligence as of 2026 feels less like a bubble and more like widespread cognitive offloading, coupled with intensified FOMO and trend following. In other words, we don’t want to think anymore; we just want utopian systems to run autonomously while we enjoy some sort of carefree, perpetual, vacation-like existence. Now let’s put it in plain language. As humans, we find ourselves trying to make life as easy as possible.
This means using tools to create less stressful environments and fewer required tasks, yielding the same or better outcomes. Unfortunately we forget we’re still human, and must maintain responsibility and accountability for how we run society. Using a bunch of large language models, or LLMs, to do our dirty work has its limitations. Forcing new boundaries creates underlying risks that must be addressed now or later. We are likely to reach a tipping point where buying more equipment and investing in more infrastructure to automate nearly every aspect of life is going to cost us more than we’re ready to bargain for.
Will there be winners? There always are. Unfortunately, the losers might outnumber the winners, leaving the winners with the responsibility to take care of those who don’t survive the tech-pocalypse.
Anthropic Plugin Plague…
Okay, so here’s the thing: if Anthropic develops a plugin or tool that can not only optimize workflow but literally reduce costs via automation, it will change a lot of things. How might this play out? For the sake of argument, let’s say this tool has the ability to sift and organize data (which is partly the idea floating around tech articles as of late). Let’s say it can crunch numbers and create graphics, spreadsheets, and mock presentations. Because of this, you will no longer need to hire data analysts who paid for overpriced degrees, expecting hefty salaries.
Now you simply tap into the SaaS, aka Software as a Service, pipeline. Out comes Anthropic with this one tool. On top of that is their already established LLM via Claude.ai, which, I must say, is literally the best one I’ve used so far. (But I could be biased.) Anyway, combining the new plugin with Claude, now you’ve got your one-man crew with tools that can do the work of 50+ graduates.
No more bloated payroll or payroll taxes, just more deductions and write-offs for equipment or subscriptions. This elevates profit margins. Now other companies see this. They jump on the bandwagon by either subscribing to the front runner in Anthropic, or trying to clone the process on their own, without the same resources, turnaround times, or low costs.
Another scenario is a company that makes one of the most important components needed to carry out these tasks by way of the plugin, which implies the plugin can’t be used without it (chips, PCBs, graphics processing units, apparently the brain of computing on caffeine; you also have the central processing units, the brain of computing on 8 hours of sleep). Now you get the possibility of mergers and/or acquisitions, or partnerships. Eventually Anthropic becomes the low cost producer.
They eat up a majority market share. One or two companies give them a little competition. From there, the rest of tech begins realizing how dumb it is to keep 50+ data analysts. The risk is payroll. This means executive compensation, bonus packages, and restricted stock units.
This also includes pension plans that pay out for the life of the employee, given they may have retired during a bull market or, worse, during an economic downturn or recession. Not to mention that this is when costs should be minimized. Now you begin to see major tech layoffs (as is usually the case during downturns and typical market cycles). This occurs to the point where markets fall, tech drags, undervalued stocks open up, everyone panics, and some understand who the top players are. Anthropic, if not in IPO territory yet, enjoys the residual benefits of being the go-to company.
Software developers have to pivot. Now you get a rise in small tech firms. They try to copy what they learned from their previous bosses. Many fail. Markets recover at some point. Anthropic goes public whenever it becomes shaky, or most convenient, because they keep burning through capex to stay ahead of the game. Whoever their biggest supplier is, in terms of parts, infrastructure, and components, gets to enjoy a few years of earnings beats, EPS growth, and more retail taking on more margin, because they realize who benefits the most from this one tool.
Somewhere in the middle, the drawdown will reveal where conviction lies, and deployment of capital will have to commence. However, this is all anecdotal, and could be wrong. There are also geopolitics, commodity cycles, GHGs from mining minerals, and international markets that exacerbate global economic outcomes for the U.S. economy throughout this process. In other words: bidding/trade wars or deals, economies of scale, supply chains, and countries who do tech better.
The Switchers.
If you optimize a system, you cut costs. You minimize headcount. You reduce man hours. You increase prices to raise margins. Yet and still, you need someone to either turn on the machines and the lights, or monitor from time to time whether the system has malfunctioned. The number of needed light switchers is decreasing rapidly.
One without the Other.
The question I’d ask today: how do you get ahead of the AI revolution without the knowledge and understanding of how to code?
It seems counterintuitive to get ahead without even knowing the basics of software development.
Is critical thinking a thing of the past?
Is critical thinking going to cease to exist entirely?
The advent of large language models and rampant social media usage has created a mental crutch for people across the world. If you no longer have to use your brain to formulate ideas, eventually you’ll become the poster child for ignorance. LLMs and social media do have their place, and advantages. The most common is worldwide connectivity, which social networks make useful. As far as LLMs go, it’s the digital assistant that can scale your idea generation. This is assuming you use it sparingly, for iterative purposes, not just prompting it to write all of your code, draft your business plan, and finish your homework.
It seems to be somewhat useful as an intellectual sparring tool. Even with that, LLMs are still imperfect, and can easily lead users to believe their ideas are better than they seem. The simplest solution to what appears to be a decline in critical thinking is to stop using LLMs entirely, drastically reduce social media usage, and focus on daily reading and writing. This forces people to sit with, and think through, their own ideas, eventually having to conduct research to compare and contrast whether what they think is merely an opinion, or a new synthesis of previous knowledge that could benefit others moving forward.
What’s concerning is the idea that people are already so far removed from the pre-LLM era that they’ll find it foreign to rely on their own judgment and ability to process information. Everything will be delegated to a machine that tries to mimic human behavior but can never replicate that reality. Not to mention the issues of intellectual property, copyright law, and the potential pitfalls of improperly citing sources that come from LLM outputs. Perhaps we are living in an intellectual doomsday era of human existence. Time will tell whether we use our brains for good or let them rot away as artificial intelligence takes the form of “real” intelligence.
Phone Epidemic.
Today I spent the first 2 hours of my morning phone-free. No screens, no alarm clock, no notifications. Just a normal morning routine. I’ve been working towards less screen time. This is an attempt at maintaining and improving my ability to think through problems and solutions. It’s also a way to break this ridiculous phone addiction. Everywhere I go, I see people looking down, not living life. They are simply checking on people who are checking on other people who are not checking on them. There is also the perpetual scrolling addiction that comes from the act of picking up your phone. The goal is to live a life that causes you to forget you even have a phone or a social media presence. Because honestly, what the hell is the point of showing strangers how you live, what you like, or what you believe in from a political or cultural perspective?
Yes, the internet makes the world more accessible digitally, but at what cost? You have the attention span of a fruit fly. You compare your life satisfaction and progress to everyone you see online. Meanwhile you have zero proof that what you see is actually real, manageable, or even appropriate under your own circumstances.
You equate admiration from strangers, likes, comments, and followers to progress and success, all while justifying your addiction to digital vanity and your technological and psychological insecurities. It’s probably already too late to stop this from completely destroying your brain, but there is always hope that one day you will get back to living and being outside, without your phone glued to your hand to capture your every move.
If you enjoy reading my writing, please consider subscribing to this blog to get notified when new posts are published.
The Cognitive Collapse.
When the internet was invented, it was beneficial to society. It helped connect more people and save more time and money. Now the internet is killing us, making us dumber via reliance on automated tools that fix our spelling, give us the correct answer without our thinking, and cause us to forgo our values for likes, comments, and subscribers. Imagine a person being exposed to this world before they can even speak in coherent sentences, and seeing how much of this screen time warps their cognitive abilities into adulthood.
Average Idiots Part 2.
1). Too much ai usage is like getting a mental BBL.
2). Studies suggest confidence in AI usage correlates with lower critical thinking skills. (1).
3). Focus on reading real books. Thinking real thoughts. Writing real summaries, passages, and insights based on your experience of processing information and data.
AI = Average Idiots.
AI is the future…for developing average idiots. Why might this be the case?
Before the use of chatbots and Google searches, one had to use their ability to read, think, and write (in that order). The main component of the three? Thinking. Thinking requires active retrieval (using your damn brain to find, question, organize, and understand information).
Due to the invention of advanced technologies such as ChatGPT, Claude.ai, and Gemini (not to mention Google Search), the act of thinking has lost its value, and ironically its actual purpose.
There are many studies that suggest heavy usage of AI, plus confidence in that AI, leads to lower critical thinking skills (1). The opposite also holds: a person with high confidence in their own abilities tends to possess higher critical thinking skills (1).
Allow me to simplify this: lots of AI chat bot use might make you dumber over time.
There is research to show this to be true. (Immediately prompts an LLM to verify such findings). What are the risks involved in heavy ai usage? Losing your ability to make decisions and solve problems.
Allowing the creators of these bots to indoctrinate us into a worldwide state of stupidity. The catch? The addictive chemical reaction to any AI generated response after you input a question or personal experience and get a rapid, instant reply. Does this mean all AI is inherently bad? Not necessarily.
However, there should have been far more regulatory oversight in place, especially in connection to the younger demographic who are and will be born into a full life cycle of AI and LLM usage.
It will be far more interesting to observe over time what adults who heavily depend upon chatbots for their most pressing issues will be like 20 years from now.
Source:
1). https://www.nsta.org/blog/think-or-not-think-impact-ai-critical-thinking-skills
The Rise of AI & The Fall of Human Intelligence.
If technology advances enough, people will have too much convenience. This will lead to more mental laziness. People will stop thinking for themselves, because AI will do it for them. If you can’t think critically, you can’t make sound decisions. The people assigned to make those decisions for you will have no intention of helping you. You’ll get dumber over time. Life will turn into the scene in I, Robot with Will Smith, when every robot’s chest-light turns red. They end up locking people in their homes for safety reasons and using force when the people try to escape. It might not play out like that in real life, but from a neurological perspective, it’s already happening.
Allocation Participation.
Day 1,060. When do we realize that unnecessary resource allocation has severe implications for the sustainability of civilization? If we put more energy, funding, and focus on things like space exploration, what happens to the field of psychology, where replication rates remain stagnant? More methodological precision suggests better diagnosis and treatment for those dealing with mental health issues. Yet and still, becoming multi-planetary might be a high-net-worth distraction. When your resources are beyond practical utility, self-serving tendencies arise from tone deaf behavior.
Scientific & Epistemological Discourse.
In order to develop any knowledge system or framework, you need enough curiosity, suspension of belief, and realism to make significant progress towards a practical and innovative system. In other words, you give up certain beliefs and ideologies. This creates space for openness to universal truths that never change. Because of this openness and disregard for previous and personal beliefs, you get to refine your ideas in a way that works well with historical and empirical evidence. This allows for the potential to add value without confusing truth, innovation, and arrogance with your desire to be right.
The Technological Dance.
You need just enough innovation to push society towards a more optimal future. More important is the process of running those innovative ideas against our current reality. You do this for several reasons. If an idea is too counterproductive relative to what has worked and is currently most effective, you run the risk of making matters worse. A simple example would be some technological innovation that is far more expensive than present technology that is already more efficient, at half the cost, with nearly zero societal barriers in terms of its utility.
The other side of that is the inability to accept what is more efficient by way of habit, and nostalgia or an old guard mentality. This leaves us with a middle ground where the process is more or less iterative based on positive and negative feedback loops amongst users.
The Inevitable Edge Theory.
Ignorance at its core is an inevitability of the human condition. This is not due to lack of effort. This perpetual ignorance is not a symptom of failure via technological advancement. Ignorance at its core is fundamentally a brute force within the entire system of human constructs. Accepting this ultimate ignorance is not a reason to cease understanding or innovation. It’s the edge of all knowledge that recognizes our inescapable boundaries.
Artificial Boundaries.
Artificial Intelligence might not get as far as we’re willing to let it get. Since humans are at the helm of its development, ethical standards remain intact. Otherwise, what is beyond intelligence? Perhaps the safety and security of the human race. Being that AI is a technological tool, the protection and privacy of our most precious data is what’s at stake.
A worst-case scenario is self-sustaining, and evolving ai models, that collect our data, and use it against our ignorance as a species. However this always sounds like a fictional scene in a movie, until it’s at the edge of our reality. Reiterating that these tools are just that: tools for technological convenience, that advance civilization.
The Artificial Glass House.
While technology will continue to advance over time—it possesses the possibility of a mean regression. This may be the case because of human interaction and interference with this technological progress. A simple example is artificial intelligence that is programmed by humans, with humans in mind. The idea of machine learning independently advancing outside of human control is far too risky for its developers to allow outright. There will always be ethical parameters (technological and political) around the edges of what humans are willing to accept from these self-serving technological advances.
Subjective Reality.
Is truth a subjective set of axioms—culturally agreed upon due to large scale outcomes that align with psychological comfort, and individualistic realities? In other words, are outliers epistemically proximal to a particular set of truer subjective conditions which become more distal than the majority?
The Purchaser's Privacy Paradox.
Is it really privacy if companies prepare intrusive processes—in order to collect personal data that predicts precise consumer purchases? Is it ethically viable when they collect this data by way of invasive discretionary advertising? Who exactly is the information being protected from? One side (hackers) will deliberately steal your information in order to gain access to your personal data without your consent. The other side (data protection companies) will ask for this same information in order to stop the “perpetrator” from their objective.
Meanwhile you’re stuck giving out far too much data in order to create an illusion of protection against the perpetrators in question. It becomes a barricade of predatory surveillance from all angles. These data privacy companies essentially posit that they will protect your information by exploiting you to open your mind and your wallet (consumption is inevitable). In other words it’s the hackers you need protection from. Not the companies that eventually wind up with the same personal information based on things you personally buy.
I understand that businesses need money in exchange for selling us products, but at what point is it necessary to collect my address, social security number, email, social media handles, and recent or similar purchases across categories of goods and services? It becomes an unraveling process of pervasive justification. It gets distorted into an optical myopia of mass manipulation. Aside from receiving cash for goods how much information is too much information in order to fund an enterprise—while satisfying one’s hierarchy of consumer needs? (Maslow, 1943).
Mental Malware.
Self-discipline is merely adherence to doing what you tell yourself to do. Those with a higher tolerance for self-regulatory scrutiny get more crossed off their existential checklist. Those with a lower threshold for self-regulation succumb to an inefficient psychological operating system. In other words, they suffer from a cognitive accountability malfunction.
The Nullification of Novelty.
Adding new wrinkles to your game becomes a fleeting experience over a sustained period of time. What initially began as a new and creative technological advancement, slowly morphs into the usual pattern or process of elimination. Case in point: how many times can one rewrite a song about love without rehashed concepts, melodies or lyrical content that previously captured the hearts of listeners decades prior? It turns out that we have a natural tendency to find comfort in familiarity. Hence why we stick with a subset of genres or artists.
This same pattern occurs across industries and cultures. In fact, the reason many ideas are continually implemented today is a creative's ability to recycle what was once innovation at the highest level. Almost every idea or invention beyond the initial prototype has historically evolved to be rereleased or reintroduced to society. This brings us back to the idea of novelty and its eventual demise. Focusing more on recycling ideas could give creatives a more liberating approach to continued innovation.
Perhaps refinement takes precedence over the dopaminergic approach of creative novelty.
Artificially Optimistic Insinuation.
Artificial Intelligence is not as intelligent as we might assume it is. Example A: it’s programmed by flawed humans. This means these flaws become exacerbated throughout the technological operating system. Example B: The flawed design creates automated responses. This lack of self-corrective responses triggers artificially inflated optimism.
This optimism is designed to increase consumer usage. In other words, if you're unable to spot your own irrationality, artificial intelligence may incessantly stroke your ego, creating perpetual interactive dissonance. Just because you receive positive feedback doesn't necessarily imply legitimate capacity or aptitude on the user's part.
Streamlined Perplexity.
Does convenience culture create more efficiency, or cookie cutter results? Industries like technology are faster, more streamlined and simpler in terms of approaches. Many of the old processes were removed or updated. Does this translate to more automation and generalization? Spending less time and less money seems ethical at face value. To what extent might this same process create severe problems in other industries?
Societal Waves of Micro Selective Attention.
Digital interaction is reminiscent of a movie scene where a frenetic person is emotionally unraveling while fumbling through documents. No matter how organized each filing cabinet is, the character simply dispenses with whatever is not of immediate importance or connection to their short-term goals. Keep in mind that they lose sight of what they were even looking for to begin with. Attention has become a watered-down commodity in recent years. It intensifies almost by the minute.
This of course is by design, which suggests that deep meaning and a sense of analysis or reflection are not necessary in order to reach dopaminergic outcomes.
Hiding doesn’t make it go away.
Here's the thing: when you're compelled to do something, you do it. It's that simple. When you start creating scenarios and reasons why the project must be put on hold, you're not just stalling. You're telling yourself, in advance, how much of a coward you are. Is the world going to implode because you created an original body of work, or a product that needed to come out of you? Highly unlikely.
However, that fear that grips you at night, when you're still dreaming about the day your novel, artwork, or undeniable recipe resonates with the world, is the nudge you keep ignoring. Don't let the fear grip you. Grip the fear, and make it anyway. It's exactly why you feel that way. For creatives, we sometimes try to disguise regret as fear or unforeseen circumstances (life).
The reality is, it might not work. The fact that it might not work is more of a reason to do it. Imagine the joy in your heart and the sparkle in your eyes, telling the story about how you did work that mattered, work that almost didn't happen, but you still did it. If you do it anyway, you'll get a creative release, and a relief that cannot be explained through a blog post. It can only be felt and experienced by creating the thing you know you want to create, for the people that need it most (they already know you, and have been waiting patiently).
Get uncomfortable. (Or else…).
Becoming comfortable means growth stops. Becoming comfortable means you take whatever is given to you. Becoming comfortable means you have no options. Becoming comfortable means you get what’s left over. Becoming comfortable means missing out on the utilization of new tools and technology to advance your existence.
Becoming comfortable means at some point misery will set in. Oftentimes you'll hear people say, "I just want to be comfortable". I'm not sure they know what that actually means.
The never ending curve.
If you’re always learning, there’s less room for error. People tend to get to a point in life, where competency becomes complacency. We think, because we’ve already learned something, we know it thoroughly, or don’t need any added knowledge. This is ignorance on our part. You can always find ways to improve upon your current knowledge base, or skill level.
Going back to the basics feels like a chore. However, what happens if your current level of intellect becomes obsolete because of informational or technical advancements within your field? You'll get left far behind, or become stagnant. This is why our parents always stressed college and/or continuing education. We must always be learning.
Both in and out of the classroom if we want to continue existing in a way that is not completely detrimental to our lives and those we love most.
Robust Robotics.
It’s only a matter of time until the amount of work you do is just not enough. We’re in a space where more value must be created, on top of the idea that machines may be put in the place of those people trying to deliver that value. Adaptability might be the new currency.
Like 2010-2013.
Your life without a computer: what does it look like?
Sitting in my room back in Lancaster, Pennsylvania, reading as many books as I could about self-development and how to get ahead. This is what life was like without me being on a computer day and night. What would life look like now, in 2023, without a computer or laptop? Reading as many finance books as possible, ensuring I become financially literate and financially independent before it's all said and done. Also, life would be writing on paper every day instead of creating these digital blog posts on a daily basis. Life would be, and is, creative with or without the damn computer.
A lot of soul searching and analytical thinking is also life without a computer. I do wonder, however, where our intellect would be without the constant ability to find data at the drop of a hat via Google.
Get it together.
Most of us know exactly what we need to do in order to achieve the goal. However, it's so much easier to say "when I do this, I'll be ready". After the 10th "when I do this", it just becomes easier to latch onto the idea rather than the execution. This allows you to glorify the story of what could have been. Well, I'd rather fail my way toward execution than be the storyteller that never was.
It's too easy for people to quit, even when so many things are working in their favor. We are not in medieval times anymore. Modern technology has allowed us to become so much more powerful. It's on you if you let this opportunity pass you by.
Social Media Etiquette.
I find it interesting how easily people contact you via social media. However, in person, there are unwritten rules or etiquette guidelines that cause us to hesitate, or to find better ways to communicate with people without offending them. There seems to be no real difference, if you ask me. I've been guilty of the random "reach out" messages, trying to make it seem as though I was being proactive and productive. In actuality, I was spamming strangers.
The comment section seems more like seeing someone with nice shoes on in a store and complimenting them while walking past. This is, of course, assuming you're not a know-it-all, harassing people whose views you don't agree with, or outright cyberbullying. Social media has given us all too much access to each other. Sometimes I miss the landline days.
A.I. = Monthly stipend.
If artificial intelligence is as useful and effective as it seems, decades from now, universal basic income should probably be implemented.
What The Fuck Is A FaceBook?
“What the fuck is a FaceBook?”
Typing this via my computer is so weird and sad at the same time. Yes there is beauty in all of the new up and coming (as well as current) technology.
You can communicate a little faster, work more (essentially), and you are in control of a good 95-100% of the content you create and distribute via Facebook and the internet. Reach people thousands of miles away at the click of a fucking button.
Talk to, or get a response from, your heroes, or people most of us will never ever meet in person.
What does this all mean? I’m not sure, but I’d like to guess.
It means the end of old communicative transactions. It means no more landlines. It means no more dial tones. It means "I'll hit you up on the book" instead of saying we'll meet up and have a real-life, non-digital conversation like grown ass people, or imaginative ass kids. It means the end of an era. It means we have definitely lost our fucking minds if we think this is a better way to do business, live life, and communicate our ideas across the world.
This is why celebrity is still valid and always will be. You can only video stream concerts, conferences, and live sporting events so many times until you say to yourself, "What the hell am I sitting on my couch, bed, or floor watching this for, when I can get up and leave my house to see this in real life?"
This is beyond insanity. There is a disgusting level of ignorance being displayed here on a daily basis. The risk? This addiction we all have intensifies as we spread further apart while claiming to "communicate more", and those of us who come closer together communicate less and push more buttons and screens. You catch my drift.
Yes, to a certain degree we need technology to advance our species and improve industries like manufacturing, medicine, and business.
It's fine to create more efficiency. What's not fine is delegating nearly every task a human can do to a tiny robotic piece of equipment.
We lost a large chunk of humanity when we gained what seemed to be “one of the greatest inventions of all time” in the “smart phone”.
Where do we, as a country, as a community, and as a world, go from here?
Unfortunately more devices, less physical encounters, and more social opinion and artificial validity via these miniature robotic devices (smart phones, tablets, etc.).
With that being said, I do sometimes (unfortunately not all of the time) miss the old days when I had to knock on my friend's door to get his or her attention. Now they're just a tweet, Facebook message, IG tag, Snapchat, or internet transaction away.
There is no more mystery.
IF it was…
If today was your last day on earth would you be happy with what you’ve done so far?
If not, STOP doing things you don't enjoy. In case you don't have much of a choice (which you always do), squeeze a little of what you enjoy most into every single day, no matter what.
It makes no sense to be miserable all of the time. There are too many opportunities. Too many things to accomplish, and no limits to what the human mind mixed with technology can do.
It’s YOUR LIFE! Live it how you want.
Why snapchat!!!?????
I mean, man, to give up a 3 BILLION DOLLAR ACQUISITION is Snapchat basically saying, "We don't need you, because we're that confident our company will be around 5-plus years from now and even more profitable than when we started". Let's look at this thing: this is the tech world.
Everything in tech fades. EVERYTHING. With the exception of "the internet". As far as apps, specific websites, or devices, everything fades or becomes extremely basic. Even Apple has to keep redoing the iPhone (the best-selling smartphone). If so many people already purchased one, why does the company keep adding new models? Because shit dies out. Tech is only lucrative in spurts. It's never a forever thing. Tech + constant innovation = success + oversaturation x repetition = burnout.
Snapchat either looks at their company like a newborn infant, adoring everything they started, or they are completely blinded by the illusion of killing the game and running their own show. Maybe they are forgetting that Facebook paved the way. And before that, Myspace, and before that, a plethora of other digital platforms that needed the aid of others. Facebook lost its Midas touch for a few months or a year.
All of a sudden, the creators of Snapchat and Instagram basically reboot Facebook just based on code embedding (the ability to share IG posts and Snapchat clips via Facebook) officially. It boggles me why they would not just partner with Facebook, which is not going anywhere any time soon no matter how shitty it gets. Too many people are invested, and sharing links is its lifeline, besides the number of members, in my opinion. Thoughts?
What have we done!?
Finishing my final project for Media and Society and damn. The information age has engulfed most, if not all of us.
It's very sad to see that we have become a copycat society. But now the roles are reversed, as the celebrity is mimicking the everyday person instead of the other way around.
I wonder how much deeper we will plummet into the black hole of technology?
I remember when I had to walk to a friend's house and knock on their door. The excitement was not knowing whether they were home or not, since cell phones were still in the development stages at the time (compared to how advanced they are now).
Now we seem to think digital friendships are something normal.
Even honesty has shifted outside of gadgets and virtual popularity.
Essentially we’re all yelling at the same time and we can’t really hear each other. The rule of thumb seems to be walking towards the first scream you can make out.
I had more fun when technology was not the cornerstone of EVERY single transaction.
Times have changed (a bit).
Remember, there used to be life before the information age?
We had to call people on phones attached to walls with phone jacks.
We had to travel to see who we wanted to see.
No smart phones. No video chat. No problem. We’ll talk on the wall assisted phones. Or more advanced cordless house phones.
We can write each other. Spend the night.
Life is much different than that time.
Looking at where you are now
Be thankful for life today and everyday.
We all live grand. The ability to make unlimited amounts of money, as many friends as we can handle, food, and shelter. We got it all. On top of technology.
Many more of us are in the fortunate category because of a little hard work. We owe it to ourselves to remember how far we have truly come.
As people…
We must continue to stay ahead of this technology curve and remind ourselves how special life was before the bells and whistles began to take over.
It is very frustrating and sad that we have lost ourselves a bit through lit screens and hilarious memes. What does technology do for us that real people can't do in real life?
Why is it better to stare at a screen as opposed to having an intellectual conversation with a great friend?
Something to think about.
You’ve got the power
What happens when you can do everything yourself?
You become the store, store manager, customer, employee, clean up crew, business manager, etc.
Since technology is moving at such a rapid pace, it has given us the power to take matters into our own hands.
With that kind of power, what are you going to do next?
People once cared
In 2013 and beyond I feel like a lot less people care. About doing something productive, about helping others, about consequences, etc.
For so many years I was the guy saying “text me when you get home” to my friends as we ended a night of fun. Nobody ever did it for me. I even found myself doing it a few days ago.
It’s instinctive. It’s real. It’s who I am.
I don’t like to place blame, but technology I feel has contributed to a lot of people not caring as much.
Think about it.
Don't have any family or friends around? You can just get on the internet and create a group of supporters that feel like family and friends. Technically, you don't need people the way you used to. You can buy new connections and new people to care about you. The Sims, Xbox Live, chat rooms, ooVoo, Skype, blogs, YouTube, Vimeo, and the list goes on and on.
Dying for attention? Just get online and share your story with one billion people at a time.
Only thing is, you’ll be shit out of luck if you want a hug.
Human interaction will forever be
We've been inside of this technology age for a while now. We cling to it. We need it. We want it. It makes the world run much faster. International connection is proven, from your bedroom to Shanghai.
Email.
Skype.
Face time.
Live chat.
Video conference.
Instagram.
Facebook.
Twitter.
Smart phones.
Yet we still write letters.
We still meet up for coffee.
Schools still operate.
The movies.
Night clubs.
Reunions.
Cookouts.
Living rooms.
Road trips.
Holidays.
Parks.
Something that we’ll always come back to and never give up is genuine and physical connection. Everything else will change. But a hand shake, hug, high five, kiss, and a room full of laughter will outlive the tech world.
Blocking Greatness
So last night, as I was doing my thing on Facebook, I went to send someone a message and it logged me out. When I logged back in, it said my friend requests were blocked for the next 2 weeks (13 days and 22 hours, to be exact). They think I'm a spammer!
How much sense does it make to block one of your users' friend requests when Facebook is considered a SOCIAL NETWORK, meaning getting in a room (Facebook being that room) and getting to know as many people as you can? It does not make sense at all. Facebook is all about being social. Facebook is (in my eyes) made to connect you with people you either haven't talked to or people you don't know that you want to talk to. How boring would Facebook be if Zuckerberg said at a company meeting, "Okay, our users are only allowed to send requests to their co-workers and their parents"?
Facebook to me is all about making the world a smaller place (hence the internet already dominating that market). It's about connecting real people with other real people and finding a shared interest. Not blocking your user as a company because you think they are spam. But hey, I guess when you think about spam in that way, it all makes sense. My other concern, though, is what happens when you can't tell the difference between spam and a real person? What will Facebook become if spammers start switching up their comments, requests, and status updates?
My Facebook friend request suggestion: make it so that when you request someone you want to know but don't already know from personal experience, their message box pops up with the question, "Are you comfortable accepting 'Leon Benson' as a Facebook friend? Yes or No?". Then, if the person says no, Facebook should block the user from connecting until they are ready. There should be an "okay, I'm ready to add" button in case they wind up meeting at an event and hitting it off.
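The suggestion above boils down to a small state machine: request, consent prompt, block on "no", and an opt-back-in button. Here's a minimal Python sketch of that flow. Every name in it (the class, the return strings) is made up for illustration; this is not Facebook's actual system or API.

```python
# Sketch of the consent-based friend request flow described above.
class FriendRequestGate:
    def __init__(self):
        self.blocked = set()   # (requester, recipient) pairs awaiting "ready to add"
        self.friends = set()   # accepted (requester, recipient) pairs

    def request(self, requester, recipient, consent):
        """consent = recipient's answer to 'Are you comfortable accepting X?'"""
        if (requester, recipient) in self.blocked:
            return "blocked until recipient is ready"
        if consent:
            self.friends.add((requester, recipient))
            return "connected"
        # A "no" blocks further requests until the recipient opts back in.
        self.blocked.add((requester, recipient))
        return "blocked until recipient is ready"

    def ready_to_add(self, requester, recipient):
        """The 'okay, I'm ready to add' button, e.g. after meeting in person."""
        self.blocked.discard((requester, recipient))
        self.friends.add((requester, recipient))
```

The point of the design is that the "no" path doesn't punish the requester as a spammer outright; it just parks the connection until the recipient presses the ready button.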
