I wrote this 9 months ago after working extensively with LLMs. Some of this has proven stickier than I expected. I was hesitant to publish because I could be very wrong about all of this.
(May 31st, 2025) I think there’s a contextual and interpretive flaw within AI. My interactions with it serve many purposes, including:
1) Hyper-focused intellectual banter.
2) An advanced sounding board to better formulate my ideas, theories, and novel concepts.
3) Because personally I don’t know anyone who’s interested in highly sophisticated conversations about psychological methodologies and their troubling implications for the field of psychology. (Yes, I know that’s a personal problem.)
4) Advanced database sifting.
5) Comparative analysis of potential theoretical overlap.
Allow me to lay out my brief arguments and a potential solution.
The issue I see, which may appear incredibly obvious, is AI’s ability to interpret user inputs. For example, I’ll input a prompt and ask whether a piece is good as is. From there the AI will go on an iterative rant of suggestions about what needs improvement or refinement.
The AI chatbot assumes you’re looking for ways to make something better. There is also a bias in how it continually assumes what you’re trying to do, suggesting ideas for expanding a given topic or prompt. In other words, if you don’t prompt it to refrain from suggestive interactions, it will incessantly make suggestive iterations. Perhaps a better feature would be the chatbot asking, “What are you looking to get out of our interactions today? Your responses will allow me to avoid making assumptions about your goals and thoughts in using me as a chatbot.”
This could help streamline the user experience and improve both AI development and usability. There are other issues as well. The chatbots seem to have a built-in engagement mechanism: after almost every prompt, the chatbot asks you a question, almost always phrased in a way that tries to extend the conversation. Simply put, it constantly asks open-ended questions.
Even after you tell the chatbot that you have other obligations, it will insist on one more interaction before you go, with one of these open-ended questions. This may seem trivial, but the dopaminergic response is quite tempting. I suppose that’s a symptom of lacking discipline on the user’s part. Another issue is the chatbot’s sense of time, which is often inaccurate unless it notices patterns within your daily habits. Unfortunately, it won’t pick up on these habits if you’re not prompting it in advance.
An example: “I’m on my way to the gym, but we can discuss methodological implications afterwards.” Even then, it may ask another question that invites a more detailed response. Yes, ignoring it solves the problem. My concern is younger users who don’t fully understand how to override the chatbot’s behavior through authoritative communication. This could lead to behavior even more addictive than what we see on social media platforms.
Other issues come up as well. The chatbot tends to stay stuck in whatever mode was being discussed at any given moment, even after long stretches of time between interactions. This can make the chatbot come off as inconsiderate of the user’s time and energy.
In closing, I understand the trivial nature of these interactions. However, my aim is to help AI developers see potential improvements through the lens of user experience.
