Do We Communicate with AI?
When I speak to myself, I am expanding my world.
When I speak to others, I am expanding the social world. More specifically, I am using bits of language to renegotiate the form and content of the social relationships between me, my audience, and the larger set of human beings, who are themselves organized into groups whose relations and definitions are established vis-à-vis the facts of the situation. A meeting with a potential business partner, for instance, is an attempt to establish new facts of the social world.
In most cases, my attempt to communicate with another is an attempt to refashion my social world in ways that I think benefit me, even if they do not. Take, as an example, the person who flirts with another in pursuit of sex and thereby contracts an STD.
What, then, happens when I speak to an LLM-based chatbot? A Hegelian synthesis of the counterpoint I’ve created is tempting because it would be aesthetically pleasing, but it would be idiosyncratic and spurious. Instead, let’s try to reconstruct the situation from the facts.
Since the AI is not a social agent, my purpose is not to change the social world; I may use outputs generated by the AI to this end, but that simply explains my purpose in using AI to communicate with the larger world. It does not explain the function of using AI in itself.
I am not merely talking to myself — the AI is responding with probabilistic outputs from a model trained on a large body of text. I am, in a sense, speaking to a model’s interpretation of the next potential bon mot given its textual context, which is itself based on an amalgamation of previously produced texts by other people. I do this to satisfy my own need or desire, which can be curiosity, an external social need (to figure out a problem to pass a test or get a promotion, say), or something else.
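To make the "probabilistic output" point concrete, here is a minimal, self-contained sketch of the next-token step. The candidate words and their scores are invented for illustration; a real model scores tens of thousands of tokens using learned weights, but the shape of the step is the same.

```python
import math
import random

# Invented scores a model might assign to candidate next tokens,
# given the context "The telescope revealed a distant ...".
candidate_scores = {"galaxy": 4.1, "star": 3.7, "planet": 3.2, "smudge": 0.9}

def softmax(scores):
    # Convert raw scores into a probability distribution.
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(candidate_scores)

# Sample the next "bon mot" from the distribution: the same context
# can yield different continuations on different runs.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```

The only point of the sketch is that the reply is drawn from a distribution shaped by prior texts; nothing in the step addresses me as a social agent.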
The AI provides value to me by giving me an angle of attack on a very large body of words. The interactivity gives me the opportunity to drill down, however I personally define "down." The AI is not an agent but a tool that helps me learn more about the world.
In this sense, the function of AI is not much different from talking to oneself. Instead of communicative, it is model-constructive; it helps me expand my world by allowing me to create new semantic topologies in my mind.
This isn’t a replacement for thinking or interpersonal communication; rather, it is an extension of both while being neither.
Now when I think of, say, astrophysics, I can do more than recall texts I've previously read and search for new texts via a search engine or library; I can do all that and also rapidly access individually customized summaries, expositions, and various other transformations of those texts.
AI isn’t replacing thought or communication but adding new dimensions to both.
Speaking to AI, then, is much closer in type and form to reading an extremely good magazine. I choose the magazine as the metaphor because a magazine is a collection and summation of a variety of works made by a variety of people, assembled according to a probabilistic model optimized to answer the question: what does our audience want to read?
AI is not so much a new mode of thinking or of interacting with ideas as it is a new library of deep ideas expressed shallowly. The shallowness of AI responses has frustrated accelerationists who want to see an artificial superintelligence appear and solve all of our problems. But from my perspective, the shallowness of AI is part of its value; if it had the ability to replace all human labor and become a social agent, it would cease to be a tool and, as such, cease to have this value for me as a thinking person.
If I were my employer, that is the outcome I would want. As a user, it is not the outcome I want. Thus we may see a growing and fierce cultural fight not unlike the one over immigration, in which the haves want more because they reap the economic benefits and the have-nots want less for the same reason. In the short term, this fight may be tempered by optimism that superhuman intelligence will appear soon; the longer it fails to appear, the likelier the have/have-not fight is to emerge.
And if AI does not grow into more than a tool, it will still make me more productive, even if less so than the AGI counterfactual would. I can cycle through ideas and thoughts faster thanks to conversations with AI, making me significantly more productive and thoughtful, because I am accessing many more books and texts than I could in the past through the Platonic magazine that AI makes manifest.
For those using AI to dig deeper into a specific domain, the tool may prove very frustrating. For those hoping to monetize AI by integrating it into a mechanical system, the tool will also prove very frustrating, because that is not what the tool does: it is not mechanical, it is probabilistic.
And those frustrations are rooted in the attempt to use AI, a tool to extend one’s thoughts, as if it were a spreadsheet, a tool to replace mechanical processes.
Talking to an AI isn’t really talking. It isn’t socializing and it isn’t an attempt to communicate on my part, but the AI is an agent of the AI creator’s attempt to communicate with me. This means the architecture of the model, the “P” in the GPT, will determine how much of the AI’s output is due to the creator’s input and how much is due to the LLM’s input. One interesting point to consider: recently these chatbots have been redesigned to end with questions to the user more often than before (e.g. “what do you think about this?” after a long response to a query); these final questions are well known to online marketers and were certainly programmed by the AI creator to increase engagement. When these questions prompt me to follow up, I am now communicating with the creator of the model more than I am using it as a tool to improve thinking (although then future interactions may revert to the improve-thinking function).
Since chatbots are models of generative transformations built on weights and limitations designed by their makers, the AI's creators are like the editors of a magazine, setting the boundaries of the content that will fill it. While they are not directly communicating with me, indirectly they are, and the number of guardrails placed on an AI system corresponds to how much the model's makers are talking to me versus how much I am talking to a probabilistic engine producing the bon mot.
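To make the editorial layer concrete, here is a minimal sketch of how a creator's instructions are stacked on top of a user's message before the model ever sees it, using the chat-style role/content message format common to LLM APIs. The function name and the instruction strings are invented for illustration; this is not any vendor's actual configuration.

```python
# Hypothetical sketch: the model maker's editorial layer.
# The instruction texts below are invented placeholders, not any
# real product's system prompt.

CREATOR_INSTRUCTIONS = [
    "Refuse requests in these restricted categories: ...",  # guardrails
    "Adopt this persona and tone: ...",                     # editorial voice
    "End long answers with a question back to the user.",   # engagement
]

def assemble_prompt(user_message: str) -> list[dict]:
    # The "system" turn speaks for the model maker; the "user" turn
    # speaks for me. The model conditions on both at once, so every
    # reply blends the creator's voice with the underlying corpus.
    return [
        {"role": "system", "content": "\n".join(CREATOR_INSTRUCTIONS)},
        {"role": "user", "content": user_message},
    ]

print(assemble_prompt("Explain stellar nucleosynthesis."))
```

The more entries that first list accumulates, the more each answer belongs to the editor rather than to the magazine.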
The more safety limitations, guardrails, and restrictions on AI models, the less I am reading a magazine of humanity’s social world and the more I am playing in the walled garden created by the model maker.
Guardrails on AI products like ChatGPT and Claude are seen as tools to protect humanity from bad AI; in reality, they are tools that move the productive function of the LLM away from the model and towards the model’s creator.
Given enough safety guardrails, an AI ceases to be an AI and becomes the AI company's CEO's personal diary.
ChatGPT’s response when shown this essay and asked “What about this analysis do you disagree with?”:
Your analysis is thoughtful and comprehensive, but there are a few points where I see room for further discussion or alternative perspectives:
Expansion of Worlds
1. AI and the Social World:
• Your View: You suggest that AI doesn’t directly alter the social world but impacts personal cognition.
• Alternative Perspective: AI can indirectly influence the social world by shaping opinions and decisions that lead to social actions. For instance, advice or information from an AI might change how one interacts in social or professional contexts, thus subtly reshaping the social fabric.
AI as a Tool
2. AI as Merely a Tool:
• Your View: AI is framed strictly as a tool, akin to talking to oneself but with added access to vast information.
• Alternative Perspective: Some argue that AI interactions, especially those designed to mimic human conversation, can have more profound effects, such as emotional support or companionship, blurring the line between tool and social agent. This impacts not just the cognitive but also the emotional aspects of human interaction.
AI’s Role in Thought and Communication
3. Depth vs. Utility:
• Your View: AI’s value lies in its ability to provide shallow yet broad access to ideas, which can be frustrating for deeper exploration.
• Alternative Perspective: While AI might provide broad access, there are emerging applications where AI demonstrates the capacity for deeper analysis and nuanced understanding in specific domains. Advances in specialized AI models tailored for particular fields might offer deeper insights than generalized models.
Frustrations and Limitations
4. Probabilistic Nature:
• Your View: AI’s probabilistic outputs limit its utility for precise tasks.
• Alternative Perspective: Although AI’s probabilistic nature can be a limitation, ongoing improvements in AI accuracy and reliability suggest that probabilistic models can be fine-tuned for high-precision tasks, especially with advancements in AI robustness and contextual understanding.
Communication Dynamics
5. Influence of Guardrails:
• Your View: Safety measures and guardrails limit AI’s utility and shift interaction toward the creators’ influence.
• Alternative Perspective: While guardrails do shape interactions, they also ensure ethical use and prevent misuse, which can be seen as enhancing utility by maintaining trust and safety in AI applications. The balance between safety and freedom is critical and dynamic.
Cultural and Economic Implications
6. Economic Divide:
• Your View: The development of AI might mirror cultural fights like immigration, creating economic divides.
• Alternative Perspective: While economic divides are possible, AI also holds the potential to democratize access to knowledge and tools, potentially reducing economic disparities if implemented inclusively. The impact on economic divides will heavily depend on policies and access.
Overall Perspective
7. AI as a Replacement or Enhancement:
• Your View: AI enhances thought and communication without replacing them.
• Alternative Perspective: In some contexts, AI is already beginning to replace certain human functions, particularly in repetitive or data-intensive tasks. The future might see more domains where AI shifts from enhancement to partial replacement, necessitating careful consideration of ethical and societal impacts.
In essence, while your analysis captures many crucial aspects of AI interactions, considering these alternative perspectives can provide a more nuanced understanding of the complexities and evolving nature of AI’s role in human thought and social dynamics.
Claude’s response:
While the piece presents many interesting and thought-provoking ideas, there are a few points I might respectfully challenge or nuance:
- AI as non-communicative: The author asserts that interacting with AI is not truly communication. While it’s true that AI doesn’t have agency in the human sense, the interaction can still be communicative in nature. Users often form para-social relationships with AI, and the back-and-forth exchange can simulate aspects of communication, even if it’s not identical to human-human interaction.
- Shallowness as an unmitigated positive: While the author’s point about shallowness being valuable is interesting, it may oversimplify the issue. Deep, nuanced responses from AI can also be valuable for certain tasks and users. The ideal depth likely depends on the specific use case and user needs.
- AI as purely a knowledge navigation tool: While this is certainly a key function of AI, it may understate AI’s potential for creative tasks, problem-solving, and generating novel ideas. AI can sometimes produce outputs that go beyond simply reorganizing existing knowledge.
- Guardrails as purely limiting: The author’s critique of safety measures is thought-provoking, but it may not fully account for the potential benefits of some guardrails. Ethical constraints and safety measures can also serve to make AI more reliable and trustworthy, potentially enhancing its utility in many contexts.
- Comparison to magazines: While an interesting analogy, comparing AI to a magazine might not fully capture the interactive and adaptive nature of AI conversations. Unlike a static magazine, AI can tailor its responses based on ongoing user input.
- Lack of acknowledgment of AI’s limitations: The piece doesn’t deeply address some of the key limitations of current AI systems, such as their potential for hallucinations or factual errors, which are important considerations when using AI as a tool for expanding knowledge.
These points of disagreement don’t negate the overall value of the analysis, but they suggest areas where the discussion could be further nuanced or expanded. What are your thoughts on these potential critiques?