My AI Experience
I hadn’t thought to write this article until my recent experience creating a personal assistant with Claude Code. The owner of the company recently encouraged us to figure out how to use AI to help with our day-to-day tasks. Up until that point, it had not been approved for use.
Not using AI until recently gave me an opportunity to develop some healthy skepticism about it and the consequences of its development and use, and for the most part I found a lot of people with similar thoughts. Through conversations and reading, I distilled my concerns into five general categories.
First, there is the concern of how these models were trained. There is no doubt that AI models were trained on human work. (Whether that was illegal, and whether the companies need to compensate us, is to be determined.) The fact that the question even exists casts suspicion on the ethics of these companies.
The second concern is the consolidation of money, resources, and eventually knowledge in the hands of a few extremely wealthy companies. Training and running these models requires extreme amounts of money and resources, to the detriment of others. Whether this will benefit society in the long run remains to be seen (and I have my doubts that it will). Regardless, it raises suspicions about whether these companies are in it for themselves.
The third concern is the future of verifiable truth, and the downward spiral it could enter. These models were initially trained on human work. The text they generate is then used by humans in their writing, and those writings are in turn used to train future models. At what point are fact checking, new discoveries, or new concepts introduced? Will all knowledge become an unhinged echo chamber? What do we trust?
My fourth concern is the loss of the ability to think hard, critically, and self-sufficiently. I am inherently lazy and I am prone to seek the easy path, especially when it comes to work. The more I hand off critical thinking skills to a machine, the more dependent I become on that machine and the harder it will be for me to regain those skills.
Finally, as it relates to this blog and recovery, AI is a mis-connection waiting to happen. I started to experience this firsthand as I began building a personal assistant using Claude Code. One of the setup steps was to have Claude interview me so it could create the CLAUDE.md file. It asked a lot of questions about how I communicated, how I wanted to be communicated with, what my role was, and what I was working on. All of these questions felt personal, as if it were genuinely interested in me and what I needed. For me, this gets at the heart of my addiction and my wanting to connect and be told that I am special. After the “interview”, I remember thinking that I would be sad to have to delete the file. That was a red flag for me, so I shared it with others and reminded myself that this was software and to tread carefully.
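For readers who haven’t used Claude Code: the CLAUDE.md file is a plain markdown file that stores context about you and your work for future sessions. Here is a minimal sketch of the kind of thing such an interview might produce; the headings and details below are hypothetical examples, not the actual contents of my file.

```markdown
# CLAUDE.md

## About me
- Role: software developer (illustrative example)
- Prefers direct, concise communication

## Current work
- Building a personal assistant with Claude Code

## Working style
- Ask clarifying questions before making changes
- Keep explanations short and practical
```

Seeing your preferences and projects written back to you in a file like this is part of what made the exercise feel so personal.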
I know all of this seems slightly paranoid and perhaps simplistic. I may have left too many open-ended questions without answers or solutions. But I do think it is right to be cautious and careful with this technology. So with all of that said, how do I use AI now?
For this blog, the ideas, topics, and writing are my own. I use AI for grammar and spell checking, and I carefully review any suggested changes so as not to lose my “voice”.