It has become a daily occurrence: I sit down at my computer, start writing a new paragraph or coding a new feature, and before I've written more than a few lines, my fingers press cmd + t and glide over “c h a” + enter. By the next cursor blink, I’ve started a conversation with ChatGPT—one that usually begins like this:
“<some first draft paragraph, some half baked code> I want this __ and I want it todo __ and I am aware that there mightbe these issues ___ and I want sometheing that feels like ___ and ___”
And somehow, even with the fragments, misspellings, and lack of thought in my query, I receive results that seem good.
So good, it reminds me of when I was a kid, and I tried to explain to my teacher in one-syllable words and exasperated rambles how another kid, Bobby, hurt me. Yet even with my sparse vocabulary and missing context, she still understood what happened and effortlessly offered suggestions for how to communicate my feelings.
Is this a good thing? Is it good that LLMs can finish my thoughts before I fully form them myself?
Well, when you’re a kid and the “all-knowing” adult responds to you with something like, “Hey, you could tell Bobby that you feel hurt when he calls you names,” you might carry a kind of stubbornness. Or at least I did.
I’d push back when a solution didn’t feel right. I’d sit with my confusion for days, weeks, months even, rather than adopt the words of the wise as my own.
But as I’ve grown older, my patience for not knowing has declined. I look for every shortcut to drop my confusion as soon as it arises. I once narrowed my confusion by outsourcing my knowledge gathering to Google Search. Now I eliminate it by outsourcing not only the knowledge gathering but also the analysis and application of that information to LLMs. Why crawl through my own memory and reason from first principles when I’m keystrokes away from increasingly intelligent answers?
The issue I’m noticing is that when I am not in the loop, questioning why I am creating and how, I lose my original intention or never form a clear intention at all. As a result, the output I produce is unfocused and lacking meaning.
Here’s an example. In my last monologue, the final sentence I published was:
I started keeping a journal next to my pillow. For the moments my thoughts were so vibrant, I felt the impulse to unheat my eye, just for a second—not to see the world, but to capture what I saw behind my eyes.
This was the exact query I wrote to ChatGPT for help:
I started keeping a journal next to my pillow. For the moments my thoughts were so vibrant, I felt the momentum to unheat my eye, just for a second, not to see the world, but to put into concrete what I saw.
i want something about how writing like puts things into a physical record
And this is what ChatGPT suggested:
“...not to see the world, but to give form to what I saw inside—to pin it down in ink before it disappeared.”
“...not to see the world, but to carve what I saw inside into something I could return to.”
“...not to see the world, but to capture the one unfolding inside me.”
Two thoughts came to mind after I received this response:
I read these options and did not feel that any of them emphasized how I took my nighttime thoughts and put them into the physical world.
I liked the way the word “capture” sounded.
I proceeded to take the word “capture” from the last option, reworded my original sentence, and called it a day. You could argue that the ending worked well, that I didn’t need to emphasize the “physical record.” But I had an intention to express something and did not know how to. I looked to AI for an answer, did not receive one, and settled on a local max between the AI’s creation and my own initial thoughts.
If I had sat there, committed to expressing the words I didn’t have, and pushed myself to find the next missing thought, would I have written something more meaningful? Something that helped me and the reader recognize the urgency of getting our ideas out into the physical world?
Like child me, who didn’t just follow my teacher’s advice word for word to resolve my issue with Bobby, but instead lived through my confusion and experiences until I reached the day when I could navigate any conversation deftly.
I wonder whether, if we allowed ourselves to hold onto the density of confusion we had as children, allowed ourselves the days, weeks, and months to wrestle with why things feel wrong and don’t look the way we want, we might come up with more step-function improvements for people. I worry that in our excitement to hand our thinking over to AI, to get more answers and deliver results, we will lose the patience to understand what makes the human experience difficult and complex, and bias toward managing machines to create for our implicit needs.
If you made it this far, thank you for reading my May inner monologue. If you have questions, opinions, or experiences with any of these topics, I would love nothing more than to discuss! That’s why I write, after all :) You can find me @jjanezhang on X.
A special shoutout to my friend Janvi for inspiring me to be more aware of my thinking time and to my friends Matt and Wai for reading the first drafts of my writing :)
Other topics I thought about but didn’t write about:
Combating the loss of naiveté. When I don’t know anything about something, I find it really easy to make progress because I don’t know what the consequences of my actions will be. As I collect knowledge and experience from past decisions, pain felt, joy felt, and lack noticed, I find myself becoming more careful, wanting to think things through now that I have a past to draw from. But I also push back on this: in the grand scheme of life I’m still naive, and I want to approach things more like the former.
Closeness in relationships ebbs and flows. There are times when you feel very connected to someone. There are times when you drift apart or there’s tension. That is okay and natural.
Read the original motivation for this writing project here:
This reminded me of a New Yorker piece I read recently: “Why even try if you have AI?”