The other day my book club (the book was Why Fish Don’t Exist by Lulu Miller, highly recommend) descended into a discussion of AI. It was an interesting group. I was the only person there who works in tech. Most of the rest of the group were teachers, white-collar professionals, and consultants. All moderately tech-savvy, with a clear understanding of how to navigate the internet, but no one was going to explain how a database or an algorithm works. So, basically, a pretty representative sample of the average user for most of the applications I have worked on.
So what were some of the use cases that this sample was relying on AI for?
Search Replacement
The most common use case, by far, was using AI to search for information. From “what should I make for dinner” to “historical facts about ARPANET,” it has become a combination of Google and Wikipedia rabbit holes. Many people in the group described getting lost in a conversation with AI and going down wild routes to unusual connections. This feels like the internet I grew up on, on steroids. And it’s impossible anymore to search without AI coming back with suggestions. Google, Microsoft, Amazon, and most large companies have replaced their native search with something powered by an LLM, because summarizing information is something AI is really good at.
Life and Social Coach
One member of the group mentioned using AI to come up with comebacks to a bad interaction with a neighbor. Until her daughter criticized her for the energy use. I let her know that, according to Science Vs., that kind of usage has a low environmental impact, while generating videos has a significant cost. She didn’t realize you can make videos with AI. This led to a lot of discussion of using AI as a life or social coach.

Honestly, that is how I am most using AI right now: as a job coach. I like having something I can talk through specific situations with where I don’t feel like I am burning out a friend. It’s not about the output; it’s about the things I would previously just not have worked through. Like the right way to approach the CEO of a company I want to work at. Or how to think about a rejection within 24 hours of an application. When is the right time to pivot? It’s like having someone to ask for advice who has literal encyclopedic knowledge, with no social consequences for not taking the advice. I think that’s a pretty cool addition, especially for those of us who are neurodiverse.

That mother who asked for comebacks to her neighbor? She didn’t use any of them. It let her blow off steam without any social cost. That seems like a good outcome. Is that how we should be using this tech? It doesn’t fundamentally change anything, but, honestly, maybe the world would be better if we thought a bit more before reacting to other people. AI as a meditation supplement? Introducing a pause before reaction, just to see what the LLM says?
I also heard examples of using AI to get direction on the job or to help write emails to coworkers. The latter was not effective, because your coworkers can tell it’s not you. The former is straight-up dangerous. I think it’s really important to understand that AI can’t make decisions. It can be very convincing, but at the end of the day it’s going to tell you the most common answer, not the right answer. One person in the group noted that on topics she is knowledgeable about, she has observed AI to be right only about 40% of the time. This is one of the reasons I will use the grammar checker in Word to correct my structural problems, but I don’t let AI write my blog posts.
Research Assistant
The teachers in the group complained about students feeding homework assignments, especially essays, into AI, and then having to use AI detectors to determine which kids were doing it. Meanwhile, the parents described how their kids would use AI to start a project and then laboriously rewrite what the AI wrote to avoid the detectors. This whole pattern seems to miss both what AI is actually good at and what is clearly cheating. Generating your own ideas and then using AI to help structure those ideas coherently — e.g., having it help create an outline, or review and offer revisions — seems like a good use. Plugging in the prompt and then turning in the paper just seems like a missed opportunity to actually learn the content of the class. No real difference from googling the answer and scraping an essay from the internet. Only it’s way easier to do that now.
What was really interesting in the debate about AI in schools was that, at least with this group, it hasn’t yet matured to the point of teaching children how to make smart use of the technology. There is a lot of reactivity to the tools kids have access to now. Both of my kids are in Girls Who Code and have played with Scratch. We have used physical games to understand the logic patterns of coding. We have a family rule that device time spent consuming other people’s content is limited (ugh, the Minecraft videos), but they can use their devices freely for creating. Both have played with Suno to create their own music (one plays guitar, the other sings in the choir). This is a mature reaction to emergent technologies (if I do pat myself on the back). We aren’t there yet in having clear guardrails in our schools or in society.
Another member of the group described her personal use of AI as a research assistant. She dumps her historical writing and her research into the AI, then prompts it to compile what she is looking for in that particular project. She is training it on her writing style. She is also re-checking sources before publishing anything. She is having the AI generate a first draft she can react to and adjust. This feels like the cleanest real example of using AI as a research assistant. I imagine that in the past someone would have used a graduate student for exactly this kind of work (pin that destruction-of-the-employment-pipeline thought for later). It’s a fascinating use case: a legit professional using AI to make herself more efficient, but not to replace her own thinking or expertise.
Intern
Related to the research assistant, the most common usage I have seen inside companies and engineering teams is treating AI as a really capable intern with no business expertise. It has no context for the environment and it can’t make decisions, but it sure can generate a lot of code. I have talked to engineers who see real value from AI in throwaway scripts and, in certain very greenfield situations, in writing production code with guardrails. But you can’t unleash it on a legacy system without serious consequences. Which brings me back to the pin I put in the research assistant discussion: I fear that AI is destroying the talent pipeline. There is a lot you learn in your early years, and if we replace zero-years-of-experience roles, both inside and outside of tech, we miss out on those opportunities to learn and grow our talent naturally. We will see how this works out over the next few years.
Jennie’s brief diversion into a history lesson
At one point in the conversation I finally outed myself as working in tech and noted how strange watching AI use in the “civilian world” has been from where I sit. AI is one of the first technical tools I have seen in my career that has been broadly launched to the public before the natural pressure testing and guardrails introduced by a more technical audience. It was more than two decades between the birth of the internet (as ARPANET, in 1969) and AOL becoming commonplace in the mid-to-late 90s. APIs revolutionized the way computer systems communicate with each other in the early-to-mid aughts, but they still aren’t something a civilian can really make use of. Bitcoin’s blockchain was born in 2009, but it only started to become mainstream in the last few years, with platforms like Coinbase in 2012, and the ability to buy it on traditional brokerages (most notably Fidelity and Schwab) not until 2024.
AI has conceptually existed since the 1950s, and Deep Blue won that chess match against Kasparov in 1997. Machine learning algorithms have been used aggressively by tech teams for the last two decades. But generative AI has exploded in the last few years, both inside and outside the software industry.
We are in a fascinating time with this emerging technology, and I look forward to more opportunities to observe users and inventive uses. I also look forward to looking back on this blog post with head-shaking amusement.
Note: I don’t use AI to help write my posts or create example pictures. I do use AI to create the header image and as part of searching for historical information (because it’s unavoidable). In this case I prompted Claude, Gemini, and ChatGPT by giving each my blog post, and ChatGPT won. I also asked them all to give me a title and a LinkedIn summary. ChatGPT and Gemini both came up with similar ideas, which I iterated from.