I think it's fairly obvious to those reading this site that my background as an academic historian has shaped my current career as a software engineer. I mean, if the title “Stefan's Post-Academic Chronicles” didn't clue you in, you can always read my posts comparing GenAI to translating ancient texts, or exploring the philosophical concepts of the active and contemplative life.
So you may be surprised to learn that I once regretted pursuing my PhD.
When I got my first developer job in 2019, I felt like I had finally found my calling. Not only was I able to solve problems and learn new things, but I could do it without constantly worrying about whether I had any chance of getting paid for my labor. I felt that if I had just majored in Computer Science instead of wasting a decade of my life pursuing useless knowledge for poverty-level wages, I would have been a lot better off. All my grad school convictions about the value of the humanities in producing critical thinkers, I thought, were hollow ideals with no place in our cruel, capitalistic society. If I could have gone back and done it again, I would have gone straight for an education that led directly to gainful employment.
I find it interesting, then, that over the past two years the rise of GenAI has given me a new appreciation for my humanities background.
In a recent New York Times op-ed, professor and columnist Tressie McMillan Cottom highlights how AI's grandiose promises to revolutionize academic work have produced just another example of “mid” tech that can simplify mundane tasks such as managing calendar appointments and writing emails. Yet the “midness” of this tech, Cottom argues, becomes more threatening when people hype AI as a way to bypass expertise:
A.I. is already promising that we won’t need institutions or expertise. It does not just speed up the process of writing a peer review of research; it also removes the requirement that one has read or understood the research it is reviewing. A.I.’s ultimate goal, according to boosters like [billionaire Dallas Mavericks owner Mark] Cuban, is to upskill workers — make them more productive — while delegitimizing degrees. Another way to put that is that A.I. wants workers who make decisions based on expertise without an institution that creates and certifies that expertise. Expertise without experts.
We all know it’s not going to work. But the fantasy compels risk-averse universities and excites financial speculators because it promises the power to control what learning does without paying the cost for how real learning happens. Tech has aimed its mid revolutions at higher education for decades, from TV learning to smartphone nudges. For now, A.I. as we know it is just like all of the ed-tech revolutions that have come across my desk and failed to revolutionize much. Most of them settle for what anyone with a lick of critical thinking could have said they were good for. They make modest augmentations to existing processes. Some of them create more work. Very few of them reduce busy work.
Even worse, Cottom argues, in today's political environment this fantasy is being used to justify replacing workers and piling more work onto those who remain:
A.I. may be a mid technology with limited use cases to justify its financial and environmental costs. But it is a stellar tool for demoralizing workers who can, in the blink of a digital eye, be categorized as waste. Whatever A.I. has the potential to become, in this political environment it is most powerful when it is aimed at demoralizing workers.
Cottom eloquently lays out so much of what I hate about the GenAI craze: a pervasive malaise in which tech leaders tell us we will all magically become 10x more productive with tools that, at their best, provide modest optimizations and, at their worst, perpetuate disinformation, dishonesty, and excuses for widespread layoffs. Even in software development, where GenAI has considerably more application than in academia, the hype is drowning out the benefits. While the “vibe coders” brag about building an entire production app in an afternoon, those of us who want our apps to work when users do anything other than look at them face the same challenges that have always plagued software development: translating business requirements, building for scalability, handling edge cases, investigating bugs, accommodating new features, rinse, repeat.
The thing is, app development’s complexity is twofold. Of course, there’s the technical complexity. It takes a great deal of work not only to write code, but to maintain, improve, fix, and deploy it over time. LLMs are certainly helpful in handling some of this complexity, and they appear to be improving rapidly (so long as there is a skilled programmer behind the wheel). But app development is also complicated because the world in which people create and consume apps is complicated. Coming up with good ideas, architecting software for long-term success, testing situations both expected and unexpected, and continuously improving products iteration after iteration are all activities that require far more than a good prompt. Sure, prompts can help along the way, but we need experts behind the prompts.
This is where I'm particularly grateful for my background in the humanities. The oft-cited “critical thinking” selling point barely scratches the surface of what the humanities have done for me. I've spent the last 15 years of my life learning how much more complex the world is than I had originally assumed. Whether investigating ancient texts or cutting-edge technological systems, I have constantly pushed the boundaries of how I view the world and challenged my previously held assumptions. I have played the long game, working bit by bit on large, open-ended projects whose benefits lie both in their completed products and in the processes taken to achieve them. I have developed a healthy suspicion of products and people that claim to magically solve problems without taking the time to understand what those problems really are. While I still fall into the trap of rushing into quick and easy solutions, I have the framework to help me pull back and re-evaluate when I need to.
It truly is tragic that our government is eviscerating institutions that have helped foster this sort of thinking, like the Department of Education, the IMLS, and the NEH. I’m inclined to see the same anti-labor hammer at work here that Cottom warns GenAI is becoming in this dangerous political environment. Limiting people’s access to education in general, and humanities education in particular, limits their ability to question, investigate, and expand. Approached with the wrong mindset, an LLM can do the same, stunting people who assume, based on what the businesses peddling these tools promise, that AI will magically make them experts without effort or training. Just look at the logic behind the administration’s recent justification for tariffs to see how this can go wrong.
As a programmer, I’m told that this technology will make me 10x better at my job, and to be sure, there are places for it in my daily work. But as a scholar, I can’t help but worry about the harm behind the hype. If we want GenAI to actually benefit our complex society, we need people who are trained to think about complex problems. We need the humanities.