Reflections on Technological Rot

by Stefan Hodges-Kluck on January 23, 2025

Now that my family and I have officially moved to Charlottesville, I’m trying to adjust to life in a new town after 15 years in east Tennessee. I’m finding lots of great things–not least of all, some amazing terrain to bike around right outside my door–but it’s still all very new and strange to me, and I’d be lying if I said I didn’t miss my good old house and friends (and Ian’s friends) in Knoxville. But I’m looking forward to getting more acclimated to people and life in Charlottesville and central VA.

Amid all the moving stress, I’ve also been thinking a lot in the back of my mind about how technology (and the world in which it is used) has been changing recently. As I notice my own user experience in social media apps degrading, I become increasingly aware of concepts like enshittification and the rot economy, phrases coined to describe the way big tech companies have grown their bottom lines by catering their products to paid promoters, at the cost of the average user’s experience. Ed Zitron has a recent piece that I think nicely sums up this trend:

Our digital lives are actively abusive and hostile, riddled with subtle and overt cons. Our apps are ever-changing, adapting not to our needs or conditions, but to the demands of investors and internal stakeholders that have reduced who we are and what we do to an ever-growing selection of manipulatable metrics.

As giants like Google and Meta boast of record profits year after year, their products have become increasingly difficult to use. Promoted ads, and now AI summaries, have turned Google search from the best way to find information on the internet into a constant marketing ploy, where users must scroll through ads that are barely distinguishable from the search results they wanted in the first place. Facebook has turned from a place to keep in touch with friends and family into a feed of ads and memes (also not always distinguishable) that carries increasingly little obligation to provide factual or beneficial information–take, for example, Meta’s recent decision to end its fact-checking program. And these companies do all of this while laying off tens of thousands of employees in the name of “growth”.

I have to confess that these trends have caused a bit of an existential conflict for me as a software engineer. I love what tech has done for me–it’s given me a new career after academia, and a great way to satisfy my appetite for research, learning, and problem-solving as I explore the ever-changing, ever-deepening world of computers. I am still excited about programming, and I’m eager to keep growing my career by exploring new projects and delving into different tools. At the same time, I have grown increasingly frustrated with products that purport to make our lives easier while raising costs and dragging down experiences in the name of user engagement and shareholder growth. While I don’t work on any outwardly exploitative products like misinformation bots, scam companies, or gambling apps, it’s sometimes hard to be fully excited about new changes in technology when it feels more and more like those changes degrade users’ experiences with the sole intent of keeping them engaged with content as long as possible, the way grocery stores rearrange products to confuse people and keep them shopping longer.

I have held a relatively positive view of technology for most of my life. I loved computer games as a kid, and while I didn’t enter the programming world until much later in life, I always had a sort of amateur curiosity about it. Around 2008, a number of my computer-nerd friends started a game dev company. I thought it was cool that their passion led them to build their own business, and I was maybe a little jealous of their success, stuck as I was in retail, struggling to make rent and pay student loans in the middle of a recession. When I switched careers in 2018, tech offered me a shining world of promise. I loved the dopamine hit of getting code to work–building apps, solving problems, learning how things work, figuring things out. After losing some of my identity when I left academe, software gave me an outlet where I could satisfy my creativity and curiosity, and actually get paid to do it.

I never thought I had any delusions about the motives of tech companies, either before or after pursuing software as a career. I always knew that the giants of FAANG (Facebook, Amazon, Apple, Netflix, and Google) weren’t making products out of the goodness of their hearts. But I used to hold an underlying assumption–one that now, in retrospect, seems quite naive–that the relationship these mega-companies had with the rest of us was at least somewhat symbiotic. I thought–I hoped–that FAANG’s growth meant, on the whole, the growth of humanity. After all, the rise of smartphones ushered in an era where we could all carry computers in our pockets, with a seemingly endless ability to access a wealth of information and direct our own lives. Now, as I watch leaders transform their products from ones that provide the content and information I want into ones that dictate what content and information I should want, the relationship feels much less symbiotic.

The rise of AI–and the bandwagon around it–has certainly played its part in my disillusionment with the tech world. To be sure, AI products have benefited me as an engineer. It appears that the software engineers who grow the most over the next few years will be the ones who can leverage AI tools to improve their workflows without compromising the quality of what they produce. I am a fan of Copilot’s autocomplete suggestions when writing code, and of Warp’s AI-assisted terminal commands. But I fear that, on the whole, AI may simply be reflecting (or even accelerating) the ways in which our user experiences are deteriorating. I love the idea of democratizing software so that anyone can program, but I don’t see AI doing this. AI is great at confidently bullshitting its way through conversations like a student who hasn’t done the reading but thinks he can drive class discussion because he glanced at the book’s Wikipedia page five minutes before class. Its boosters will tell you that these problems will improve as the models are better trained, but the more content the models generate, the more they will be trained on their own flawed output, all while consuming massive amounts of power with little regard for the environment. Meanwhile, it seems like only a matter of time before we see chat agents corrupted by promoted content the way Google search has been.

What’s ironic is that for all that AI’s promoters rave about how LLMs “act human”, I am watching humanity drain out of the tech world. For me, the excitement of technology has always been about the process as much as the product. Building apps is a collaborative, communal process: you research, you talk with other people, and you iterate on a product again and again to make it better. While AI can certainly be part of this process, I fear that as long as this tech is tied to the greed of those at the top, it is going to be about cutting jobs, replacing human labor, and burying genuine human interaction in a flood of promoted crap shoved down social media feeds.

Of course, even as I type this, I know that I am myself subject to all of these problems. I still have to fight not to scroll blindly through my social feeds, hoping to find fun updates from my friends but actually getting sucked into the memes and comics that show up more often the longer I stop to look at them. I have to fight not to take Google’s AI summaries or ChatGPT’s responses at face value when I want an easy answer to a question, instead of asking where the information comes from. And as much as I hate how streaming has become the new cable, I still enjoy shutting off my brain to mindless crap on Netflix at the end of a long day. As both a producer and a consumer of software, I often feel like both a victim and a perpetrator of the rot that is plaguing our technology.

That’s really how I feel about living in 2025 America, in an age of increasingly unregulated greed and growing authoritarianism: like both a victim and a perpetrator. I hate so much of the world we are living in now, but I can’t pretend that I don’t play my own little part in the rot. I’m still finding ways to kindle the love of learning and the excitement of building things that software has offered me. But I don’t want to do this if it means turning a blind eye to the ways in which technology has been either passively complicit in–or, as is more likely, actively engaged in–the rise of inequality and misinformation that has already taken such a toll on us all. I don’t want to be another cog in a big tech machine that gradually chips away at regular users’ experience in the name of growing margins and impressing shareholders. I want to feel like I’m part of the solution, not part of the problem. I want the code that I write, and the way that I write it, to work in service of making end users’ (and my own) lives easier. I’d like to make sure that technology doesn’t lose its human touch. I hope that what I write in 2025–code as well as blog posts–will play some tiny part in that endeavor.