Deadlines and Slop Work
So, this email is again a bit later than planned, but there's a reason. I've been deep in work, getting various projects sent in or at least ready to be. The book on crime is on the home stretch, I have two other pieces that should be sent off soon, and various projects are starting up. It's all a bit chaotic. Last time I sent off a piece on data, and today a companion piece of sorts, written within the scope of the same project. It seems a very suitable piece right now, as it tries to deal analytically with the category of AI slop and engage with it in a philosophical manner. If I have the time, I intend to develop it into something more extensive. Anyway, here it is, for your delectation. Stay classy now!
For a Materialist Philosophy of Slop
Since the introduction of ChatGPT as the first publicly available platform for laymen to experiment with generating texts (and later images, songs, videos, and programs) through little but a prompt, we have marveled at the manner in which such materials can be similar to or even (in some cases) better than those created solely by humans. This has prompted (sic) discussions regarding the future of creativity and creative work, as well as jeremiads regarding the waning of the creative professions. Much has been made of the fact that even the best outputs of the best models of generative AI (genAI) tend to gravitate towards the anodyne and the artificial, and the "lifelessness" of such outputs has been widely derided.
A key element in this discourse has been the term "slop", which today is used in a broad range of ways to describe genAI outputs. The term, which has historically referred to liquid spillages, unappetizing foodstuffs (with high liquid content), and loose workers' garments alike, gained traction in the debate surrounding genAI by being championed by Simon Willison in a much-quoted blog post in early May 2024, and in a scant few weeks emerged as the accepted vernacular for a particular kind of algorithmically generated material. Though no hard definition has been proffered, the common understanding is that the term refers to AI-generated materials produced en masse, with less attention paid to quality than virality. Even though some slop can be quite accomplished – as I write this, people are calling a new animated series from known auteur Darren Aronofsky "high-end slop" – the term focuses on the capability of genAI to create so-so material at scale, and what this fact brings with it. The term has a familial relation to the term "spam", which referred to the tendency of certain companies to send out mass emails with little care for targeting, as well as the replication of this strategy by scammers and criminals. What is important to note, however, is that where spam (and its telephonic equivalent, the robocall) was both limited to a specific medium and at least somewhat easy to defend against, slop seems both more invasive and more toxic in social settings. Today, we may inadvertently listen to slop as we wake up, watch a slop video while doing our morning ablutions, and read slop on Reddit as we eat our breakfast, only to arrive at the office and realize that much of what our colleagues have produced is what today is called "workslop", i.e. AI-generated materials masquerading as the work of human colleagues.
It would be easy here to connect slop, and particularly workslop, to the manner in which Harry Frankfurt discussed the category of "bullshit". His point in doing so was that bullshit is not just lying, but rather a category of speech acts fully divorced from ideas about truth and falsehood. What a person talking bullshit presents is not a set of lies, but a performance. Bullshit phraseology is designed to make the bullshitter look good, not to correspond in any way to the truth (positively or negatively). In this context, the spammer is markedly distinct from the bullshitter. The spammer, at least initially, may have genuinely attempted to establish a meaningful connection, albeit in a somewhat clumsy and excessively elaborate manner. Even in contemporary instances where much of what is categorized as spam is actually a scam, it is evident that the individuals disseminating these scam messages are aware of their deceitful nature. They consciously employ spam as a mechanism to identify and exploit gullible individuals. The pertinent question then arises: where does this leave slop?
I believe the key here lies in the industrial logic of it all. The social imaginaries of digitalization have tended to see this shift as a post-industrial one, elevated or even transcendent, freed from the material constraints of the industrial. In fact, the industrial is today exceptionally present in the AI industry, with McKinsey projecting expenditures on data centers alone as high as $6.7 trillion by 2030. This does not account for things such as increased usage of energy and water, nor for general network construction, putting the actual, material costs at a level that would have been simply astounding to the industrialists of yore. Yet AI and its productions are often treated as if they were 'immaterial'. The problem is to a degree conceptual. We have a word – material – that admits a number of interpretations: certain things are material in that they are made out of material, yet other materials might primarily exist as digital and thus be material only insofar as they are stored; and as the storage of digital material has gotten cheaper, this material aspect has increasingly been seen as… immaterial. It is all a bit confusing.
Slop, however, is clearly material, in several senses. It is material that is being distributed in the world, sometimes as silly YouTube clips, sometimes as reports sent between the men and women of the organization. It is also material in that it was produced by way of a complex algorithmic processing of an enormous, industrial amount of pre-existing materials in the shape of books, blogs, and bricolage. The reason it appears immaterial, and is sometimes treated as such, is that whereas a pair of shoes or a combine harvester uses up material that, at least for the lifetime of such products, is expended on them (until, one hopes, recycling recaptures this expenditure), the same does not go for slop. The materials that trained our LLMs still exist, and although compute was spent on training and later production, this (as strange as it seems) is rarely counted as a material expenditure. Nor is the space slop occupies on servers and so on, as this can be reused almost ad infinitum. Yet while this may be true, a materialist philosophy of slop would point not just to the two points made above – slop is material in that it is the product of the industrial structure of compute, and it is material in that it exists within the media structures that define contemporary consumption – but to a third, potentially even more important one.
What is often forgotten when slop is addressed, as has also often been the case for spam, is the labor inherent in its production. This omission might seem logical, as the very condition of slop seems to be one of trying to escape work, but it misses a central point about the material conditions for slop, and particularly workslop. The materialist inquiry is always focused on the conditions that make something possible, and in the case of workslop, these seem acutely connected to the contemporary organization of work. In such a perspective, slop appears as a logical waypoint in the industrialization of labor, now extended from menial and factory work to white-collar labor. Whereas the deskilling hypothesis (which holds that technological change reduces the skills needed to perform work, allowing precarization) used to mean the industrialization of manual labor, slop can be seen partly as the deskilling of professional labor such as office work, but by extension also as the industrialization and thus deskilling of humanity.
Such a claim will of course engender a degree of pushback. The industrialization to which I refer does not pertain to the large-scale production of human beings, which might evoke images of baby factories and similar notions. Rather, it concerns the large-scale production of what could be interpreted as human artifacts or human-like communications. This phenomenon is already observable on social media platforms. For instance, LinkedIn has become inundated with AI-generated essays, commentaries, and ephemeral notes that create the illusion of active human engagement but primarily consist of, well, slop. According to reporting from the BBC, one in three adults uses AI for emotional support and social interaction. It is not a stretch to envision a future in which personal agents increasingly act to remember personal matters such as birthdays and other anniversaries for us, including sending messages in our voice and tone – not least as this is already reported as happening among the more tech-forward of us. Such agents thus perform very human gestures, such as remembering to send flowers to one's partner on an anniversary, thereby transforming these kinds of actions into an industrialized process. The LinkedIn scenario has at times been discussed in terms of the Dead Internet Theory, which posits that the once human-defined Internet has become dominated by bots. With personal agents, it is conceivable that this theory of morbidity could extend into real life.
When discussing the identity-work of executives in the AI era, I have at times made reference to cases I have come across where CEOs have utilized ChatGPT and similar tools to deal with issues they were unsure they had the apposite human sensitivity for, such as consoling a colleague who had lost someone close or was dealing with a break-up. This has often been met with a degree of incredulity, as it seems to break a social taboo – an executive should know how to deal with serious human issues without 'cheating' with an algorithm. I have suggested that this reaction, while understandable, ignores the worse alternative, namely that of an executive botching the situation by not taking advice and coming off as more inhuman than when coached by something inhuman, but often to little avail. To many people, the above sounds like, well, careslop. It might seem like care and empathy, but the industrial component somehow makes it less human, and thus less acceptable. Like a comedian stealing jokes, we have started to turn to machines to learn how to act like humans, and through this we are engaging in a process of self-industrialization.
Considered in this manner, the rather unthreatening (if annoying) category of slop starts to appear as something more pernicious. It started out with machines mass-producing memes and silly pictures (such as the explosion of Studio Ghibli-style images generated in the wake of ChatGPT upgrading its image capabilities in March 2025), and is currently flooding media channels such as YouTube and Spotify. Amazon and every social media channel known to man are already materially slop-distribution systems, and the slop is spreading. Organizations are now struggling with workslop, and teaching facilities are increasingly defined by eduslop. Careslop may be in its infancy, but it is likely to scatter and sprawl rather than diminish. Vibe-coding has, as I write this, already enabled platforms (Rork being an early one) from which you can publish apps coded and distributed with a series of prompts and a few pushes of a button – something that could be called Appslop-as-a-Service (AAAS). Some are already predicting a slopocalypse, where bona fide human artefacts and communication exist as spice and variety in a world of mostly auto-generated material.
How might this be combated? I mean, spam isn't the problem it once was, so there must be technological means, right? The issue is that whereas spam is relatively easy to identify, not least as it spreads almost exclusively in one specific channel, slop is a more general and more distributed concern. A materialist philosophy might help us identify the issue, but this does not guarantee that we can formulate a workable response. This is not least because materialist philosophy can already tell us what the problem is, namely the increasingly frictionless extractivism of contemporary capitalism. Ingrained in our society at the most fundamental of levels, this tendency towards relentless optimization of the extraction of surplus value, without care for the alienation caused, cannot be combated with an email filter. Instead, slop may be what our society has been geared towards for the last centuries – an industrialization of everything, where notions such as art, nature, and humanity are merely resources to be used in the interminable striving for efficiencies. Viewed in this manner, slop is not a mere byproduct of a specific algorithmic technology; it is what society has been set up for. Slop is what we made, slop is our inheritance, slop is our legacy.