In my opinion, AI just feels like the logical next step for capitalist exploitation and destruction of culture. Generative AI is (in most cases) just a fancy way for corporations to steal art at a scale that hasn’t been possible before. They then use AI to fill the internet with slop and misinformation, and actual artists are getting fired from their jobs because companies replace them with an AI that was trained on their original art. For these reasons and some others, it just feels wrong to me to be using AI in such a manner, when this community should be about inclusion and kindness. Wouldn’t it be much cooler if we commissioned an actual artist for the banner or found a nice existing artwork (where the licence fits, of course)? I would love to hear your thoughts!
Are you not implying that human effort is not valuable?
Not intrinsically. A mud pie that someone spent 10 hours making does not have 10 hours of value. For equivalent use-values, value is regulated by the average time it takes to reproduce them across the whole industry. A chair that someone spent 15 hours making and a mass-produced chair that someone spent 5 hours making, if they are equivalent use-values, would each be worth 10 hours (if that were the standard in the economy).
Labor-power and raw materials are the source of all new value proper. Use-value, on the other hand, is distinct. AI art fundamentally cannot take the place of human art, as human art’s use-value is derived from what the artist is saying and how they choose to say it. The process becomes the use. However, something like a texture in a video game is only useful inasmuch as the end user sees a texture and experiences it as a texture; the manner in which it was produced does not matter, whether it was painted pixel by pixel, distilled from a photo, or generated by AI.
I’m not trying to use the “I read theory” card as some thought-terminating cliché, but I have actually read Capital volume 1, and am about a third of the way through volume 2. What you are describing is closer to the LTV of Smith or Ricardo, not of Marx.
I appreciate you describing the LTV distinctions between the thinkers, thank you, sincerely!
I think the problem I have with AI - and it sounds like you agree at least partially - is that it positions human creative work, and human labour in general, as only a means to an end, rather than also as an end in itself.
(I think this holds true even with something like a video game texture, which I would argue is indeed part of a greater whole of creative expression and should not be so readily discounted.)
This makes AI something more along the lines of what Ursula Franklin called a ‘prescriptive technology’, as opposed to a ‘holistic technology’.
In other words, the way a technology defines how we work implies a kind of political relation: if humans are equivalent to machines, then what is the appropriate way to treat workers?
Is it impossible that there are technologies that are capitalist through and through?
Tools are different in different modes of production. A hammer is capital in the hands of a capitalist whose workers use it to drive nails, but just a tool in the hands of a yeoman who uses it to fix up their homestead. The point I’m driving at is that art and AI images have intrinsically different use-values, and thus AI cannot take the place of art. It can occupy roughly the same space as stock images, but it cannot take the place of what we appreciate art for.
Humans will never be equivalent to machines, but products of labor and products of machinery can be equal. However, what makes art “useful” is not something a machine can replicate: a machine is not a human and cannot represent a human expression of the human experience. A human can use AI as part of their art, but simply prompting and churning something out has as little artistic value as a napkin doodle by an untrained artist.
The products of artisanal labour and factory labour might indeed be equivalent in terms of the end product’s use-value, but they are not equivalent as far as the worker is concerned; the loss of autonomy, the loss of opportunity for thought and problem-solving and learning and growing, these are part of the problem with capitalist social relations.
I’m trying to say that AI has this social relation baked in, because its entire purpose is to have the user cede autonomy to the system.
I’m sorry, but that doesn’t make any sense. AI is not intrinsically capitalist, and it isn’t about ceding autonomy. AI is trained on a bunch of inputs and spits out an output based on nudging. It isn’t intrinsically capital; it’s just a tool that can do some things and can’t do others. I think the way you view capitalism is fundamentally different from the way Marxists view capitalism, and that is the crux of the miscommunication here.
Literally the only thing AI does is cause its users to cede autonomy. Its only function is to act as a (poor) facsimile of human cognitive processing and resultant output (edit: perhaps more accurately, its function is to replace decision-making). This isn’t a hammer; it’s a political artifact, as Ali Alkhatib’s essay ‘Defining AI’ describes.
AI is, quite literally, a tool that approximates an output based on its training and prompting. It isn’t a political artifact or anything metaphysical.
AI is a process, a way of producing; it is not a tool. The assumptions baked into that process are political in nature.