ChatGPT cannot imagine freedom or alternatives; it can only present you with plagiarized mash-ups of the data it’s been trained on. So, if generative AI tools begin to form the foundation of creative works, and of even more of the other writing and visualizing we do, they will further narrow the possibilities on offer to us. Just as previous waves of digital tech were used to deskill workers and defang smaller competitors, the adoption of even more AI tools has the side effect of disempowering workers and giving management even more control over our cultural stories.
As Le Guin continued her speech, she touched on this very point. “The profit motive is often in conflict with the aims of art,” she explained. “We live in capitalism, its power seems inescapable — but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art. Very often in our art, the art of words.” That’s exactly why billionaires in the tech industry and beyond are so interested in further curtailing how our words can be used to help fuel that resistance, which would inevitably place them in the line of fire.
[…]
The stories and artworks that resonate with us are inspired by the life experiences of the artists who made them. A computer can never capture a similar essence. Le Guin asserted that to face the challenging times ahead, we’ll need “writers who can remember freedom — poets, visionaries — realists of a larger reality.” Generative AI seems to be part of a wider plan by the most powerful people in the world to head that off, and to trap us in a world hurtling toward oblivion so they can hold onto their influence for a little longer.
As Le Guin said, creating art and producing commodities are two distinct acts. For companies, generative AI is a great way to produce even more cheap commodities to keep the cycle of capitalism going. It’s great for them, but horrible for us. It’s our responsibility to challenge the technology and the business model behind it, and to ensure we can still imagine a better tomorrow.
If you want to create really good art that is as close as it can be to the truth of your heart, you cannot be too caught up in having to earn enough for food and shelter. That is what gets more difficult for more people in a hyper-capitalist society.
It’s not so much that tech is used to create art that bothers me. I don’t mind CGI, AI-generated stuff, drum machines… all of them can be used to create great art.
But very little great art is created these days. Every movie seems like the Xth derivative of some superhero shit, almost like a super-in-your-face hero narrative pushed to the extreme. There is a distinct lack of new narratives in the mainstream; it all seems so dystopian, tired, and worn out.
Or is the better content just hard to find among all the junk that’s out there? I don’t think the tech is directly responsible; rather, it’s the artist being forced to think of himself as the creator of a product that has to satisfy n people, or alternatively as the creator of a product of his own liking that might never be completed because he has to work to finance his passion.
What the use of complex tech can do to some people is discourage them from creating earnest art with simpler means, because they believe it can’t compete with the ‘real’ or the ‘big’ stuff. These days I kind of force myself to draw with pencils and play acoustic instruments just to get back to something really simple and screen-less, and it has been a delight, but I’ve also gotten stuck in some ‘got-to-buy-more-to-get-better’ kind of loop in art and hobbies. Some feel the pressure to ‘generate enough content’ so as not to be drowned out by faster creators, but that turns art into a race and is rather silly.
A passionate and well-cared-for artist can create great art using a stick and/or AI anyway, and the art just takes the time it needs to be created. We could do with a little less capitalism, to take the pressure off the artist.
I am tired of correcting the same misconceptions and I love Le Guin too much to give her crap about it. I’ll just make general remarks.
The reason we (AI researchers and engineers) are all excited about LLMs is not that they can mash up and merge existing pieces of work. It is because they can recognize very high-level abstraction patterns, understand them, merge them with others, and create coherent outputs based on these new patterns.
Even if fed only sexist and capitalist literature, it is capable of imagining things that go beyond that. It can extrapolate the notion of freedom expressed by a rich male narrator into freedom for poor women. I would argue that this is very similar to the way human authors explore new horizons: adapt known patterns to new objects. New patterns are learned from our dataset. Right now for LLMs it is only text, but 2024 is looking like it will be the year of the AI-enhanced humanoid robot. Get ready for a ton of subjective experience datasets.
Yes, we can use it to produce unimaginative works mimicking those produced by the entertainment “industry”. Fun fact: these models will soon run on your local machines (for images they already do), and those industries will be destroyed by the post-scarcity that generative models actually cause.
The great surprise of the AI revolution is that it is barely capitalistic, despite what the dystopians say. Companies communicate a lot about their private models, but in a scenario no sci-fi author would have dreamed of, it turns out that research in the field is pretty open and that open-source models are on the heels of the private ones.
Software is probably the domain that is most advanced in terms of post-capitalism: open source provides non-capitalist post-scarcity, and AI spawned from this domain.
People should not be blind to this fundamental aspect of the field.
Even if fed only sexist and capitalist literature, it is capable of imagining things that go beyond that. It can extrapolate the notion of freedom expressed by a rich male narrator into freedom for poor women.
I don’t think this is true at all…
What does “imagining things that go beyond that” mean? Simply swapping out hierarchical control from men to women isn’t creating or “imagining” a completely different story model.
If you fed it nothing but sexist and capitalist lit, then in all likelihood it would just give you more sexist lit, or alternatively lit that is specifically anti-sexist/capitalist.
Give it an example of an oppression and of people freeing themselves from that oppression. It can then apply that pattern to different oppressions, even without having seen it applied there before.
Of course you need to prompt for it, e.g. by saying “Spot oppression in places we don’t label it as such and imagine new narratives of liberation from these oppressions”. LLMs are not given agency, not really out of technical difficulty but mostly to not freak people out too much. But the fact that this capability exists in such a simple model is just mind blowing.
If you fed it nothing but sexist and capitalist lit, then in all likelihood it would just give you more sexist lit, or alternatively lit that is specifically anti-sexist/capitalist.
Well that’s the surprising thing. You can prompt it for things it has never encountered. You can make it generate left-handed supremacist leaflets or let it produce arguments in favor of stuffing tofu in your ears. You can make it generate Shakespearian gay romance or theological arguments for the sainthood of Obiwan Kenobi.
Give it an example of an oppression and of people freeing themselves from that oppression.
Right, but this is fascist literature you are theoretically inputting. I don’t think there are going to be a lot of good examples celebrating people escaping oppression.
Of course you need to prompt for it, e.g. by saying "Spot oppression in places we don’t label it as such
If you are inputting solely fascist propaganda… the machine’s definition of what oppression means is going to be inherently different from our understanding. Ideological definitions like freedom and oppression require historical and cultural context that the machine has no access to. And if they are receiving any context from your inputted information, it’s going to be influenced by the compilation of writers.
Well that’s the surprising thing. You can prompt it for things it has never encountered.
What do you mean by “encountered”? It can’t just imagine the correct definition of a word it has no context about.
You can make it generate Shakespearian gay romance or theological arguments for the sainthood of Obiwan Kenobi.
Because you have fed it countless reference points and context about those subjects. If you fed it nothing but literature written by neo-Nazis, it’s not going to have a clue who Shakespeare is.
I would argue that “fascist literature” is a contradiction in terms. I never mentioned fascism, and I think it is a trivialisation of the term to equate sexist capitalism with it. I was thinking about things like Heinlein-style sci-fi: pretty male-centric, pretty pro-capitalist, but one of the stories revolves around a former slave helping break a slave ring.
Ideological definitions like freedom and oppression require historical and cultural context that the machine has no access to. And if they are receiving any context from your inputted information, it’s going to be influenced by the compilation of writers.
Well, yes, like a human author, I would argue. A human author who lived all his life in an authoritarian state would have a very limited and naive understanding of what freedom or the fight for freedom could be.
What do you mean by “encountered”? It can’t just imagine the correct definition of a word it has no context about.
I mean “that is present in its training dataset”. I was talking about non-encountered combinations. Indeed, it can’t know the definition of a new word, but if you provide a definition, it will be able to talk about it and imagine things about it. If it never encountered unicorns in its dataset, describe them as horses with a single horn on the forehead and magical powers, and it will have no problem writing things about them.
Because you have fed it countless reference points and context about those subjects. If you fed it nothing but literature written by neo-Nazis, it’s not going to have a clue who Shakespeare is.
Obviously. Neither could a human. But I am pretty sure there is no theological argument for Obiwan Kenobi’s sainthood in its dataset. It knows about sainthood, it knows about Star Wars, and the interesting thing is that it knows how to combine them.
I would argue that “fascist literature” is a contradiction in terms. I never mentioned fascism, and I think it is a trivialisation of the term to equate sexist capitalism with it.
I don’t see how fascist literature is a contradiction in terms… fascists have famously written quite a few books.
Nor do I really think it trivializes fascism to conflate sexist, capitalist books with fascist literature. The vast majority of media that fascist regimes utilized as propaganda were just American movies and literature that had undertones of sexism, capitalism, and, like almost all fictional literature, a protagonist who had the ability to solve all of the book’s problems.
Heinlein-style sci-fi: pretty male-centric, pretty pro-capitalist, but one of the stories revolves around a former slave helping break a slave ring.
Heinlein isn’t particularly pro-capitalist or male-centric, especially for his time… he was actually kinda famous for writing strong female characters who bucked the social and sexual norms of the times. The only “capitalist” book he really wrote was The Moon Is a Harsh Mistress, and that had more to do with governments than markets.
Well, yes, like a human author, I would argue. A human author who lived all his life in an authoritarian state would have a very limited and naive understanding of what freedom or the fight for freedom could be.
Right, but you didn’t claim that a human who had never been exposed to freedom could write a book that accurately portrays freedom…
If it never encountered unicorns in its dataset, describe them as horses with a single horn on the forehead and magical powers, and it will have no problem writing things about them.
Lol, okay. That’s quite a bit different than what your original claim may lead people to believe.
Obviously. Neither could a human. But I am pretty sure there is no theological argument for Obiwan Kenobi’s sainthood in its dataset. It knows about sainthood, it knows about Star Wars, and the interesting thing is that it knows how to combine them.
Again, you are using language that is not really an accurate depiction of what’s happening. It’s not making a theological argument; it doesn’t “know” that it is deifying a fictional character.
There are reference points to Obiwan: people like Obiwan, people think he’s great, people say he looks like a religious character other people like, people like saints. It’s not analyzing the characters and making new connections no one has ever thought about; it’s just reflecting data and popular connections others have already inferred.
I think people tend to drape machine learning in the ornamentation of human consciousness, but that’s just buying into your own marketing. I think it’s great for pattern recognition, but to think it’s going to create meaningful art that isn’t just plagiarism is naive. Just as naive as thinking it can be a tool of leftist ideology.
It’s what the capitalists have wanted since slavery became illegal: a worker that they don’t have to pay and that can ape human-like connectivity. How are the workers going to seize the means of production if the workers can literally be programmed?
It’s what the capitalists have wanted since slavery became illegal: a worker that they don’t have to pay and that can ape human-like connectivity. How are the workers going to seize the means of production if the workers can literally be programmed?
What would you need to “seize” if the models are open source and purely software? You need some machines, but GPUs are cheap compared to industrial equipment. That precise battle has been won without ever being fought.
What would you need to “seize” if the models are open source and purely software?
That’s my entire point… they are attempting to replace writers, who are the producers of the wealth in the current industry. By going on strike, writers can collectively demand more control over how the profit is distributed.
That precise battle has been won without ever being fought.
How have you won anything? You just theoretically erased thousands of jobs. You aren’t replacing the logistical system required to profit from the writing; you aren’t doing anything for the working class but stealing food from their mouths.
How does replacing workers with machines equate to a leftist win? We’ve automated tons of industries, how has that worked out for the worker or unions?
Nice and informed comment. Completely agree, especially with the part about software being the domain most advanced in terms of post-capitalist post-scarcity.
Thanks!
Yes, I am surprised it is not discussed more in anti-capitalist circles. Software, with its zero marginal cost, is a laboratory of post-scarcity. The fediverse is kind of the next frontier of it.
It’s been possible to run useful LLMs on your own local machine for quite a while already. They’re not up to the general level of competence of the big commercial LLMs like ChatGPT, but they have certain niches where they excel - hobbyist LLMs usually don’t have the sorts of “no mean things! no sexy things!” fetters that encumber the commercial LLMs, for example. And they’re trainable, so you can customize their knowledge and style more than you can with the commercial ones.
I’m looking forward to them becoming more user-friendly, though. Takes a lot of technical know-how to get these things working currently.
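For anyone curious what the “technical know-how” part looks like in practice, here’s a rough sketch using the Hugging Face transformers library (just one of several ways to do it; the model name below is only an example of a small open model, and you’d need transformers and accelerate installed first):

```python
# Rough sketch of local text generation with the Hugging Face "transformers" library.
# The model id is just an example of a small open model that fits on consumer
# hardware; swap in whichever one you prefer.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example model id, not a recommendation
    device_map="auto",  # uses a GPU if one is available, otherwise falls back to CPU
)

prompt = "Describe a unicorn to someone who has only ever seen horses."
output = generator(prompt, max_new_tokens=120, do_sample=True)
print(output[0]["generated_text"])
```

The generation call itself is already pretty simple; most of the fiddling is in setting up the environment, downloading the weights, and getting a GPU recognized.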
Working on that. I have found this interface to be pretty straightforward.
There’s also KoboldAI.
If I wanted any one person to design our future, it would have to be Le Guin. She was such a great person, and her stories are those rare ones that expand both your mind and your heart.
She seems to have some pretty big misunderstandings of how generative AI works so I think I’d disqualify her from designing our future on that basis alone.
She’s been dead for more than five years, and this text is from 2014.
That certainly doesn’t help her case.
It’s a pretty big assumption that humans are capable of genuinely new thought. Seems to me people also just modify and regurgitate shit they heard other people say.
Once again, the “plagiarism machine” misunderstanding of how LLMs operate. It’s simply not true. They don’t “mash up” their training material any more than a human “mashes up” their training material. They learn patterns from their training material. Is a human author who writes yet another round of the Hero’s Journey creating a “plagiarized mash-up” of past stories? When a poet writes yet another sonnet, are they just aping past poems they’ve seen?
And even if it were so, complaints like these are self-contradictory. If LLM output really is just boring old retreads of stuff that came before, why are they a “threat” to skilled human authors who can produce new material? If those human authors really are inherently better than the LLMs, what’s the big deal? It’s not like it’s a new thing for there to be content farms and run-of-the-mill stories churned out en masse. Creative authors have always had to compete with that kind of thing. Nobody’s “curtailing” them, they’re just competing with them. Go ahead and compete right back.
I generally agree with your stance regarding AI (in the end it is another tool for human artists to use), but the problem with competition as you describe it is that AI competes on price for the entry-level jobs artists might find. Thus, in turn, there is little opportunity left for human artists to learn on the job and reach levels surpassing AI-generated art.
While of course a lot of art is not done as a commercial endeavour, the prospect of turning it into some sort of income (or fame) is usually a motivating factor for artists starting out. With this motivation gone, many will turn to other professions, which in the end is likely a loss for society overall. A good example of that is comic book artists in France vs. Germany. In France there is a rich scene of comic book artists with regular publications, mainly because there were some commercial publishers early on and people could aspire to a “career” as a comic book artist (with varying success…). None of that exists in Germany as far as I know, and the reason is that young people don’t think it is worth learning how to draw comics, and this then becomes a chicken-and-egg problem.
I don’t believe this is going to happen soon. I was quite worried for a while that my translation job could be killed by ChatGPT, but nothing changed that hadn’t changed already a few years ago with so-called ‘machine translation’. What did that mean for the writer/translator? I had to negotiate a price for a new service with agencies, for ‘machine translation post-editing’ - so I just made it as expensive as translation, fuck you. I’m happy to use machine translation for my work, but choose my own engine and check the work before sending it out there, because there always will be funny mistakes. AI is good enough to save me lots of typing work, but in no way good enough to be left alone to produce any text ready for publishing.
In the case of the writers, who are now AI prompt inventors and AI text pre-publishing editors (here, have a fancy brand-new acronym: ATPE), that would mean: negotiate your price for any job (make sure that feeding a prompt into an AI costs the same as writing an article) so you and your family can live comfortably. Don’t let them eat you alive.
Well, yes… but that is the current insider view.
I was referring to the impression young people have when they decide what they want to invest time in learning. These views are often highly distorted from reality, as any insider will acknowledge in retrospect, but that doesn’t make them less relevant. Also, young people will try to extrapolate at least a few years into the future (the time it takes to finish an art degree, for example), and AI will likely get better in those years.
‘Previous waves of digital tech were used to deskill workers and defang smaller competitors’ just isn’t even close to true; it’s far easier to access learning resources, tools, and a final market for your product or skills than it ever has been.
I hate when people look back and say things like ‘technology took away jobs’ when the reality is that it’s what ended the brutal privation and poverty described in works like Jude the Obscure and The Ragged Trousered Philanthropists - imagine how different Jude’s or Robert Tressell’s lives would have been with access to the educational and community organisation resources we all take for granted. Neither of them had the slightest chance to reach their potential, and that was the reality of working-class life in those eras. Today we all have free access to almost endless learning resources on any subject, and for many things we have the tools freely available or at cost thanks to open source: coding, digital art, writing, publishing, film-making… If Joel Haver had been born Robert Tressell or Jude the Obscure then he’d certainly never have made a movie (anachronism aside), but today he and a million other normal people from average lives are able to create art and express themselves freely.
If anything, tech has given fangs to little companies and independent creators. I’ve watched more Joel Haver movies than Marvel movies - one guy and his friends using hobby-grade gear, home computers, and a lot of passion is enough to make something that people all over the world can enjoy. Tech has been fantastic for creators.
And when creators are able to use AI tools to make that even easier and to increase the scope of their creations, it’ll be the big companies that suffer, not the small creative groups full of passion, ideas, and a strong connection to the world they live in.
Sorry if this is a little off topic, but it’s something I’ve been thinking a lot about lately. I think the way I make my photobashes is closer to how people think AI works than the AI actually is. The nature of a photobash is that everything has been cut from photographs, so I make my art by cutting up other people’s art (if you consider free textures, stock photos, and Home Depot, Lowe’s, and Amazon advertisements to be art, though I often grab bits and pieces from farther afield than that), and I’ve kind of been expecting some backlash for a while now, especially over on the solarpunk subreddit, where they really seem to hate AI for the ‘plagiarism machine’ reasons. I think some folks who make collages really enjoy the bits of context the different elements bring with them, but I’m kind of the opposite. I like the way this process strips the bits and pieces of their original context and remixes them into something new. I feel like this is how I’ve always made things, even writing. Nothing is spun into existence out of thin air. I pull concepts, plot beats, character traits, and more out of stuff I love, or often stuff that I think had potential but missed the mark, and jam it together into something new.
Maybe because of that, or because I never had it make the whole scene, AI never felt like a huge departure from my processes, just another tool in the toolbox. If it’s borrowing styles, rules of design, or color palettes, at least it’s not cutting the source up directly. I’ve used a friend’s Midjourney bot a few times in the photobashes, to generate bits and pieces I couldn’t find or wouldn’t use, mostly for in-world artwork. I think it’s pretty amazing, the things it comes up with, and though I wouldn’t use it to generate a whole scene because it wouldn’t get the details right, it’s really useful when I want to include a type of art in a scene but don’t have a specific design in mind. I’ve had it make wood panels carved with leaves and stained glass windows for the kitchen, and a spray-paint mural and a mandala pattern for the parking garage scene. I take what it makes, cut it up, transform it as necessary, and layer it into the scene. I think I mostly use it for in-world art because, like I said, I like the way this process cuts bits out of one picture and gives them a new job in a new picture. But I can’t do that with artwork – to include a carving or a stained glass window would be to include the whole thing, rather than just a piece of it, and dropping the whole thing into a new context feels a lot more like actual theft, or like it could change the original in some way. I don’t know.
I guess my point is that imagination has always felt like this to me? And that people will keep imagining better futures, no matter what tools they coexist with? Heck I’m making my depictions of the future entirely out of photographs of things that exist now. I don’t doubt that there are billionaires and tech types who’d love to remove the artists from the production of art, in order to gain full control over the messages it conveys. But there’s already plenty of lowest-common-denominator mass-consumption art being cranked out by their companies. And they’ll use their money to produce dross or propaganda whether there are other humans in the loop or not. Like a tattoo artist told me once, ‘some days you make art, some days you make rent.’
(I honestly don’t know if this will reduce the motivation for people to learn to draw in order to get the scenes in their heads out where people can see them; I’ve been using image manipulation tools since I was a kid, but I still started teaching myself to draw in college.)
I don’t love when the arguments against AI still treat art like a product, though I acknowledge that in the world we live in, everything is a product. It just feels like another flavor of the kind of capitalist thinking that treats artists as replaceable parts. At the same time, I understand that people need money to survive and to keep learning and improving so they can put art they love out in the world. I know I’m biased because my career isn’t on the line here. I’ve got the luxury of doing this for free, and releasing it for free, because I’m not dependent on it for an income. I can make the solarpunk photobashes CC-BY because there’s no opportunity cost there, and it might help them to spread around and influence the overall messaging of solarpunk art and the first impressions people get of the genre. I’ve made sure not to tag it in any way that prevents AI from consuming it, because if I can infect its perception of solarpunk so the things it makes include values like reuse, so much the better.
I mean, without a prompt, no AI does anything. There always has to be a human on the other end of every tool, be it a stick or an AI. The difference would be whether there’s a guy standing behind me telling me where to point the stick or the AI, or whether it’s my own decision, or the decision of the group affected by the stick/AI.
Sorry Ursula - you are the past now.