

The quote that sticks in my head is
You are not expected to believe any of this stuff, but rather to believe in the predatory utility of saying it.


Ah, the eternal curse.
“You sound like you lead a very interesting life”
“…yeeeeesss?” (Closes 50 Wikipedia tabs that relate to literally nothing you intend to do)


Charles, in addition to being a great fiction author, is also an occasional guest here on awful.systems. This is a great article from him, but I’m pretty sure it’s done the rounds already. Not that I’m complaining, given how much these guys bitch about science fiction and adjacent subjects.


Contra Blue Monday, I think that we’re more likely to see “AI” stick around specifically because of how useful Transformers are as a tool for other things. I feel like it might take a little bit of time for the AI rebrand to fully lose the LLM stink, but both the sci-fi concept and some of the underlying tools (not GenAI, though) are too robust to actually go away.


I disagree with their conclusions about the ultimate utility of some of these things, mostly because I think they underestimate the impact of the problem. If you’re looking at a ~0.5% chance of throwing out a bad outcome, we should be less worried about failing to filter out the evil than about straight-up errors making it not work. There’s no accountability, and the whole pitch of automating away, say, radiologists is that you don’t have a clinic full of radiologists who can catch those errors. Like, you can’t even get a second opinion if the market is dominated by XrayGPT or whatever, because whoever you would go to is also going to rely on XrayGPT. After a generation or so, where are you even going to find, much less afford, an actual human with the relevant skills? This is the pitch they’re making to investors and the world they’re trying to build.
Okay but now I need to once again do a brief rant about the framing of that initial post.
the silicon valley technofascists are the definition of good times breed weak men
You’re not wrong about these guys being both morally reprehensible and also deeply pathetic. Please don’t take this as any kind of defense on their behalf.
However, the whole “good times breed weak men” meme is itself fascist propaganda about decadence breeding degeneracy, originally penned by a mediocre science fiction author, and it has never been a serious theory of history. It’s rooted in the same kind of masculinity-through-violence-as-primary-virtue thinking that leads to those dreams of conquest. I sympathize with the desire to show how pathetic these people are by their own standards, but it’s also critical not to reify the standards themselves in the process.


The whole concept of “race science” is an attempt to smuggle long-discredited ideas from the skull measurement people back into respectable discourse, and it should be opposed as such. Calling it pseudoscience is better, but it’s even better to just call it straight-up racism.
Or: Nazis don’t even deserve the respect we give to cold fusion cranks, free energy grifters, and homeopaths. Their projects and arguments are even less worth acknowledging.


This ties back into the recurring question of drawing boundaries around “AI” as a concept. Too many people just blithely accept that it’s a specific set of machine learning techniques applied to sufficiently large sets of data. This in spite of the fact that we’re several AI “cycles” deep, where every 30 years or so (whenever it stops being “retro”) some new algorithm or mechanism is definitely going to usher in Terminator 2: Judgment Day.
This narrow frame focused on LLMs still allows for some discussion of the problems we’re seeing (energy use, training data sourcing, etc) but it cuts off a lot of the wider conversations about the social, political, and economic causes and impacts of outsourcing the business of being human to a computer.


I feel like there’s got to be a surreal horror movie in there somewhere. Like an AI-assisted Videodrome or something.


This isn’t studying possible questions, this is memorizing the answer key to the test and being able to identify that the answer to question 5 is “17” but not being able to actually answer it when they change the numbers slightly.


God, I remember having to cite RFCs at other vendors when I worked in support, and it was never not a pain in the ass to try and find the right line that described the appropriate feature. And then when I was done, I knew I sounded like this even as I hit send anyway.


It’s kind of a shame to have to downgrade Gary to “not wrong, but kind of a dick” here. Especially because his sneer game as shown at the end there is actually not half bad.


Another winner from Zitron. One of the things I learned working in tech support is that a lot of people tend to assume the computer is a magic black box that relies on terrible, secret magicks to perform its dark alchemy. And while it’s not that the rabbit hole doesn’t go deep, there is a huge difference between the level of information needed to do what I did and the level of information needed to understand what I was doing.
I’m not entirely surprised that business is the same way, and I hope that in the next few years we have the same epiphany about government. These people want you to believe that you can’t do what they do so that you don’t ask the incredibly obvious questions about why it’s so dumb. At least in tech support I could usually attribute the stupidity to the limitations of computers and misunderstandings from the users. I don’t know what kinda excuse the business idiots and political bullshitters are going to come up with.


In a world where technofascism stalks the halls of power like a fedora-wearing xenomorph it is good to see a reminder of the original context of these discussions: making Yudkowsky and friends feel important without ever actually doing anything important.


One of the YouTube comments was actually kind of interesting in trying to think through just how wildly you would need to change the creative process in order to allow for the quirks and inadequacies of this “tool”. It really does seem like GenAI is worse than useless for any kind of artistic or communicative project. If you have something specific you want to say or you have something specific you want to create the outputs of these tools are not going to be that, no matter how carefully you describe it in the prompt. Not only that, but the underlying process of working in pixels, frames, or tokens natively, rather than as a consequence of trying to create objects, motions, or ideas, means that those outputs are often not even a very useful starting point.
This basically leaves software development and spam as the only two areas I can think of where GenAI has a potential future, because they’re the only fields where the output being interpretable by a computer is just as important as, if not more important than, whatever its actual contents are.


That’s fucking abominable. I was originally going to ask why anyone would bother throwing their slop on Newgrounds of all sites, but given the business model here I think we can be pretty confident they were hoping to use it to advertise.
Also, fully general bullshit detection question no. 142 applies: if this turnkey game studio works as well as you claim, why are you selling it to me instead of using it yourself? (Hint: it’s because it doesn’t actually work.)


Alex Avila really is out here fully being the postmodern neomarxist Jordan Peterson warned you about and I am so goddamn here for it.


Also tell me more about how you don’t have a lower-class or nonwhite-coded accent.


The whole list of “improved” sources is a fascinating catalogue of preprints, pop sci(-fi) schlock, and credible-sounding vanity publishers. And even most of those appear to reference “inner alignment” as a small part of some larger thing, which I would expect to merit something like a couple of sentences in other articles. Ideally ones that start with “so there’s this one weird cult that believes…”
I’m still allowed to dream, right?
The Forrest Gump of American Weirdos is a pretty solid description here, yeah. Also how had I not heard about the fucking Rajneeshis before now?