As Warren Buffett might quip: only buy what you’d hold if markets closed for a decade.
And once again the conservative sandwich-heavy portfolio pays off for the hungry investor!
The whole thing has a vaguely ex-Catholic vibe where sin is simultaneously the result of evil actions on earth and also something that’s inherently part of your soul as a human being because a dumb woman ate an apple. As someone who was raised in the church to a degree, it never felt unreal and actually resonated pretty hard, but also yeah, it doesn’t make a lot of sense logically.


One of the only reasons I’m hesitant to call Rationalism a cult in its own right is that Yudkowsky and friends always seem to respond to this element of cultiness by saying “oh, let me explain our in-group jargon in exhaustive detail so that you can more or less understand what we’re trying to say” rather than “you just need to buy our book and attend some meetings and talk to the guru and wear this robe…”


This is why I only hang out with groups like “terrible shit” or “bunch of self-satisfied assholes”. This has worked only to my advantage so far.


Also, were you ever actually “asked to leave” as such or did you just start to recognize the bullshit and show yourself out? Or did you, as seems to be the more common trail to sneerclub, drop offline for unrelated reasons and circle back some time later to realize you were no longer 15?


Try to prevent “slums” forming where people who don’t meet your group’s standard congregate (this generally gets more likely the longer you wait before kicking people out)
I think that this nets all of us here at sneerclub an honorable mention on the list. Good job, everyone!


The Forrest Gump of American Weirdos is a pretty solid description here, yeah. Also how had I not heard about the fucking Rajneeshis before now?


The quote that sticks in my head is
You are not expected to believe any of this stuff, but rather to believe in the predatory utility of saying it.


So would me in the morning doing impressions for my cat be Homeopathic Broadway?


Like, I could completely understand being like “I’m not comfortable with you using my name for something like this. Even though you say it’s not about me, there are enough similarities to the character that it could easily mislead people - myself included, as you can see - into thinking otherwise.” There is a reasonable version of this, but insisting “lol someone wrote a play about me” and using it for clout years later is peak sneerable behavior.
Also I want to sneer at Yud not understanding the difference between Off-Off-Broadway and Off-Broadway, but TBH I don’t understand how the different tiers of theatrical prestige connect and can only assume that it’s like the circles of hell in Dante’s Inferno.


Ah, the eternal curse.
“You sound like you lead a very interesting life”
“…yeeeeesss?” (Closes 50 Wikipedia tabs that relate to literally nothing you intend to do)


Charles, in addition to being a great fiction author, is also an occasional guest here on awful.systems. This is a great article from him, but I’m pretty sure it’s done the rounds already. Not that I’m complaining, given how much these guys bitch about science fiction and adjacent subjects.


Contra Blue Monday, I think that we’re more likely to see “AI” stick around specifically because of how useful Transformers are as a tool for other things. I feel like it might take a little bit of time for the AI rebrand to fully lose the LLM stink, but both the sci-fi concept and some of the underlying tools (not GenAI, though) are too robust to actually go away.


I disagree with their conclusions about the ultimate utility of some of these things, mostly because I think they underestimate the impact of the problem. If you’re looking at a ~0.5% chance of throwing out a bad outcome, we should be less worried about failing to filter out the evil than about straight-up errors making it not work. There’s no accountability, and the whole pitch of automating away, say, radiologists is that you don’t have a clinic full of radiologists who can catch those errors. Like, you can’t even get a second opinion if the market is dominated by XrayGPT or whatever, because whoever you would go to is also going to rely on XrayGPT. After a generation or so, where are you even going to find, much less afford, an actual human with the relevant skills? This is the pitch they’re making to investors and the world they’re trying to build.
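To put that ~0.5% in perspective, here’s a back-of-the-envelope sketch. The scan volume and working days are made-up illustrative numbers (and XrayGPT is the hypothetical from above), not real clinical figures:

```python
# Toy arithmetic: what a "small" per-read error rate looks like at clinic scale.
# All numbers here are hypothetical, for illustration only.
error_rate = 0.005       # ~0.5% chance that any single read is a bad outcome
scans_per_day = 200      # assumed daily volume for one imaging clinic
days_per_year = 250      # assumed working days

expected_bad_reads = error_rate * scans_per_day * days_per_year
p_bad_day = 1 - (1 - error_rate) ** scans_per_day

print(f"Expected bad reads per clinic per year: {expected_bad_reads:.0f}")   # -> 250
print(f"Chance of at least one bad read on a given day: {p_bad_day:.0%}")    # -> 63%
```

And in the scenario above, there’s nobody with the relevant skills left to catch any of them.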
Okay but now I need to once again do a brief rant about the framing of that initial post.
the silicon valley technofascists are the definition of good times breed weak men
You’re not wrong about these guys being both morally reprehensible and also deeply pathetic. Please don’t take this as any kind of defense on their behalf.
However, the whole “good times breed weak men” meme is itself fascist propaganda about decadence breeding degeneracy, originally written by a mediocre science fiction author, and has never been a serious theory of history. It’s rooted in the same kind of masculinity-through-violence-as-primary-virtue thinking that leads to those dreams of conquest. I sympathize with the desire to show how pathetic these people are by their own standards, but it’s also critical to not reify the standards themselves in the process.


The whole concept of “race science” is an attempt to smuggle long-discredited ideas from the skull measurement people back into respectable discourse, and it should be opposed as such. Calling it pseudoscience is better, but it’s even better to just call it straight-up racism.
Or: Nazis don’t even deserve the respect we give to cold fusion cranks, free energy grifters, and homeopaths. Their projects and arguments are even less worth acknowledging.


This ties back into the recurring question of drawing boundaries around “AI” as a concept. Too many people blithely accept that it’s just a specific set of machine learning techniques applied to sufficiently large sets of data, in spite of the fact that we’re several AI “cycles” deep, where every 30 years or so (whenever it stops being “retro”) some new algorithm or mechanism is definitely going to usher in Terminator 2: Judgment Day.
This narrow frame focused on LLMs still allows for some discussion of the problems we’re seeing (energy use, training data sourcing, etc.), but it cuts off a lot of the wider conversations about the social, political, and economic causes and impacts of outsourcing the business of being human to a computer.


I feel like there’s got to be a surreal horror movie in there somewhere. Like an AI-assisted Videodrome or something.


This isn’t studying possible questions; it’s memorizing the answer key to the test, being able to identify that the answer to question 5 is “17” but not being able to actually answer it when they change the numbers slightly.
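As a toy illustration of the difference (the questions and both “models” here are made up purely for the sake of the analogy):

```python
# Toy contrast: memorizing an answer key vs. actually doing the work.
# The questions and both "models" are hypothetical, purely illustrative.
answer_key = {"What is 8 + 9?": "17"}  # the one benchmark item it has seen

def memorizer(question: str) -> str:
    # Nails the exact question from the key, faceplants on any variation.
    return answer_key.get(question, "no idea")

def solver(question: str) -> str:
    # Actually computes the sum, so changed numbers don't matter.
    a, b = (int(tok) for tok in
            question.removeprefix("What is ").removesuffix("?").split(" + "))
    return str(a + b)

print(memorizer("What is 8 + 9?"))   # "17"      -- looks like it can do math
print(memorizer("What is 8 + 10?"))  # "no idea" -- numbers changed slightly
print(solver("What is 8 + 10?"))     # "18"
```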
Call the last one A for Agency and turn the acronym into an AI history reference: ELISA.