• 2 Posts
  • 959 Comments
Joined 2 years ago
Cake day: March 22nd, 2024



  • The whole thing has a vaguely ex-Catholic vibe where sin is simultaneously the result of evil actions on earth and also something that’s inherently part of your soul as a human being because a dumb woman ate an apple. As someone who was raised in the church, to a degree it never felt unreal and actually resonated pretty hard, but yeah, it doesn’t make a lot of sense logically.

  • Like, I could completely understand being like “I’m not comfortable with you using my name for something like this. Even though you say it’s not about me there are enough similarities to the character that it could easily mislead people - myself included, as you can see - into thinking otherwise.” There is a reasonable version of this, but insisting that “lol someone wrote a play about me” and using it for clout years later is peak sneerable behavior.

    Also I want to sneer at Yud not understanding the concept of Off-Off-Broadway vs Off-Broadway but TBH I don’t understand the way that different tiers of theatrical prestige connect and can only assume that it’s like the circles of hell in Dante’s Inferno.


  • I disagree with their conclusions about the ultimate utility of some of these things, mostly because I think they underestimate the impact of the problem. If you’re looking at a ~.5% chance of a bad outcome, we should be less worried about failing to filter out the evil than about straight-up errors making it not work. There’s no accountability, and the whole pitch of automating away, say, radiologists is that you don’t have a clinic full of radiologists who can catch those errors. You can’t even get a second opinion if the market is dominated by XrayGPT or whatever, because whoever you would go to is also going to rely on XrayGPT. After a generation or so, where are you even going to find, much less afford, an actual human with the relevant skills? This is the pitch they’re making to investors and the world they’re trying to build.


  • Okay but now I need to once again do a brief rant about the framing of that initial post.

    the silicon valley technofascists are the definition of good times breed weak men

    You’re not wrong about these guys being both morally reprehensible and also deeply pathetic. Please don’t take this as any kind of defense on their behalf.

    However, the whole “good times breed weak men” meme is itself fascist propaganda about decadence breeding degeneracy, originally written by a mediocre science fiction author, and has never been a serious theory of history. It’s rooted in the same kind of masculinity-through-violence-as-primary-virtue that leads to those dreams of conquest. I sympathize with the desire to show how pathetic these people are by their own standards, but it’s also critical not to reify the standards themselves in the process.



  • This ties back into the recurring question of drawing boundaries around “AI” as a concept. Too many people blithely accept that it’s just a specific set of machine learning techniques applied to sufficiently large sets of data. This in spite of the fact that we’re several AI “cycles” deep, where every 30 years or so (whenever it stops being “retro”) some new algorithm or mechanism is definitely going to usher in Terminator 2: Judgment Day.

    This narrow frame focused on LLMs still allows for some discussion of the problems we’re seeing (energy use, training data sourcing, etc.), but it cuts off a lot of the wider conversations about the social, political, and economic causes and impacts of outsourcing the business of being human to a computer.