• 2 Posts
  • 950 Comments
Joined 2 years ago
Cake day: March 22nd, 2024


  • I disagree with their conclusions about the ultimate utility of some of these things, mostly because I think they underestimate the impact of the problem. If you’re looking at a ~0.5% chance of throwing out a bad outcome, we should be less worried about failing to filter out the evil than about straight-up errors making it not work. There’s no accountability, and the whole pitch of automating away, say, radiologists is that you don’t have a clinic full of radiologists who can catch those errors. You can’t even get a second opinion if the market is dominated by XrayGPT or whatever, because whoever you would go to is also going to rely on XrayGPT. After a generation or so, where are you even going to find, much less afford, an actual human with the relevant skills?

    This is the pitch they’re making to investors and the world they’re trying to build.


  • Okay but now I need to once again do a brief rant about the framing of that initial post.

    the silicon valley technofascists are the definition of good times breed weak men

    You’re not wrong about these guys being both morally reprehensible and also deeply pathetic. Please don’t take this as any kind of defense on their behalf.

    However, the whole “good times breed weak men” meme is itself fascist propaganda about decadence breeding degeneracy, originally coined by a mediocre science fiction author, and has never been a serious theory of history. It’s rooted in the same kind of masculinity-through-violence-as-primary-virtue that leads to those dreams of conquest. I sympathize with the desire to show how pathetic these people are by their own standards, but it’s also critical not to reify the standards themselves in the process.



  • This ties back into the recurring question of drawing boundaries around “AI” as a concept. Too many people blithely accept that it’s just a specific set of machine learning techniques applied to sufficiently large sets of data. This in spite of the fact that we’re several AI “cycles” deep, where every 30 years or so (whenever it stops being “retro”) some new algorithm or mechanism is definitely going to usher in Terminator 2: Judgment Day.

    This narrow frame focused on LLMs still allows for some discussion of the problems we’re seeing (energy use, training data sourcing, etc) but it cuts off a lot of the wider conversations about the social, political, and economic causes and impacts of outsourcing the business of being human to a computer.


  • Another winner from Zitron. One of the things I learned working in tech support is that a lot of people tend to assume the computer is a magic black box that relies on terrible, secret magicks to perform its dark alchemy. And while it’s not that the rabbit hole doesn’t go deep, there is a huge difference between the level of information needed to do what I did and the level of information needed to understand what I was doing.

    I’m not entirely surprised that business is the same way, and I hope that in the next few years we have the same epiphany about government. These people want you to believe that you can’t do what they do so that you don’t ask the incredibly obvious questions about why it’s so dumb. At least in tech support I could usually attribute the stupidity to the limitations of computers and misunderstandings from the users. I don’t know what kinda excuse the business idiots and political bullshitters are going to come up with.



  • One of the YouTube comments was actually kind of interesting in trying to think through just how wildly you would need to change the creative process in order to allow for the quirks and inadequacies of this “tool”. It really does seem like GenAI is worse than useless for any kind of artistic or communicative project. If you have something specific you want to say or you have something specific you want to create the outputs of these tools are not going to be that, no matter how carefully you describe it in the prompt. Not only that, but the underlying process of working in pixels, frames, or tokens natively, rather than as a consequence of trying to create objects, motions, or ideas, means that those outputs are often not even a very useful starting point.

    This basically leaves software development and spam as the only two areas I can think of where GenAI has a potential future, because they’re the only fields where the output being interpretable by a computer is at least as important as its actual contents.