- 24 Posts
- 224 Comments
corbin@awful.systems (OP) to SneerClub@awful.systems • OpenAI investor falls for GPT's SCP-style babble (English)
0 points · 4 months ago
Thanks for linking that. His point about teenagers and fiction is interesting to me because I started writing horror on the Internet in the pre-SCP era, when I was maybe 13 or 14, but I didn’t recognize the distinction between fiction and non-fiction until I was about 28. I think that it’s easier for teenagers to latch onto the patterns of jargon than it is for them to imagine the jargon as describing a fictional world that has non-fictional amounts of descriptive detail.
corbin@awful.systems (OP) to SneerClub@awful.systems • OpenAI investor falls for GPT's SCP-style babble (English)
0 points · 4 months ago
The orange site has a thread. Best sneer so far is this post:
So you know when you’re playing rocket ship in the living room but then your mom calls out “dinner time” and the rocket ship becomes an Amazon cardboard box again? Well this guy is an adult, and he’s playing rocket ship with chatGPT. The only difference is he doesn’t know it and there’s no mommy calling him for dinner time to help him snap out of it.
corbin@awful.systems to SneerClub@awful.systems • Are EA billionaire philanthropists actually effective in their 'altruism'? (spoilers: no) (English)
0 points · 4 months ago
That’s first-order ethics. Some of us have second-order ethics. The philosophical introduction to this is Smilansky’s designer ethics. The wording is fairly odious, but the concept is simple: e.g. Heidegger was a Nazi, and that means that his opinions are suspect even if competently phrased and argued. A common example of this is discounting scientific claims put forth by creationists, intelligent-design proponents, and other apologists; they are arguing with a bias and it is fair to examine that bias.
corbin@awful.systems to SneerClub@awful.systems • Are EA billionaire philanthropists actually effective in their 'altruism'? (spoilers: no) (English)
0 points · 4 months ago
You now have to argue that oxidative stress isn’t suffering. Biology does not allow humans to divide the world into regions where suffering can be experienced and regions where it is absent. (The other branch contradicts the lived experience of anybody who has actually raised a sourdough starter; it is a living thing which requires food, water, and other care to remain homeostatic, and which changes in flavor due to environmental stress.)
Worse, your framing fails to meet one of the oldest objections to Singer’s position, one which I still consider a knockout: you aren’t going to convince the cats to stop eating intelligent mammals, and evidence suggests that cats suffer when force-fed a vegan diet.
When you come to Debate Club, make sure that your arguments are actually well-lubed and won’t squeak when you swing them. You’ve tried to clumsily replay Singer’s arguments without understanding their issues and how rhetoric has evolved since then. I would suggest watching some old George Carlin reruns; the man was a powerhouse of rhetoric.
corbin@awful.systems to SneerClub@awful.systems • Are EA billionaire philanthropists actually effective in their 'altruism'? (spoilers: no) (English)
0 points · 4 months ago
Let’s do veganism now. I’m allowed to do this because I still remember what lentil burgers taste like from when I dated a vegan at university. So, as with most vegans, Singer is blocked by the classical counting paradoxes from declaring that a certain number of eukaryotic cells makes something morally inedible, and the standard list of counterexamples works just fine for him. Also, I hear he eats shellfish, and geoducks are bigger than e.g. chicks or kittens (or whatever else we might not want to eat). I don’t know how he’d convince me that a SCOBY is fundamentally not deserving of the same moral consideration either; I think we just do it by convention, to avoid the cosmic horror of thinking about how many yeast cells must die to make a loaf of bread, and most practicing vegans aren’t even willing to pray for all the bugs that they accidentally squish.
I agree with everything else he puts forward, but it boils down to buying organic-farmed food and discouraging factory farming. Singer is heavy on sentiment but painfully light on biology.
corbin@awful.systems to SneerClub@awful.systems • Are EA billionaire philanthropists actually effective in their 'altruism'? (spoilers: no) (English)
0 points · 4 months ago
Singer’s original EA argument, concerning the Bengal famine, has two massive holes, one of which survives in his simplified setup. I’m going to explain because it’s funny; I’m not sure if you’ve been banned yet.
First, in the simplified setup, Singer says: there is a child drowning in the river! You must jump into the river, ruining your clothes, or else the child will drown. Further, there’s no time for debate; if you waste time talking, then you forfeit the child. My response is to grab Singer by the belt buckle and collar and throw him into the river, and then strip down and save the child, ignoring whatever happens to Singer. My reasoning is that I don’t like epistemic muggers and I will make choices that punish them in order to dissuade them from approaching me, but I’ll still save the child afterwards. In terms of real life, it was a good call to prosecute SBF regardless of any good he may have done.
Second, in the Bangladesh setup, Singer says: everybody must donate to one specific charity because the charity can always turn more donations into more delivered food. Accepting that premise, there’s a self-reference issue: if one is an employee of the charity, do they also have to donate? If we do the case analysis and discard the paradoxical cases, we are left with the repugnant conclusion: everybody ought to donate not just their money but also all of their labor to the charity, at the cheapest prices possible while not starving themselves. Maybe I’m too much of a communist, but I’d rather just put rich people’s heads on pikes and issue a food guarantee.
It’s worth remembering that the actual famine was mostly a combination of local-government failures and the USA withholding food because Bangladesh traded with Cuba; maybe Singer’s hand-wringing over the donation strategies of wealthy white moderates is misplaced.
corbin@awful.systems to TechTakes@awful.systems • Meta beats Kadrey, AI training was fair use — what this means (English)
1 point · 5 months ago
Read carefully. On p1-2, the judge makes it clear that “the incentive for human beings to create artistic and scientific works” is “the ability of copyright holders to make money from their works”; to the law, there isn’t any other reason to publish art. This is why I’m so dour on copyright, folks; it’s not for you who love to make art and prize it for its cultural impact and expressive power, but for folks who want to trade art for money.
On p3, a contrast appears between Chhabria and Alsup (yes, that Alsup); the latter knows what a computer is and how to program it, and this makes him less respectful of copyright overall. Chhabria doesn’t really hide that they think Meta didn’t earn their summary judgment, presumably because they disagree with Alsup about whether this is a “competitive or creative displacement.” That’s fair given the central pillar of the decision on p4:
Llama is not capable of generating enough text from the plaintiffs’ books to matter, and the plaintiffs are not entitled to the market for licensing their works as AI training data.
An analogy might make this clearer. Suppose a transient person on a street corner is babbling. Occasionally they spout what sounds like a quote from a Star Wars film. Intrigued, we prompt the transient to recite the entirety of Star Wars, and they proceed to mostly recreate the original film, complete with sound effects and voice acting, only getting a few details wrong. Does it matter whether the transient paid to watch the original film (as opposed to somebody else paying the fee)? No, their recreation might be candid and yet not faithful enough to infringe. Is Lucas entitled to a licensing fee for every time the transient happens to learn something about Star Wars? Eh, not yet, but Disney’s working on it. This is why everybody is so concerned about whether the material was pirated, regardless of how it was paid for; they want to say that what’s disallowed is not the babbling on the street but the access to the copyrighted material itself.
Almost every technical claim on p8-9 is simplified to the point of incorrectness. They are talking points about Transformers turned into aphorisms and then axioms. The wrongest claim is on p9, that “to be able to generate a wide range of text … an LLM’s training data set must be large and diverse” (it need only be diverse, not large), followed by the claim that an LLM’s “memory” must be trained on books or equivalent “especially valuable training data” in order to “work with larger amounts of text at once” (conflating hyperparameters with learned parameters). These claims show how the judge fails to actually engage with the technical details and thus paints with a broad brush dipped in the wrong color.
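To make that conflation concrete, here is a minimal sketch, assuming the Hugging Face transformers library with the small public gpt2 checkpoint as an arbitrary example (nothing from the case): the context window is fixed before training ever starts, while the learned parameters are what training actually adjusts.

```python
# Hyperparameter vs. learned parameters, sketched with an arbitrary
# public model; "gpt2" is an assumption for illustration only.
from transformers import AutoConfig, AutoModelForCausalLM

cfg = AutoConfig.from_pretrained("gpt2")
# How much text the model can attend to at once: a hyperparameter,
# chosen by the architects before any training data is seen.
print("context window:", cfg.n_positions)

lm = AutoModelForCausalLM.from_pretrained("gpt2")
# The weights fitted during training; books can change these values,
# but they cannot enlarge the context window above.
print("learned parameters:", lm.num_parameters())
```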
On p12, the technical wrongness overflows. Any language model can be forced to replicate a copyrighted work, or to avoid replication, by sampling techniques; this is why perplexity is so important as a metric. What would have genuinely been interesting is whether Llama is low-perplexity on the copyrighted works, not the rate of exact replications, since that’s the key to getting Llama to produce unlimited Harry Potter slash or whatever.
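If you want to poke at this yourself, here is a rough sketch of the perplexity measurement I mean, again assuming the Hugging Face transformers library, with gpt2 standing in for Llama and a public-domain sentence standing in for the plaintiffs’ text; both substitutions are mine.

```python
# A sketch only: perplexity of a passage under a causal language model.
# "gpt2" stands in for Llama; the passage is a placeholder.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text: str) -> float:
    # Score the passage left to right; the loss is the average surprise
    # (cross-entropy) per token, and perplexity is its exponential.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
print(perplexity(lm, tok, "It was the best of times, it was the worst of times."))
```

Low numbers mean the model finds the text unsurprising, which is exactly the memorization signal that counting exact replications misses.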
On p17 the judge ought to read up on how Shannon and Markov initially figured out information theory. LLMs read like Shannon’s model, and in that sense they’re just like humans: left to right, top to bottom, chunking characters into words, predicting shapes and punctuation. Pretending otherwise is powdered-wig sophistry or perhaps robophobia.
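For anybody who hasn’t read Shannon: here is a toy character-level Markov babbler in the spirit of his experiments, reading left to right and predicting the next character from the last few. That is the whole trick, just at a vastly smaller scale; the corpus below is a placeholder.

```python
# A toy character-level Markov model: read left to right, predict the
# next character from the previous `order` characters.
import random
from collections import defaultdict

def train(text: str, order: int = 3):
    # Map each `order`-character context to the characters seen after it.
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def babble(model, seed: str, order: int = 3, length: int = 80) -> str:
    # Generate by repeatedly sampling a plausible next character.
    out = seed
    for _ in range(length):
        nexts = model.get(out[-order:])
        if not nexts:
            break
        out += random.choice(nexts)
    return out

corpus = "the quick brown fox jumps over the lazy dog. " * 20  # placeholder
print(babble(train(corpus), "the"))
```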
On p23 Meta cites fuckin’ Sega v. Accolade! This is how I know y’all don’t read the opinions; you’d be hyped too. I want to see them cite Galoob next. For those of you who don’t remember the 90s, the NES and Genesis were video game consoles, and these cases established our right to emulate them and write our own games for them.
p28-36 is the judge giving free legal advice. I find their line of argumentation tenuous. Consider Minions; Minions are bad, Minions are generic, and Minions can be used to crank out infinite amounts of slop. But, as established at the top, whoever owns Minions has the right to profit from Minions, and that is the lone incentive by which they go to market. However, Minions are arbitrary; there’s no reason why they should do well in the market, given how generic and bad they are. So if we accept their argument then copyright becomes an excuse for arbitrary winners to extract rent from cultural artifacts. For a serious example, look up the ironic commercialization of the Monopoly brand.
corbin@awful.systems to TechTakes@awful.systems • Anthropic AI wins broad fair use for training! — but not on pirated books (English)
1 point · 5 months ago
Top-level commenters would do well to read Authors Guild v. Google, from two decades ago. They’re also invited to rend their garments and gnash their teeth at Google, if they like.
corbin@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 29th June 2025 (English)
20 points · 5 months ago
Last Week Tonight’s rant of the week is about AI slop. A YouTube video is available here. Their presentation is sufficiently down-to-earth to be shareable with parents and extended family, focusing on fake viral videos spreading via Facebook, Instagram, and Pinterest, and dissecting several examples of slop in order to help inoculate the audience.
corbin@awful.systems to TechTakes@awful.systems • Disney sues AI image generator Midjourney (English)
23 points · 5 months ago
What a deeply dishonorable lawsuit. The complaint is essentially that Disney and Universal deserve to be big powerful movie studios that employ and systematically disenfranchise “millions of” artists (p8).
Disney claims authorship over Darth Vader (Lucas) and Yoda (Oz); Elsa and Ariel (Andersen); the folk characters Aladdin, Mulan, and Snow White; Lightning McQueen and Buzz Lightyear (Lasseter et al.); Sully (Gerson & Stanton); Iron Man (Lee, Kirby, et al.); and Homer Simpson (Groening). Not only did Disney not design or produce any of these characters, it purchased those rights. I will give Universal partial credit for not claiming to invent any of their infamous movie monsters, but they do claim to have created Shrek (Steig). Still, this is some original-character-do-not-steal snottiness; these avaricious executives and attorneys appropriated art from artists and are claiming it as their own so that they can sue another appropriator.
Here is a sample of their attitude, p16 of the original complaint:
Disney’s copyright registrations for the entertainment properties in The Simpsons franchise encompass the central characters within.
See, they’re the original creator and designated beneficiary, because they have Piece of Paper, signed by Government Authority, and therefore they are Owner. Who the fuck are Matt Groening or Tracey Ullman?
I will not contest Universal’s claim to Minions.
One weakness of the claim is that it’s not clear whether Midjourney infringes, Midjourney’s subscribers infringe, or Midjourney infringes when collaborating with its subscribers. It seems like they’re going to argue that Midjourney commits the infringing act, although p104 contains hedges that will allow Disney to argue either way. Another weakness is the insistence that Midjourney could filter infringing queries, but chooses not to; this is a standard part of amplifying damages in copyright claims but might not stand up under scrutiny since Midjourney can argue that it’s hard to e.g. tell the difference between infringing queries and parodic or satirical queries which infringe but are permitted by fair use. On the other hand, this lawsuit could be an attempt to open a new front in Disney’s long-standing attempt to eradicate fair use.
As usual, I’m not defending Midjourney, who I think stand on their own demerits. But I’m not ever going to suck Disney dick given what they’ve done to the animation community. I wish y’all would realize the folly of copyright already.
corbin@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 1st June 2025 (English)
36 points · 6 months ago
I’m gonna be polite, but your position is deeply sneerworthy; I don’t really respect folks who don’t read. The article has quite a few quotes from neuroscientist Anil Seth (not to be confused with AI booster Anil Dash) who says that consciousness can be explained via neuroscience as a sort of post-hoc rationalizing hallucination akin to the multiple-drafts model; his POV helps deflate the AI hype. Quote:
There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious. Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as “blindly optimistic and driven by human exceptionalism.” … “We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn’t mean they go together in general, for example in animals.”
At the end of the article, another quote explains that Seth is broadly aligned with us about the dangers:
In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won’t be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. “It will mean that we trust these things more, share more data with them and be more open to persuasion.” But the greater risk from the illusion of consciousness is a “moral corrosion”, he says. “It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives” – meaning that we might have compassion for robots, but care less for other humans.
A pseudoscience has an illusory object of study. For example, parapsychology studies non-existent energy fields outside the Standard Model, and criminology asserts that not only do minds exist but some minds are criminal and some are not. Robotics/cybernetics/artificial intelligence studies control loops and systems with feedback, which do actually exist; further, the study of robots directly leads to improved safety in workplaces where robots can crush employees, so it’s a useful science even if it turns out to be ill-founded. I think that your complaint would be better directed at specific AGI position papers published by techbros, but that would require reading. Still, I’ll try to salvage your position:
Any field of study which presupposes that a mind is a discrete isolated event in spacetime is a pseudoscience. That is, fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific. This position has no open evidence against it (because it’s definitional!) and aligns with the expectations of Seth and others. It is compatible with definitions of mind given by Dennett and Hofstadter. It immediately forecloses the possibility that a computer can think or feel like humans; at best, maybe a computer could slowly poorly emulate a connectome.
corbin@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 25th May 2025 (English)
6 points · 6 months ago
Oh, sorry. We’re in agreement and my sentence was poorly constructed. The computation of a matrix multiplication usually requires at least pencil and paper, if not a computer. I can’t compute anything larger than a 2 × 2. But I’ll readily concede that Strassen’s specific trick is simple enough that a mentalist could use it.
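For the curious, Strassen’s trick for a 2 × 2 product is seven multiplications instead of eight, plus some bookkeeping; a sketch (the function name is mine):

```python
# Strassen's 2x2 trick: seven products instead of the naive eight.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # The seven Strassen products.
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine into the four entries of the product.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```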
corbin@awful.systems to TechTakes@awful.systems • You can’t feed generative AI on ‘bad’ data then filter it for only ‘good’ data (English)
7 points · 6 months ago
Only the word “theoretical” is outdated. The Beeping Busy Beaver problem is hard even with a Halting oracle, and we have a corresponding Beeping Busy Beaver Game.
corbin@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 25th May 2025 (English)
61 points · 6 months ago
Your understanding is correct. It’s worth knowing that the matrix-multiplication exponent actually controls multiple different algorithms. I stubbed a little list a while ago; important examples include several graph-theory algorithms as well as parsing for context-free languages. There’s also a variant of P vs NP for this specific problem, because we can verify that a matrix is a product in quadratic time.
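That quadratic-time verification is Freivalds’ algorithm: multiply both sides by a random vector, which costs O(n²) per trial, and repeat to drive down the error probability. A sketch:

```python
# Freivalds' algorithm: randomized check that C = A x B, without
# performing the full cubic-time multiplication.
import random

def freivalds(A, B, C, trials: int = 10) -> bool:
    n = len(A)
    for _ in range(trials):
        # Random 0/1 vector; each trial is three matrix-vector
        # products, i.e. O(n^2) arithmetic.
        r = [random.randint(0, 1) for _ in range(n)]
        br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        abr = [sum(A[i][j] * br[j] for j in range(n)) for i in range(n)]
        cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if abr != cr:
            return False  # definitely not the product
    return True  # probably the product; error probability <= 2**-trials

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds(A, B, [[19, 22], [43, 50]]))  # True
print(freivalds(A, B, [[19, 22], [43, 51]]))  # almost surely False
```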
That Reddit discussion contains mostly idiots, though. We expect an iterative sequence of ever-more-complicated algorithms with ever-slightly-better exponents, approaching quadratic time in the infinite limit. We also expected that, at some point, a computer would be required to compute those iterates; personally I think Strassen’s approach only barely fits inside a brain and the larger approaches can’t be managed by humans alone.
corbin@awful.systems to TechTakes@awful.systems • You can’t feed generative AI on ‘bad’ data then filter it for only ‘good’ data (English)
23 points · 6 months ago
To be fair, I’m skeptical of the idea that humans have minds or perform cognition outside of what’s known to neuroscience. We could stand to be less chauvinist and exceptionalist about humanity. Chatbots suck but that doesn’t mean humans are good.
corbin@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 25th May 2025 (English)
2 points · 6 months ago
Read it to the end and then re-read 2009’s The Gervais Principle. I hope Ed eventually comes back to Rao’s rant because they complement each other perfectly; Zitron’s Business Idiot is Rao’s Clueless! What Rao brings to the table is an understanding that Sociopaths exist and steer the Clueless, and also that the ratio of (visible) Clueless to Sociopaths is an indication of the overall health of an (individual) business; Zitron’s argument is then that we are currently in an environment (the “Rot Economy” in his writing) which is characterized by mostly Clueless business leaders.
Then re-read Doctorow’s 2022 rant Social Quitting, which introduced “enshittification”, an alternate understanding of Rao’s process. To Rao, a business pivots from Sociopath to Clueless leadership by mere dilution, but for Doctorow, there’s a directed market pressure which eliminates (or M&As) any businesses not willing to give up some Sociopathy in favor of the more generally-accepted Clueless principles. Concretely relevant to this audience, note how Sociopathic approaches to cryptocurrency-oriented banking have failed against Clueless GAAP accounting, not just at the regulatory level but at the level of handshakes between small-business CEOs.
Somebody could start a new flavor of Marxism here, one which (to quote an old toot of mine @corbin@defcon.social that I can’t find) starts by understanding that management is a failed paradigm of production, and which quotes all of these various managers (Galloway, Rao, and Zitron were all management bros at one point, as were their heroes Scott Adams and Mike Judge) as having a modicum of insight cloaked in MBA-speak.
corbin@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 18th May 2025 (English)
8 points · 6 months ago
I’m trying to remember who said it, but there’s a Mastodon thread somewhere arguing that it should be called Theocracy. The introduction would talk about the quiverfull movement, the Costco would become a megachurch (“Welcome to church. Jesus loves you.”), etc. It sounds straightforward and depressing.
corbin@awful.systems to TechTakes@awful.systems • If AI is so good at coding … where are the open source contributions? (English)
8 points · 6 months ago
You may be thinking of checkers. Chess is still open and unsolved, although there is strong evidence that the player who goes first has a large advantage.
corbin@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 18th May 2025 (English)
7 points · 6 months ago
I adjusted her ESAS downward by 5 points for questioning me, but 10 points upward for doing it out of love.
Oh, it’s a mockery all right. This is so fucking funny. It’s nothing less than the full application of SCP’s existing temporal narrative analysis to Big Yud’s philosophy. This is what they actually believe. For folks who don’t regularly read SCP, any article about reality-bending is usually a portrait of a narcissist, and the body horror is meant to give analogies for understanding the psychological torture they inflict on their surroundings; the article meanders and takes its time because there’s just so much worth mocking.
This reminded me that SCP-2718 exists. 2718 is a Basilisk-class memetic cognitohazard; it will cause distress in folks who have been sensitized to Big Yud’s belief system, and you should not click if you can’t handle that. But it shows how these ideas weren’t confined to LW.
Yeah, that’s the most surprising part of the situation: not only are the SCP-8xxx series finding an appropriate meta by discussing the need to clean up SCP articles under ever-increasing pressure, but all of the precautions revolving around SCP-055 and SCP-914 turned out to be fully justified given what the techbros are trying to summon. It is no coincidence that the linked thread is by the guy who wrote SCP-3125, whose moral is roughly to not use blueprints from five-dimensional machine elves to create memetic hate machines.