

Making a startup named Sphinctr
Only Bayes Can Judge Me
You’re welcome!
Pre-watch: With the prior knowledge that her main job prior to this was wrestling promoter, the ol’ overtonussy is preeeetty loose
Post-watch: lol
New animation (1:20) by theforestjar about AI art.
Rupi Kaur should sue
I don’t know that we can offer you a good world, or even one that will be around for all that much longer. But I hope we can offer you a good childhood. […]
When “The world is gonna end soon so let’s just rawdog from now on” gets real
How much of this is the AI bubble collapsing vs. Ohiophobia
slophistry
JFC I click on the rocket alignment link, it’s a yud dialogue between “alfonso” and “beth”. I am not dexy’ed up enough to read this shit.
Spooks as a service
Utterly rancid linkedin post:
Why can planes “fly” but AI cannot “think”?
An airplane does not flap its wings. And an autopilot is not the same as a pilot. Still, everybody is ok with saying that a plane “flies” and an autopilot “pilots” a plane.
This is the difference between the same system and a system that performs the same function.
When it comes to flight, we focus on function, not mechanism. A plane achieves the same outcome as birds (staying airborne) through entirely different means, yet we comfortably use the word “fly” for both.
With Generative AI, something strange happens. We insist that only biological brains can “think” or “understand” language. In contrast to planes, we focus on the system, not the function. When AI strings together words (which it does, among other things), we try to create new terms to avoid admitting similarity of function.
When we use a verb to describe an AI function that resembles human cognition, we are immediately accused of “anthropomorphizing.” In some way, popular opinion dictates that no system other than the human brain can think.
I wonder: why?
lmao fuck off
It’s an anti-fun version of listening to dark side of the moon while watching the wizard of oz.
You didn’t link to the study; you linked to the PR release for the study. This and this are the papers linked in the blog post.
Note that the papers haven’t been published anywhere other than on Anthropic’s online journal. Also, what the papers are doing is essentially tea leaf reading. They take a look at the swill of tokens, point at some clusters, and say, “there’s a dog!” or “that’s a bird!” or “bitcoin is going up this year!”. It’s all rubbish dawg
This needed a TW jfc (jk, uh, sorta)
“Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as ‘trivial’, even when their validity was crucial.”
LLMs achieve reasoning level of average rationalist
I’ve already depicted you as the virgin Robespierre… and that’s the limit of my knowledge wrt figures in the french revolution.
Yeah, accelerationism before e/acc always had the consequential component of “society breaks down as a result of the rapidly worsening situation”. So like many things out there, tech people have co-opted this term and corrupted its meaning. M18n, in his deranged e/acc post, decided that the accelerationism part just meant the gotta go fast part; his assertion was that more tech fast == more society gooder, with no mention of “democracy collapses and the tech bros take over”
Banger meme from artist Victoria Ying
It’s a scene from White Lotus. A man and a woman are lying on beach chairs, having a conversation.
Panel 1, Man: ‘Why can’t you just like my generative AI “art”?’
Panel 2, Woman: ‘You have to be vulnerable enough to be bad at something to be good at it, but you’re too much of a coward.’
Panel 3, Woman: ‘Because you’re soulless.’
Panel 4: Man is speechless, visibly shook
Don’t look up who the president is, it might come as a shock /s