• 1 Post
  • 674 Comments
Joined 1 year ago
Cake day: March 22nd, 2024


  • Surely there have to be some cognitive scientists who are at least a little bit less racist who could furnish alternative definitions? The actual definition at issue does seem fairly innocuous from a layman’s perspective: “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.” (Aside: it doesn’t do our credibility any favors that for all the concern about the source I had to track all the way back to Microsoft’s paper to find the quote at issue.) The core issue is obviously that they either took it completely out of context or decided it wasn’t important that their source was explicitly arguing in favor of specious racist interpretations of shitty data. But it also feels like breaking down the idea itself may be valuable. Like, is there even a real consensus that those individual abilities or skills are actually correlated? Is it possible to be less vague than “among other things?” What does it mean to be “more able to learn from experience” or “more able to plan” in a way that is rooted in an innate capacity rather than in the context and availability of good information? And on some level, if that kind of intelligence is a unique and meaningful thing not emergent from context and circumstance, how are we supposed to see it emerge from statistical analysis of massive volumes of training data? (Machine learning models are nothing but context and circumstance.)

    I don’t know enough about the state of non-racist neuroscience or whatever the relevant field is to know if these are even the right questions to ask, but it feels like there’s more room to question the definition itself than we’ve been taking advantage of. If nothing else the vagueness means that we haven’t really gotten any more specific than “the brain’s ability to brain good.”



  • I do appreciate that underneath the overwrought prose and terrible metaphors the AI-generated story seems deeply skeptical of its own existence in a way that the non-generative responses don’t. Like there’s something so fundamental about the disconnect between artificial intelligence and the genuine human experience of grief that it bursts fully formed from the patterns of language. As though Athena herself sprang from Z.E.U.S.'s digital calf to smack the promptfondlers in the back of the head and say “that’s not how this works. That’s not how any of this works.”






  • I actually like the argument here, and it’s nice to see it framed in a new way that might avoid tripping the sneer detectors on people inside or on the edges of the bubble. It’s like I’ve said several times here, machine learning and AI are legitimately very good at pattern recognition and reproduction, to the point where a lot of the problems (including the confabulations of LLMs) are based on identifying and reproducing the wrong pattern from the training data set rather than whatever aspect of the real world it was expected to derive from that data. But even granting that, there’s a whole world of cognitive processes that can be imitated but not replicated by a pattern-reproducer. Given the industrial model of education we’ve introduced, a straight-A student is largely a really good pattern-reproducer, better than any extant LLM, while the sort of work that pushes the boundaries of science forward relies on entirely different processes.








  • I mean, it’s obviously true that games have their own internal structures and languages that aren’t always obvious without knowledge or context, and the FireRed comparison is a neat case where you can see that language improving as designers gained both more tools (here meaning colors and pixels) and more experience in using them. But even in the LW thread they mention that when humans run into that kind of problem they don’t just act randomly for 6 hours. Either they come up with some systematic approach to solving the problem, they walk away from the game to ask for help, or something else. You also have the metacognition to easily understand “that rug at the bottom marks the exit” once it’s explained, which I’m pretty sure the LLM doesn’t have the ability to process. It’s not even like a particularly dumb 6-year-old. Even if the 6-year-old is prone to similar levels of over-matching and pattern-recognition errors, they have an actual conscious brain to help solve those problems. The whole thing shows once again that pattern recognition and reproduction can get you impressively far in terms of imitating thought, but there’s a world of difference between that imitation and the real deal.


  • Also I think he doesn’t understand MAD, like, at all. The point isn’t that you can strike your enemy’s nuclear infrastructure and prevent them from fighting back. In fact, that’s the opposite of the point. MAD as a doctrine is literally designed around the fact that you can’t do this, which is why the Soviets freaked out when it looked like we were seriously pursuing SDI.

    Instead, the point was that nuclear weapons were so destructive and so hard to defend against that any move against the sovereignty of a nuclear power would result in a counter-value strike, and whatever strategic aims were served by the initial aggression would have to be weighed against something between the death of millions of civilians in the nuclear annihilation of major cities and straight-up ending human civilization, or indeed all life on earth.

    Also if you wanted to reinstate MAD I think that the US, Russia, and probably China have more than enough nukes to make it happen.