• 6 Posts
  • 210 Comments
Joined 2 years ago
Cake day: August 29th, 2023

  • even assuming sufficient computation power, storage space, and knowledge of physics and neurology

    but sufficiently detailed simulation is something we have no reason to think is impossible.

    So, I actually agree broadly with you on the abstract principle, but I’ve increasingly come around to it being computationally intractable for various reasons. But even if functionalism is correct…

    • We don’t have the neurological knowledge to do a neural-level simulation, and actually simulating all the neural features properly in full detail would be extremely computationally expensive, well beyond the biggest supercomputers we have now. And “Moore’s law” (scare quotes deliberate) has been slowing down enough that I don’t think we’ll get there.

    • A simulation from the physics level up is even more out of reach in terms of computational power required.

    As you say:

    I think there would be other, more efficient means well before we get to that point

    We really, really don’t have the neuroscience/cognitive science to find a more efficient way. And it is possible all of the neural features really are that important to overall cognition, so you wouldn’t be able to do it that much more “efficiently” in the first place…

    Lesswrong actually had someone argue that the brain is within an order of magnitude or two of the thermodynamic limit on computational efficiency: https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know



  • So one point I have to disagree with.

    More to the point, we know that thought is possible with far less processing power than a Microsoft Azure datacenter by dint of the fact that people can do it. Exact estimates on the storage capacity of a human brain vary, and aren’t the most useful measurement anyway, but they’re certainly not on the level of sheer computational firepower that venture capitalist money can throw at trying to nuke a problem from space. The problem simply doesn’t appear to be one of raw power, but rather one of basic capability.

    There are a lot of ways to try to quantify the human brain’s computational power: storage (which this article focuses on, though I think it’s the wrong measure), operations per second, number of neural weights, etc. Obviously it isn’t literally a computer, and neuroscience still has a long way to go, so the estimates you can get are spread over something like 5 orders of magnitude (I’ve seen arguments from 10^13 flops to 10^18 or even higher, and flops is of course the wrong way to look at the brain anyway). Datacenter computational power has caught up to the lower estimates, yes, but not the higher ones. The bigger supercomputing clusters, like El Capitan for example, are in the 10^18 range.

    My own guess would be at the higher end, like 10^18, with the caveat/clarification that evolution has optimized the brain for what it does really, really well, so the compute is being used really, really efficiently. Like one talk I went to in grad school that stuck with me: the eyeball’s microsaccades are basically acting as a frequency filter on visual input. So before the visual signal has even reached the brain, the information has already been processed in a clever and efficient way that isn’t captured in any naive flop estimate!

    AI boosters picked estimates of human brain power that would put it in range of just one more scaling, as part of their marketing. Likewise for the number of neurons/synapses. The human brain has 80 billion neurons with an estimated 100 trillion synapses. GPT 4.5, which is believed to have peaked on number of weights (i.e. they gave up on straight scaling up because it is too pricey), is estimated (because of course they keep it secret) at around 10 trillion parameters. Parameters are vaguely analogous to synapses, but synapses are so much more complicated and nuanced. And even accepting that premise, the biggest model was still only about 1/10th the size needed to match a human brain (and they may have lacked the data to even train it right).
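    The arithmetic above can be sketched in a few lines. All the figures are the rough public estimates quoted in the comment (the GPT 4.5 parameter count is a rumor, not a confirmed number), so treat this as back-of-the-envelope only:

```python
import math

# Rough public estimates, not measurements.
human_neurons = 80e9      # ~80 billion neurons
human_synapses = 100e12   # ~100 trillion synapses
gpt45_params = 10e12      # rumored ~10 trillion parameters (unconfirmed)

# Treating one parameter as (very) loosely analogous to one synapse:
ratio = gpt45_params / human_synapses
print(f"GPT-4.5 params / human synapses ~ {ratio:.2f}")  # ~0.10, i.e. ~1/10th

# Spread of brain-compute estimates (10^13 to 10^18 flop-equivalents):
low, high = 1e13, 1e18
spread = math.log10(high / low)
print(f"estimate spread: {spread:.0f} orders of magnitude")  # 5
```

The point of the sketch is just that the parameter-to-synapse comparison, even granted on its own dubious terms, still leaves the biggest model an order of magnitude short.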

    So yeah, minor factual issue, overall points are good; I just thought I would point it out, because this factual issue is one distorted by the AI boosters to make it look like they are getting close to human level.





  • Very ‘ideological turing test’ failure levels.

    Yeah, his rationale is something something “threats” something something “decision theory”, which has the obvious but insane implication that you should actually ignore all protests (even peaceful protests that meet his lib centrist ideals of what protests ought to be), because responding would be giving in to the protesters’ “threats” (i.e. minor inconveniences, at least in the case of lib-brained protests) and thus incentivizing them to threaten you in the first place.

    He tosses the animal rights people (partially) under the bus for no reason. The EA animal rights crowd will love that.

    He’s been like this a while, basically assuming that obviously animals don’t have qualia and obviously you are stupid and don’t understand neurology/philosophy if you think otherwise. No, he did not even explain any details of his certainty about this.


  • I haven’t looked into the Zizians in a ton of detail even now, among other reasons because I do not think attention should be a reward for crime.

    And it doesn’t occur to him to look into the Zizians in order to understand how cults keep springing up from the group he is a major thought leader in? If it were just one cult, I would sort of understand the desire to just shut one’s eyes (though it certainly wouldn’t be a truth-seeking desire), but the Zizians are something like the third cult, or the 5th or 6th if we count broadly cult-adjacent groups (and this is without counting the entire rationalist project as a cult). For full-on religious cults we have Leverage Research and the rationalist-Buddhist cult; for high-demand groups we have the Vassarites, Dragon Army’s group home, and a few other sketchy group living situations (Nonlinear comes to mind).

    Also, have an xcancel link, because screw Elon and some of the comments are calling Eliezer out on stuff: https://xcancel.com/allTheYud/status/1989825897483194583#m

    Funny sneer in the replies:

    I read the Sequences and all I got was this lousy thread about the glomarization of Eliezer Yudkowsky’s BDSM practices

    Serious sneer in the replies:

    this seems like a good time to point folks towards my articles titled “That Time Eliezer Yudkowsky recommended a really creepy sci-fi book to his audience and called it SFW” and “That Time Eliezer Yudkowsky Wrote A Really Creepy Rationalist Sci-fi Story and called it PG-13”


  • Elon is widely known to be a strong engineer, as well as a strong designer

    This is just so idiotic I don’t know what made-up world Habryka lives in. Between blowing up a launch pad, the numerous insane design and engineering choices of the Cybertruck, all the animals slaughtered by Neuralink, and the outages and technical problems of Twitter, you might have hoped the idea of Elon Musk as a strong engineer or designer would be firmly relegated to the dustbin of the early 2010s, when out-of-the-loop people could still buy the image his PR firms were selling. I guess Musk cultists and lesswrong have more overlap than I realized (I knew there was some, but I didn’t realize it was that common).


  • This is somewhat reassuring, as it suggests that he doesn’t fully understand how cultural critiques of LW affect the perception of LW more broadly;

    This. Reddit isn’t actually mainstream common knowledge per se, but I still find it encouraging and indicative that the common sense perspective is winning out: whenever the topic of lesswrong or AI Doom comes up on unrelated subreddits, I’ll see a bunch of top upvoted comments mentioning the cult spin-offs, or that the main thinker’s biggest achievement is Harry Potter fanfic, or Roko’s Basilisk, or any of the other easily comprehensible indicators that these are not serious thinkers with legitimate thoughts.


  • “You don’t understand how Eliezer has programmed half the people in your company to believe in that stuff,” he is reported to have told Altman at a dinner party in late 2023. “You need to take this more seriously.” Altman “tried not to roll his eyes,” according to Wall Street Journal reporter Keach Hagey.

    I wonder exactly when this was. The attempted ouster of Sam Altman was November 17, 2023. So either this warning was timely (but something Sam already had the pieces in place to make a counterplay against), or a bit too late (as Sam had just beaten an attempt by the true believers to oust him).

    Sam Altman has proved adept at keeping the plates spinning and wheedling his way through various deals, but I agree with the common sentiment here that his underlying product just doesn’t work well enough, in a unique/proprietary enough way, for him to actually turn it into a profitable company. Pivot-to-AI and Ed Zitron guess 2027 for the plates to come crashing down, but with an IPO on the way to infuse more cash into OpenAI, I wouldn’t be that surprised if he delays the bubble pop all the way to 2030, and personally gets away cleanly, with no legal liability and some stock sales lining his pockets.







    Gary Marcus has been a solid source of sneer material and debunking of LLM hype, but yeah, you’re right: Gary Marcus has been taking victory laps over a bar set ever so low by promptfarmers and promptfondlers. Also, side note, his negativity towards LLM hype shouldn’t be misinterpreted as general skepticism towards all AI; in particular, Gary Marcus is pretty optimistic about neurosymbolic hybrid approaches. It’s just that his predictions and hypothesizing are pretty reasonable and grounded relative to the sheer insanity of the LLM hypesters.

    Also, new possible source of sneers in the near future: Gary Marcus has made a lesswrong account and started directly engaging with them: https://www.lesswrong.com/posts/Q2PdrjowtXkYQ5whW/the-best-simple-argument-for-pausing-ai

    Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He’ll start to use lesswrong lingo and terminology, and start using P(some event) based on numbers pulled out of his ass. Maybe he’ll even start to be “charitable” to meet their norms and avoid downvotes (I hope not, his snark and contempt are both enjoyable and deserved, but I’m not optimistic, based on how the skeptics and critics within lesswrong itself have learned to temper and moderate their criticism on the site). Lesswrong will moderately upvote his posts when he is sufficiently deferential to their norms and window of acceptable ideas, but won’t actually learn much from him.


  • Unlike with coding, there are no simple “tests” to try out whether an AI’s answer is correct or not.

    So for most actual practical software development, writing tests is in fact an entire job in and of itself, and it’s a tricky one, because covering even a fraction of the use cases and complexity the software will actually face when deployed is really hard. So simply letting LLMs brute-force their code through a bunch of tests by trial and error won’t actually get you good working code.

    AlphaEvolve kind of did this, but it was testing very specific, well-defined, well-constrained algorithms that could have very specific evaluation functions written for them, and it was using an evolutionary algorithm to guide the trial-and-error process. They don’t say exactly in the paper, but that probably meant generating code hundreds, thousands, or even tens of thousands of times to produce relatively short sections of code.
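    To make the distinction concrete, here is a toy sketch of evaluation-guided trial and error in that same spirit. This is not AlphaEvolve’s actual algorithm; the “candidate” is just a vector of numbers standing in for a generated program, and `evaluate` stands in for the kind of fully automatic, well-defined scoring function that narrow algorithmic tasks allow and real-world software mostly doesn’t:

```python
import random

def evaluate(candidate):
    # A fully automatic, well-defined fitness function: negative squared
    # error against a known target. This is the part that is easy for
    # constrained algorithmic problems and hard for deployed software.
    target = [1.0, -2.0, 3.0]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate):
    # A small random perturbation stands in for "ask the model for a variant".
    new = list(candidate)
    i = random.randrange(len(new))
    new[i] += random.gauss(0, 0.1)
    return new

random.seed(0)
best = [0.0, 0.0, 0.0]
for _ in range(10_000):  # thousands of generate-and-test rounds
    child = mutate(best)
    if evaluate(child) > evaluate(best):
        best = child

print(best)  # should drift toward [1.0, -2.0, 3.0]
```

    The loop only works because `evaluate` is cheap, automatic, and actually measures what we care about; for most real software, writing such an evaluator is the hard part, which is the point of the comment above.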

    I’ve noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.




  • Following up because the talk page keeps providing good material…

    Hand of Lixue keeps trying to throw around the Wikipedia rules like the other editors haven’t seen people try to weaponize the rules to push their views many times before.

    Particularly for the unflattering descriptions I included, I made sure they reflect the general view in multiple sources, which is why they might have multiple citations attached. Unfortunately, that has now led to complaints about overcitation from @Hand of Lixue. You can’t win with some people…

    Looking back on the original lesswrong brigade’s discussion of how to improve the Wikipedia article, someone tried explaining the rules to Habryka back then, too, and they were dismissive.

    I don’t think it counts as canvassing in the relevant sense, as I didn’t express any specific opinion on how the article should be edited.

    Yes, Habryka, because you clearly have such a good understanding of Wikipedia’s rules and norms…

    Also, heavily downvoted on the lesswrong discussion is someone suggesting Wikipedia is irrelevant because LLMs will soon be the standard for “access to ground truth”. I guess even lesswrong knows that is bullshit.