• 6 Posts
  • 199 Comments
Joined 2 years ago
Cake day: July 4th, 2023

  • I’ll gladly endorse most of what the author is saying.

    This isn’t really a debate club, and I’m not trying to change your mind. I’ll just end on a note that:

    I’ll start with the topline findings, as it were: I think the idea of a so-called “Artificial General Intelligence” is a pipe dream that does not realistically or plausibly extend from any currently existent computer technology. Indeed, my strong suspicion is that AGI is wholly impossible for computers as we presently understand them.

    Neither the author nor I really suggest that it is impossible for machines to think (indeed, humans are biological machines), only that it is likely (nothing so stark as inherent) that Turing machines cannot. “Computable” in the essay means something specific.

    Simulation != Simulacrum.

    And because I can’t resist, I’ll just clarify that when I said:

    Even if you (or anyone) can’t design a statistical test that detects the difference in a sequence of heads or tails, that doesn’t mean one doesn’t exist.

    It means that such a test does (or could possibly) exist; it’s just not achievable by humans. [Although I will also note that for methods that don’t rely on measuring the physical world (pseudo-random number generators), the tests designed by humans are more than adequate to discriminate the generated list from the real thing.]


  • Even if true, why couldn’t the electrochemical processes be simulated too?

    • You’re missing the argument: even if you simulate the process of digestion perfectly, no actual digestion takes place in the real world.
    • Even if you simulate biological processes perfectly, no actual biology occurs.
    • The main argument from the author is that trying to divorce intelligence from biological imperatives can be very foolish, which is why they highlight that even a cat is smarter than an LLM.

    But even if it is, it’s “just” a matter of scale.

    • Fundamentally, what the author is saying is that it’s a difference in kind, not a difference in quantity.
    • Nothing actually guarantees that the laws of physics are computable, and nothing guarantees that our best model actually fits reality (aside from being a very good approximation).
    • Even numerically solving the Hamiltonians from quantum mechanics is extremely difficult in practice.
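    To put a rough number on that last point (a back-of-the-envelope sketch, with the spin counts chosen only for illustration): merely storing the state vector of n interacting spin-1/2 particles takes 2^n complex amplitudes, which outruns any physical memory almost immediately.

```python
# Back-of-the-envelope: memory needed to store one quantum state vector
# for n spin-1/2 particles. The Hilbert space has dimension 2**n, and a
# complex128 amplitude takes 16 bytes.
def state_vector_bytes(n_spins: int) -> int:
    return (2 ** n_spins) * 16

for n in (10, 30, 50):
    print(f"{n} spins: {state_vector_bytes(n) / 1e9:.3g} GB")
# 30 spins already needs ~17 GB; 50 spins needs ~18 million GB.
```

    And that is just storage; time-evolving the state under a Hamiltonian is harder still, which is exactly the intractability being pointed at.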

    I do know how to write a program that produces indistinguishable results from a real coin for a simulation.

    • Even if you (or anyone) can’t design a statistical test that detects the difference in a sequence of heads or tails, that doesn’t mean one doesn’t exist.
    • Importantly, you are also restricting yourself to the heads-or-tails sequence, ignoring the coin moving the air, pulling on the planet, and plopping back down in a hand. I challenge you to actually write a program that achieves these things.
    • Also, decent random-number generation is not, properly speaking, something a Turing machine can do by direct computation [unless, again, you simulate physics; but even assuming you have a capable simulator, you still have to choose properly random starting conditions]. Modern computers use things like component temperature, execution timing, and user interaction to add “entropy” to random-number generation, not computation alone.
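    To make the pseudo-random point concrete (a sketch using a deliberately weak generator, not one anybody would ship): the low bit of a power-of-two-modulus LCG is exactly periodic, so even a crude cycle-finding “statistical test” separates it from OS entropy.

```python
import os

def lcg_bits(seed: int, n: int) -> list:
    """Deliberately weak 8-bit linear congruential generator, one bit per step."""
    x, out = seed, []
    for _ in range(n):
        x = (5 * x + 3) % 256  # tiny modulus; the low bit simply alternates
        out.append(x & 1)
    return out

def has_short_period(bits, max_period=256):
    """Crude test: does the whole sequence repeat with some small period?"""
    n = len(bits)
    return any(
        n >= 2 * p and all(bits[i] == bits[i + p] for i in range(n - p))
        for p in range(1, max_period + 1)
    )

print(has_short_period(lcg_bits(1, 2048)))                  # True
print(has_short_period([b & 1 for b in os.urandom(2048)]))  # False (overwhelmingly likely)
```

    Real generators fail subtler tests (batteries like NIST SP 800-22 and TestU01 exist for exactly this), but the asymmetry is the same: the discriminating test is something humans can design.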

    As a summary,

    • When reducing any problem to a “simpler” one, you have to be careful about what you ignore.
    • The simulation argument is a bit irrelevant, but as a small aside, it is not guaranteed to be possible even in principle, and it is certainly intractable with our current physical models and technology.
    • Human intelligence has a lot of externalities and cannot be reduced to pure “functional objects”.
      • If it’s just about input/output, you could be fooled by a tape recorder and a simple filing system, but I think you’ll agree those aren’t intelligent. The output has meaning to you, but it doesn’t have meaning for the tape recorder.
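    The tape-recorder point can be made painfully literal (a toy sketch; the prompts and canned replies are invented purely for illustration): a lookup table produces perfectly sensible input/output for the inputs it was stocked with, while understanding nothing at all.

```python
# A "filing system": canned input -> output pairs, with nothing underneath.
canned = {
    "hello": "Hi there! How can I help?",
    "what is the capital of france?": "The capital of France is Paris.",
}

def tape_recorder(prompt: str) -> str:
    # Pure retrieval: the reply has meaning to the reader, not to the system.
    return canned.get(prompt.strip().lower(), "Interesting, tell me more.")

print(tape_recorder("Hello"))      # Hi there! How can I help?
print(tape_recorder("Prove it."))  # Interesting, tell me more.
```

    Judged purely on this input/output behaviour, the dict “knows” the capital of France; the meaning lives entirely on the reader’s side.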

  • That’s because there are absolutely reams of writing out there about Sonnet 18: it could draw from thousands of student essays and cheap study guides, which allowed it to remain at least vaguely coherent. But when forced away from a topic for which it has ample data to plagiarize, the illusion disintegrates.

    Indeed, any intelligence present is that of the pilfered commons, and that of the reader.

    I had the same thought about the few times LLMs appear to succeed at translation (where proper translation requires understanding). It’s not exactly doing nothing, but a lot of the work is done by the reader striving to make sense of what they read; because humans are clever, they can sometimes glimpse the meaning through the filter of the AI mapping one set of words onto another, given enough context. (Until they really can’t, or the subtleties of language completely reverse the meaning when not handled with the proper care.)




  • Did you read any of what I wrote? I didn’t say that human interactions can’t be transactional, I quite clearly—at least I think—said that LLMs are not even transactional.


    EDIT:

    To clarify, and maybe to put it in terms closer to your interpretation:

    With humans: Indeed you should not have unrealistic expectations of workers in the service industry, but you should still treat them with human decency and respect. They are not there to fit your needs; they have their own selves, which matter. They are more than meets the eye.

    With AI: While you should also not have unrealistic expectations of chatbots (which I would recommend avoiding altogether, really), the difference is that where humans are more than meets the eye, chatbots are less. Inasmuch as you still choose to use them, by all means remain polite (for your own sake rather than for the bot’s); there’s nothing below the surface.

    I don’t personally believe that taking an overly transactional view of human interactions is desirable or healthy; I think it’s more useful to frame it as respecting other people’s boundaries and recognizing when you might be a nuisance (or when to be a nuisance, when there is enough at stake). Indeed, I think (not that this appears to be the case for you) that being overly transactional could lead you to believe that affection can be bought, or that you are owed affection.

    And I especially don’t think it healthy to essentially say: “have the same expectations of chatbots and of service workers”.


    TLDR:

    You should avoid catching feelings for service workers because they have their own worlds and wants, and bringing unsolicited advances makes you a nuisance. It’s not just about protecting yourself; it’s also about protecting them.

    You should never catch feelings for a chatbot, because it doesn’t have its own world or wants, and projecting feelings onto it cuts you off from humanity. It is mostly about protecting yourself, though I would also argue about protecting society (by keeping it healthy).





  • A glorious snippet:

    The movement connected to attracted the attention of the founder culture of Silicon Valley and leading to many shared cultural shibboleths and obsessions, especially optimism about the ability of intelligent capitalists and technocrats to create widespread prosperity.

    At first I was confused about what kind of moron would try using “shibboleth” positively, but it turns out it’s just a terrible misquote of a citation:

    Rationalist culture — and its cultural shibboleths and obsessions — became inextricably intertwined with the founder culture of Silicon Valley as a whole, with its faith in intelligent creators who could figure out the tech, mental and physical alike, that could get us out of the mess of being human.

    Also, lol at insisting on “exonym” as the descriptor for TESCREAL, removing Timnit Gebru and Émile P. Torres and the clear critical intention behind the term; it doesn’t really even make sense to use the acronym unless you’re doing a critical analysis of the movement(s). (Also removed: mentions of the especially strong overlap between EA and rationalists.)

    It’s a bit of a hack job at making the page more biased, with a very thin veneer of still using the sources.






  • Hard disagree. As much as I loathe JK Rowling’s political ideas, and the at-times unnecessary cruelty found in the HP novels, they still shaped a large part of the imaginative world of a generation. As beautiful as birdsong is (who the hell refers to birdsong as “output”?), the two simply cannot be compared.

    Yes commercial for-profit shareholder-driven lackadaisical “art” is already an insult to life and creativity, but a fully-or-mostly automated slop machine is an infinitely worse one.

    Even in the sloppiest of arts I have watched, the humanity still shines through; people still made choices, even subjected to crazy, uninspired diktats from above. The hands that fashion books, movies, music, video games, and TV shows still have (must have) room to bring a given vision together.

    I think people DO care.

    I don’t know exactly what you wanted to say, whether you wanted to express despair, cynicism, nihilism, or something else, but I would encourage you not to give up hope in humanity. People aren’t that stupid; people aren’t that devoid of meaning.