It’s not always easy to distinguish between existentialism and a bad mood.

  • 14 Posts
  • 334 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • Here’s a screenshot of a skeet of a screenshot of a tweet featuring an unusually shit take on WW2 by Moldbug:

    link

    transcript

    skeet by Joe Stieb: Another tweet that should have ended with the first sentence.

    Also, I guess I’m a “World War Two enjoyer”

    tweet by Curtis Yarvin: There is very very extensive evidence of the Holocaust.

    Unfortunately for WW2 enjoyers, the US and England did not go to war to stop the Holocaust. They went to war to stop the Axis plan for world conquest.

    There is no evidence of the Axis plan for world conquest.

    edit: hadn’t seen yarvin’s twitter feed before, that’s one high octane shit show.

  • The first prompt programming libraries start to develop, along with the first bureaucracies.

    I went three layers deep in his references and his references’ references to find out what the hell prompt programming is supposed to be, ended up in a gwern footnote:

    It’s the ideologized version of You’re Prompting It Wrong. Which I suspected but doubted, because why would they pretend that LLMs being finicky and undependable, unless you luck into very particular ways of asking for very specific things, is a sign that they’re doing well?

    gwern wrote:

    I like “prompt programming” as a description of writing GPT-3 prompts because ‘prompt’ (like ‘dynamic programming’) has almost purely positive connotations; it indicates that iteration is fast as the meta-learning avoids the need for training so you get feedback in seconds; it reminds us that GPT-3 is a “weird machine” which we have to have “mechanical sympathy” to understand effective use of (eg. how BPEs distort its understanding of text and how it is always trying to roleplay as random Internet people); implies that prompts are programs which need to be developed, tested, version-controlled, and which can be buggy & slow like any other programs, capable of great improvement (and of being hacked); that it’s an art you have to learn how to do and can do well or poorly; and cautions us against thoughtless essentializing of GPT-3 (any output is the joint outcome of the prompt, sampling processes, models, and human interpretation of said outputs).
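    To make the “prompts are programs” framing concrete, here’s a toy sketch of what it implies in practice: the prompt is a plain function with a version string and a regression test, not ad-hoc text pasted into a chat box. Everything here (the template shape, the lint checks, the names) is illustrative assumption, not any real library’s API, and no actual model is called.

    ```python
    # Toy "prompt programming" sketch: the prompt is a versioned,
    # testable artifact. Template and checks are hypothetical.

    PROMPT_VERSION = "0.2.0"  # bumped like any other program

    def build_prompt(task: str, examples: list[tuple[str, str]]) -> str:
        """Render a few-shot prompt; the few-shot pairs are the 'program'."""
        shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
        return f"{shots}\nQ: {task}\nA:"

    def lint_prompt(prompt: str) -> list[str]:
        """Cheap static checks -- the analogue of a test suite."""
        problems = []
        if not prompt.rstrip().endswith("A:"):
            problems.append("prompt must end at the answer slot")
        if len(prompt) > 4000:
            problems.append("prompt too long; likely truncated by the model")
        return problems

    p = build_prompt("capital of France?", [("capital of Spain?", "Madrid")])
    assert lint_prompt(p) == []  # run in CI like any other regression test
    ```

    Whether this counts as engineering or as ritualized coping is, of course, exactly the argument above.
    
    
    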

  • It’s pick-me objectivism, only more overtly culty the closer you are to it irl. Imagine Scientology if it were organized around AI doomerism and naive utilitarianism while posing as a get-smart-quick scheme.

    Its main function (besides getting the early adopters laid) is to provide court philosophers for the technofeudalist billionaire class, while grooming talented young techies into a wide variety of extremist thought, both old and new. It does this mostly by fostering contempt for established epistemological authority, in the same way QAnon types insist people “do their own research”, i.e. as a euphemism for only paying attention to ingroup-approved influencers.

    It seems to have both a sexual harassment and a suicide problem, with a lot of irresponsible scientific racism and drug abuse in the mix.