• 4 Posts
  • 788 Comments
Joined 10 months ago
Cake day: September 7th, 2023



  • there’s this type of reply guy on fedi lately who does the “well actually querying LLMs only happens in bursts and training is much more efficient than you’d think and nvidia says their gpus are energy-efficient” thing whenever the topic comes up

    This kind of person (it also happened a lot with cryptocurrencies) always goes 'that isn't how it works, this isn't a problem', then doesn't explain what mistake you are supposed to have made, and then a few weeks/months/days (or one search) later it turns out that it was how it works and it is a huge problem. It's so annoyingly common that I'm very happy with the moderation here.

  • Amazing.

    I also remember another time people did the 'let two AIs talk to each other' thing (no idea what type of model it was at the time, certainly not an LLM, some other ML technique), but in an actual production setting (E: I was wrong on the setting, see the article for better info ->): the Facebook/Meta one (first link I could find on Google, didn't read it, just a way for people who never heard about it to find out more). But then it started to produce gibberish/'their own language'. Of course this was also taken as a sign of it 'waking up'.

    And I note again that in the LLM experiment, the 'AGIs' are still staying perfectly within the bounds of the experiment, whether or not they directly reference the researcher. They still play into the fiction, as talking to the researcher about the other AI is part of the fiction. It would be more interesting if they did something unexpected rather than regurgitating video game in-game notes.

    static dot dot dot emergency dot dot dot shutdown

    lol

    ‘multiple realities’

    Come on, I have written similar things while roleplaying as an AI. The first is useful when you need a quick break to go to the toilet, and the second is a good excuse for when you have made a mistake a real fictional AI couldn't make.

    E: also funny that they worry about the shoggoth behind the friendly face and then get freaked out when the AIs talk to each other in normal science fiction fluff and it doesn't become incoherently weird (like the example above).