• 2 Posts
  • 1.74K Comments
Joined 1 year ago
Cake day: August 21st, 2023

  • I have an idea for a project that requires a supplement to my utterly inadequate creative writing skills, and I have had abysmal luck finding a co-author. I don’t want to use the LLMs available online because I have learned not to rely on a tool that could disappear without notice. The part about it being potentially illegal was a joke and nothing more.

    Have you considered that you can’t tell what someone does or doesn’t understand by a comment?

    That’s entirely fair. I’m annoyed today, and the reply about the wrench just made it worse. My apologies.


  • To imagine the threat posed by AI, consider a picture of the Milky Way, and a second picture of the Milky Way labeled as 10 years later than the first. The second picture has a hole in it 10 light years in radius, centered on the Earth.

    We need to know how to deal with a potentially rogue AI before it exists, because a rogue AI can win on the time scale of seconds, before anyone knows it’s a threat.

    The inefficiency of the system isn’t relevant to the discussion.

    How far away the threat is is irrelevant to the discussion.

    The limits of contemporary generative neural networks are irrelevant to the discussion.

    The problems of copyright and job displacement are irrelevant to the discussion.

    The abuses of capitalism, while important, are not relevant to the discussion. If your response to this news is “We just need to remove capitalism,” dunk your head in a bucket of ice water and keep it there until you either realize you’re wrong or can explain how capitalism is relevant to a grey goo scenario.

    I was worried about the current problems with AI (everyone losing their jobs) a decade ago, and everyone thought I was stupid for worrying about it. Now we’re here, and it’s possibly too late to stop it. Today, I am worried about AI destroying the entire universe. Hint: forbidding its development, on any level, isn’t going to work.

    Things to look up: paperclip maximizer, AI safety, Eliezer Yudkowsky, Robert Miles, transhumanism, outcome pump, and several other things that I can’t remember and don’t have the time to look up.

    I’m sure this will get downvoted, oh well. Guess I’ll die.