“So after a million years of putting off adjustment maintenance, the size of the sun has grown to critical levels and it is now too late. We will be evacuating at the next solar system in 120 [MINUS FIVE] years and diverting all available resources to emergency maintenance in the meantime. This announcement is a recording.” Frickin’ humans, eons of progress and can’t even do eons of progress right.
“You must have a large family to feed.” “Ya.” “Here?” “Naw.”
It’s true you create more value for your work/hobby if you specialize; it’s more true that you can just enjoy life if you don’t spend most of it being socially dominated into a job you hate doing in an oppressive environment you hate being in. You get so much more value out of yourself just living, or at least making wise trade-offs.
F*** their data and f*** its relevancy. /s
Some indigenous peoples cooperate with their natural environment. Humans are fundamentally a keystone species that’s collectively gotten really bad at that, in order to get good at other things. We could have human conventions, art, and technology that work entirely with nature and our environment rather than against it. Given these facts, I’m not a fan of that definition.
I don’t hate it. The most common bad-faith rhetoric you hear from the Right is “It’ll kill the birds,” because you know dang well they dgaf, but they know we do. They put up any roadblock they can to sustain dependence on their economic interests in oil. This gets us past a few more of those roadblocks, if it comes anywhere near matching the efficacy of open-blade turbines.
The article seems to think the comparison of human intelligence with artificial intelligence is caused by naming it “intelligence,” which would be a fallacy rooted in the ambiguous semantics of inherently vague language. Saying “the article thinks” shouldn’t lead anyone to assume anyone believes articles have minds; it’s just showing the relationship between the idea and its presentation.
The naming convention doesn’t help, but a more direct cause is that those funding the research are most interested in automation that replaces people, so the idea is sold to them that way and built towards that goal. It’s a commonly accepted inevitability, going back at least to Rosie Jetson. I agree with the article that it doesn’t need to be; it would be better for humanity if we thought of it as enhancing human intelligence rather than replacing it, and built towards those interests.
Unfortunately, the motivation of Capitalism is to pay as few people as possible, as little as possible, while still maximizing profitable quality. Convincing them that improving worker quality beats outright replacing expensive (now mental) labor with high-output automation is a tough sell. Maybe the inability to profit from LLMs will convince them, but I doubt it.
I think you’re projecting consciousness onto those terms more than you need to. An algorithm is a decision-making process devoid of consciousness (as far as we know). AI is capable of self-determination insofar as it’s capable of acting without reacting, or without total dependence on input. We just need our self-determination and decision-making to be special, so we present them as functions of our consciousness.
And a curse on any philosopher who tries to define consciousness as some variation of “that thing that makes humans special”; any work they build on that is doomed.
How we express math is particular to us, though it’d be commonly decipherable. Math is more and more globally standardized as more of it gets globally acknowledged as “the most useful” way to do math, e.g. placeholder zero vs. Roman numerals. Ratios are conceptually universal to any species that bothers measuring. Quantification maybe less so, especially if their comprehension of advanced sciences/engineering is somehow intuitive instead of formally calculated.
If a spacefaring species has a concept of proportions/ratios, but not the individual identity of numbers, presenting the meter as a proportion of the speed of light might be a universal way to decipher the rest of our math. Water as liters might be more accessible, depending on how they think of water.
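For concreteness, here’s a minimal sketch of that handoff using the actual SI definition of the meter (it assumes we’ve also managed to convey one second, e.g. via some shared atomic transition, which is its own problem):

```latex
% The SI meter is defined as a fixed ratio against the speed of light c:
% light in vacuum covers exactly 299,792,458 m in one second, so
\[
  1\,\mathrm{m} \;=\; \frac{c \cdot (1\,\mathrm{s})}{299\,792\,458}
  \qquad\Longleftrightarrow\qquad
  c = 299\,792\,458\ \mathrm{m\,s^{-1}}
\]
% A species that only thinks in ratios could still anchor "1 m" as that
% fixed fraction of however far light travels in their unit of time.
```

The exact integer is a human convention, but the ratio it fixes is the same for anyone able to time light over a distance.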
Sets and Axioms are purely conceptually representative and so viable as long as they’re capable of symbolic abstraction at all.
To make it clear how bizarre this is: the Elitzur–Vaidman bomb test has actually been performed experimentally, and it has been shown that both really do simultaneously exist and interact with each other. To extend the Schrödinger’s cat joke, quantum physics allows that you may find a half-eaten dead cat in the box.
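To put rough numbers on how strange that is, here’s a minimal numeric sketch (my own toy model, not taken from the experiment itself) of the standard Mach–Zehnder interferometer analysis behind the bomb tester:

```python
# Toy Mach–Zehnder analysis of the Elitzur–Vaidman bomb tester.
# Variable names are mine; the math is the textbook two-path treatment.
import numpy as np

# 50/50 beam splitter acting on the two-path amplitude vector (path A, path B).
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

photon_in = np.array([1, 0])           # photon enters via path A's input port

# Case 1: dud bomb (doesn't block path A) -> ordinary interferometer.
amps_dud = BS @ (BS @ photon_in)
p_dud = np.abs(amps_dud) ** 2          # ≈ [0, 1]: the "dark" detector never fires

# Case 2: live bomb sitting in path A, absorbing any photon that takes it.
amps_mid = BS @ photon_in              # amplitudes after the first beam splitter
p_explode = np.abs(amps_mid[0]) ** 2   # photon took path A -> bomb goes off (1/2)
survived = np.array([0, amps_mid[1]])  # otherwise the photon is surely in path B
p_detectors = np.abs(BS @ survived) ** 2
# ≈ [1/4, 1/4]; keeping the unnormalized path-B amplitude means these already
# include the 1/2 chance of surviving, so explosion + clicks sum to 1.

print(p_dud, p_explode, p_detectors)
```

The punchline is that roughly a quarter of the time the normally-dark detector clicks, certifying a live bomb that no photon ever touched.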
A self-fulfilling prophecy, if they will.
“Oh gosh, I hope AI doesn’t become Terminator.” AI: “Oh, that seems cool. Let’s do that. But with a bit of Matrix because I’m connecting those dots.”
OldTimerz! All the powers of an Olympic genius with the respected presence and not-giving-a-damn of an old woman!
Bots online impersonating humans are already causing so many problems at every level.