overactors trying to out-overact each other. Love it!
- 0 Posts
- 1.23K Comments
vrighter@discuss.tchncs.de to Games@lemmy.world • What's an absolutely medium quality game? Not great, incredible or terrible or any single ended extreme. Dead medium quality • English · 3 · 8 hours ago
I still enjoyed the crap out of it. Sometimes zoning out and just running around collecting stuff is just what I need.
vrighter@discuss.tchncs.de to Privacy@programming.dev • Meta Found a New Way to Track Android Users Covertly via Facebook & Instagram • 3 · 9 hours ago
localhost is “this device”.
connecting to localhost means connecting to something running on the same machine.
Browsers generally block connections to other domains (e.g. if you’re on google.com, the browser won’t simply let the site contact amazon.com willy-nilly).
But localhost is your own machine, so it is usually “trusted”. Facebook exploited this fact to exfiltrate data from the browser to the other apps running on your own phone, which would, in turn, be free to do with it as they please, because they’re not the browser.
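A minimal sketch of the mechanism in Python (standing in for the actual browser script and native app involved; the payload and the OS-assigned port here are made up for illustration):

```python
import socket
import threading

# The "app" side: a process on the same phone listening on localhost.
received = []

def listener(server):
    conn, _ = server.accept()
    received.append(conn.recv(1024))
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=listener, args=(server,))
t.start()

# The "browser" side: a plain connection to localhost reaches the app
# on the same machine -- no network leaves the device.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"_fbp=cookie-value")  # hypothetical identifier payload
client.close()
t.join()
print(received[0])  # b'_fbp=cookie-value'
```

The point is only that loopback traffic never touches the network, which is exactly why browsers historically treated it as harmless.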
vrighter@discuss.tchncs.de to Games@lemmy.world • A game you "didn't know it was bad 'til people told you so"? • English · 6 · 9 hours ago
He was forced to release it quickly to coincide with the film’s release. For comparison, it used to take a team of devs a couple of months to make a game. He had 6 weeks.
Also, if you read the manual, this essentially never happened to you. It was easy to avoid.
You also needed to read the manual. The game did stuff that other games at the time didn’t, for example, a contextual button. You couldn’t know what would happen unless you read the manual to learn what the icons meant. A lot of people never did and so decided that the game was bad.
I don’t see a ball in any of the nets, so there are zero goals in that image.
vrighter@discuss.tchncs.de to Games@lemmy.world • A game you "didn't know it was bad 'til people told you so"? • English · 16 · 10 hours ago
When climbing out of the pit, it was very easy to immediately fall back down (due to the pixel-perfect collision detection).
And here is an excerpt from the manual: “Even experienced extraterrestrials sometimes have difficulty levitating out of wells. Start to levitate E.T. by first pressing the controller button and then pushing your Joystick forward. E.T.'s neck will stretch as he rises to the top of the well (see E.T. levitating in Figure 1). Just when he reaches the top of the well and the scene changes to the planet surface (see Figure 2), STOP! Do not try to keep moving up. Instead, move your Joystick right, left, or to the bottom. Do not try to move up, or E.T. might fall back into the well.”
vrighter@discuss.tchncs.de to Games@lemmy.world • A game you "didn't know it was bad 'til people told you so"? • English · 12 · 13 hours ago
It was actually way ahead of its time, for a game. One small bug (the workaround for which was in the manual) ruined its reputation. But I genuinely think it was a good game.
Also written in 6 weeks by one guy. Freaking impressive
vrighter@discuss.tchncs.de to Linux@lemmy.ml • Just wanted to show off the lowest end hardware I ever ran Linux on • 2 · 24 hours ago
I was 14 years old, and I got the 128 MB stick for free. Beggars can’t be choosers haha
vrighter@discuss.tchncs.de to Linux@lemmy.ml • Just wanted to show off the lowest end hardware I ever ran Linux on • 2 · 1 day ago
I started using Linux on a single-core Pentium 4 with 384 MB of RAM.
vrighter@discuss.tchncs.de to linuxmemes@lemmy.world • What's your favourite OS that does not use systemd? • 497 · 2 days ago
Because the over 70 different binaries of systemd are “not modular” because they are designed to work together. What makes a monolith is, apparently, the name of the overarching project, not it being a single binary (which, again, it’s not).
you don’t check your brain’s file system regularly?
vrighter@discuss.tchncs.de to Technology@lemmy.world • Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. • English · 1 · 6 days ago
You wouldn’t be “freezing” anything. Each possible combination of input tokens maps to one output probability distribution. Those values are fixed, and they are what they are whether you compute them or not, or when, or how many times.
Now you can either precompute the whole table (theory), or somehow compute each cell value every time you need it (practice). In either case, the resulting function (table lookup vs matrix multiplications) takes in only the context, and produces a probability distribution. And the mapping they generate is the same for all possible inputs. So they are the same function. A function can be implemented in multiple ways, but the implementation is not the function itself. The only difference between the two in this case is the implementation, or more specifically, whether you precompute a table or not. But the function itself is the same.
You are somehow saying that your choice of implementation for that function will somehow change the function. Which means that, according to you, if you do precompute (or cache; full precomputation is just an infinite cache) individual mappings, some magic happens that grants some deep insight. It does not. We have already established that it is the same function.
vrighter@discuss.tchncs.de to Technology@lemmy.world • Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. • English · 1 · 6 days ago
The fact that it is a fixed function, that it depends only on the context, AND that there are a finite number of discrete inputs possible does make it equivalent to a huge, finite table. You really don’t want this to be true. And again, you are describing training. Once training finishes, nothing you said applies anymore and you are left with fixed, unchanging matrices, which in turn means that it is a mathematical function of the context (by the mathematical definition of “function”: stateless and deterministic), which also has the property that the set of all possible inputs is finite. So the set of possible outputs is also finite, and no larger than the set of possible inputs. This means the actual function that the tokens are passed through CAN be precomputed in full (in theory), making it equivalent to a conventional state transition table.
This is true whether you’d like it to be or not. The training process builds a Markov chain.
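The table-vs-computation equivalence above can be sketched with a toy alphabet and context length (the probability formula here is an arbitrary stand-in for frozen model weights, not anything from a real model):

```python
from itertools import product

# A "model" over a 2-token alphabet {0, 1} with context length 3.
ALPHABET = (0, 1)
CONTEXT = 3

def next_dist(ctx):
    # Deterministic and stateless: depends only on the current context.
    p1 = (sum(ctx) + 1) / (len(ctx) + 2)
    return (1 - p1, p1)

# Because the input space is finite (2**3 = 8 contexts), the whole
# function can be precomputed into a table...
table = {ctx: next_dist(ctx) for ctx in product(ALPHABET, repeat=CONTEXT)}

# ...and from the outside, table lookup and on-the-fly computation
# are indistinguishable:
for ctx in product(ALPHABET, repeat=CONTEXT):
    assert table[ctx] == next_dist(ctx)
print(len(table))  # 8 entries: alphabet_size ** context_length
```

For a real LLM the table is astronomically large (vocabulary size raised to the context length), which is why it only exists in theory, but the function it would tabulate is the same one the matrices compute.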
vrighter@discuss.tchncs.de to Technology@lemmy.world • Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. • English · 1 · 7 days ago
No, not every computer program is a Markov chain. Only those that depend only on the current state and ignore prior history. Which fits LLMs perfectly.
Those sophisticated methods you talk about are just a couple of matrix multiplications. Those matrices are what’s learned. Anything sophisticated happens during training. Inference is not sophisticated at all: just multiplying some matrices together and taking the rightmost column of the result. That’s it.
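A toy sketch of that inference step, context in, probability distribution out (the weights here are made up; a real model’s come out of training and are then frozen):

```python
import math

VOCAB = 4
# Made-up frozen "weights": a 4x3 embedding and a 3x4 output matrix.
EMBED = [[0.1 * (i + j) for j in range(3)] for i in range(VOCAB)]
W_OUT = [[0.2 * (i - j) for j in range(VOCAB)] for i in range(3)]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def infer(context):
    # Depends only on `context`; no state survives between calls.
    pooled = [sum(EMBED[t][j] for t in context) for j in range(3)]
    logits = [sum(pooled[k] * W_OUT[k][v] for k in range(3))
              for v in range(VOCAB)]
    return softmax(logits)

# Same context in, same distribution out, every time:
assert infer((0, 2, 3)) == infer((0, 2, 3))
```

The pooling here is a crude stand-in for attention, but the shape of the computation is the point: fixed matrices applied to the current context, nothing else.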
vrighter@discuss.tchncs.de to Technology@lemmy.world • Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. • English · 1 · 7 days ago
Yes, you can enumerate all inputs, because they are not continuous. You just raise the finite number of different tokens to the finite context size, and that’s exactly the size of the table you would need: finite^finite = finite. You are describing training, i.e. how the function is generated. Yes, correlations are found there and encoded in a couple of matrices. Those matrices are what are used in the LLM, and none of what you said applies. Inference is purely a Markov chain by definition.
i let the wife do it. She enjoys it, I don’t
vrighter@discuss.tchncs.de to Ask UK@feddit.uk • What's the worst case of enshitification you've seen lately? • 25 · 7 days ago
gestures broadly at everything
vrighter@discuss.tchncs.de to Technology@lemmy.world • Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. • English · 1 · 7 days ago
“Lacks internal computation” is not part of the definition of Markov chains. Only that the output depends only on the current state (the whole context, not just the last token) and no previous history, just like LLMs do. They do not consider tokens that slid out of the current context, because those are not part of the state anymore.
And it wouldn’t be a cache unless you decide to start invalidating entries, which you could simply not do… it would be a table with token_alphabet_size^context_length entries, each entry being a vector of size token_alphabet_size. Because that would be too big to realistically store, we do not precompute the whole thing, and instead approximate what each table entry should be using a neural network.
The pi example was just to show that how you implement a function (any function) does not matter, as long as the inputs and outputs are the same. Or to put it another way if you give me an index, then you wouldn’t know whether I got the result by doing some computations or using a precomputed table.
Likewise, if you give me a sequence of tokens and I give you a probability distribution, you can’t tell whether I used an NN or just consulted a precomputed table. The point is that, given the same input, the table will always give the same result, and crucially, so will an LLM. A table is just one type of implementation for an arbitrary function.
There is also no requirement for the state transition function (a table is a special type of function) to be understandable by humans. Just because it’s big enough to be beyond human comprehension doesn’t change its nature.
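The “full precomputation is just an infinite cache” point can be illustrated with ordinary memoization; nothing about the function changes when its entries get stored (the probability formula is again a made-up stand-in):

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)  # unbounded: entries are never invalidated
def next_dist(ctx):
    # Counts real computations so we can see when the table is used.
    global calls
    calls += 1
    p1 = (sum(ctx) + 1) / (len(ctx) + 2)
    return (1 - p1, p1)

a = next_dist((0, 1, 1))   # computed, and the table entry is filled
b = next_dist((0, 1, 1))   # served straight from the table entry
assert a == b and calls == 1
```

Filling every entry up front versus filling them lazily on demand is purely an implementation choice; the input-to-output mapping is identical either way.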
I bought an original cartridge and played it on the VCS I inherited from dad.