  • Can we stop with the fake frame nonsense? They aren’t any less real than other frames created by your computer. This is no different from the countless other shortcuts games have been using for decades.

    Also, input latency isn’t “sacrificed” for this. There is about 10 ms of overhead with 4x DLSS 4 frame generation, which is easily compensated for by the increase in frame rate.

    The math is pretty simple: At 60 fps native, a new frame needs to be rendered every 16.67 ms (1000 ms / 60). Leaving out latency from the rest of the hardware and software (it varies a lot between input and output devices and even from game to game - not to mention that many games run graphics and e.g. physics at different rates), three generated frames per “non-fake” frame means a new frame on screen every 4.17 ms (assuming the display can output 240 Hz). Between “fake” frames, the system still accepts input and visibly moves the viewport using reprojection, a technique borrowed from VR (where even older approaches work exceptionally well in my experience, at otherwise unplayably low frame rates - provided the game doesn’t freeze). That puts us at 14.17 ms of latency including the overhead, with four times the visual fluidity.

    It’s even more striking at lower frame rates: Assume a game struggles at the desired settings and just about manages 30 fps (current example: Cyberpunk 2077 at RT Overdrive settings in 4K on a 5080). That’s one native frame every 33.33 ms. With three synthetic frames, we get one frame every 8.33 ms. Add the 10 ms of input lag and we arrive at 18.33 ms total, close to the 16.67 ms frame time of native 60 fps. You cannot tell me this wouldn’t feel significantly more fluid to the player. I’m pretty certain you would actually prefer it over native 60 fps in a blind test, since the screen refreshes 120 times per second.
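    Since the same arithmetic comes up for both scenarios, here it is spelled out as a quick sketch (the ~10 ms frame-generation overhead is the figure quoted above, not a measured value):

```python
# Sketch of the frame-time math from the comment above.
# The ~10 ms overhead is the figure quoted in the comment, not a measurement.

def frame_interval_ms(native_fps: float, generated_per_native: int) -> float:
    """Time between displayed frames once generated frames are interleaved."""
    return 1000.0 / (native_fps * (1 + generated_per_native))

OVERHEAD_MS = 10.0  # assumed frame-generation latency overhead

# 60 fps native, 3 generated frames per rendered frame -> 240 Hz output
print(round(frame_interval_ms(60, 3), 2))                # 4.17 ms per frame
print(round(frame_interval_ms(60, 3) + OVERHEAD_MS, 2))  # 14.17 ms latency

# 30 fps native, 3 generated frames per rendered frame -> 120 Hz output
print(round(frame_interval_ms(30, 3), 2))                # 8.33 ms per frame
print(round(frame_interval_ms(30, 3) + OVERHEAD_MS, 2))  # 18.33 ms latency

# Native 60 fps frame time, for comparison
print(round(1000.0 / 60, 2))                             # 16.67 ms
```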

    Keep in mind that the artifacts from previous generations of frame generation, like smearing and shimmering, are pretty much gone now, at least based on the footage I’ve seen, and frame pacing appears to be improved as well, so there really aren’t any downsides anymore.

    Here’s the thing though: All of this remains optional. If you feel the need to be a purist about “real” and “fake” frames, nobody is stopping you from ignoring this setting in the options menu. Developers will however increasingly be using it, because it enables higher settings that previously couldn’t run on current hardware. No, that’s not laziness - it’s exploiting hardware and software capabilities, just as developers have always done.

    Obligatory disclaimer: My card is several generations behind (an RTX 2080, which means I can’t use Nvidia’s frame generation at all, not even 2x, though I do benefit from the new super resolution transformer and ray reconstruction) and I don’t plan on replacing it any time soon, since it’s more than powerful enough right now. I’ve been using a mix of Intel, AMD and Nvidia hardware for decades, depending on which suited my needs and budget at any given time, and I’ll continue to use this vendor-agnostic approach. My current favorite combination is AMD for the CPU and Nvidia for the GPU, which I think is the best of both worlds right now, but that might change by the time I make my next substantial hardware upgrade.




  • As a more portable and budget-friendly alternative, consider a small emulation console. I’m very happy with my Anbernic RG35XXSP. Since the screen folds like on the original GBA SP, it’s absolutely tiny and fits into any pocket - without having to worry about the screen getting scratched. Configure it correctly and you can close the screen to suspend games.

    This kind of system would also make for a great first gaming device once your kid is around five years old.