#927 #928 The thing is that DLSS is actually an AA technique, not upsampling, or at least that's what I take away from Nvidia's own material. I don't know whether the existing benchmarks mislabeled it as 4K+DLSS, or whether after running DLSS on 1440p they upscale the result to 4K. But this is how they train the network:
The key to this result is the training process for DLSS, where it gets the opportunity to learn how to produce the desired output based on large numbers of super-high-quality examples. To train the network, we collect thousands of “ground truth” reference images rendered with the gold standard method for perfect image quality, 64x supersampling (64xSS). 64x supersampling means that instead of shading each pixel once, we shade at 64 different offsets within the pixel, and then combine the outputs, producing a resulting image with ideal detail and anti-aliasing quality. We also capture matching raw input images rendered normally. Next, we start training the DLSS network to match the 64xSS output frames, by going through each input, asking DLSS to produce an output, measuring the difference between its output and the 64xSS target, and adjusting the weights in the network based on the differences, through a process called back propagation.
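The training process the quote describes is just supervised learning: feed in the raw frame, compare the network's output against the 64xSS reference, and nudge the weights via backpropagation. Here's a minimal toy sketch of that loop in NumPy, with a one-layer linear model standing in for the real DLSS convolutional network and random data standing in for the frame pairs (all names and sizes here are illustrative, nothing from Nvidia):

```python
import numpy as np

# Toy stand-in for the DLSS training loop described in the quote:
# a linear model learns to map "raw input" pixels to "64xSS ground
# truth" pixels by measuring the difference and adjusting weights
# with gradient descent (the backpropagation step for one layer).

rng = np.random.default_rng(0)

# Fake data: 256 patches of 16 pixel values each. The "ground truth"
# is a fixed linear transform of the input plus a little noise,
# standing in for the 64x supersampled reference frames.
X = rng.normal(size=(256, 16))                       # raw rendered input
W_true = rng.normal(size=(16, 16))
Y = X @ W_true + 0.01 * rng.normal(size=(256, 16))   # 64xSS targets

W = np.zeros((16, 16))   # network weights, initialised to zero
lr = 0.1                 # learning rate

for step in range(500):
    out = X @ W                  # ask the network for an output
    err = out - Y                # measure difference vs the 64xSS target
    grad = X.T @ err / len(X)    # gradient of the mean squared error
    W -= lr * grad               # adjust the weights accordingly

mse = float(np.mean((X @ W - Y) ** 2))
print(f"final MSE vs ground truth: {mse:.4f}")
```

After a few hundred updates the model's output sits close to the "ground truth" targets; the real thing does the same with a deep network and thousands of 64xSS frames instead of a linear map and random noise.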
And they're going to release a DLSS 2X mode, which is the same thing but using 4K as the input instead of 1440p:
In addition to the DLSS capability described above, which is the standard DLSS mode, we provide a second mode, called DLSS 2X. In this case, DLSS input is rendered at the final target resolution and then combined by a larger DLSS network to produce an output image that approaches the level of the 64x super sample rendering – a result that would be impossible to achieve in real time by any traditional means.
https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/