Are we? Seems like GPUs are not available at launch, or even long after. Scalpers are at it again. Should I stop waiting and look into the used market instead?

  • infinitevalence · 7 days ago

    I don't think this is misinformation, just a difference of opinion and interpretation of what is known. I'll also openly admit I have a bias on this release, because I'm really hoping AMD gets it right with reasonable pricing and performance.

    As for DLSS/FSR, I prefer not to use either: where I actually want faster frames, the added latency isn't worth the benefit, and where I don't care about extra frames, I prefer higher-quality details. I find both frame-gen technologies a poor service to the end user, which is why I dislike Nvidia marketing the 5070 as equivalent to a 4090 with DLSS turned on. I also dislike their texture compression being used as an excuse to keep VRAM artificially low, to prevent people from using consumer GPUs to run LLMs.

    • MudMan@fedia.io · 7 days ago

      Ah, so by DLSS you meant specifically “DLSS Frame Generation”. I agree that it's confusing that upscaling and frame gen share the same brand name, but when I hear DLSS I typically think of upscaling (which would actually improve your latency, all else being equal).

      Frame gen is only useful in specific use cases, and I agree that when measuring performance you shouldn’t leave it on by default, particularly for anything below 100-ish fps. It certainly doesn’t make a 5070 run like a 4090, no matter how many intermediate frames you generate.

      But again, you keep going off on these conspiracy tangents about things that don’t need a conspiracy to suck. Nvidia isn’t keeping VRAM artificially low as a ploy to stop people from running LLMs; they’re keeping VRAM low to cut costs. You can run chatbots just fine on 16 GB, let alone on the 24 or 32 GB of the halo-tier cards, and there are (rather slow) ways around hard VRAM limits for larger models these days.
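
      To make the “16 GB is plenty for chatbots” point concrete, here’s a rough back-of-the-envelope sketch (a hypothetical helper, not anything from this thread): model weights dominate VRAM use, with maybe ~20% extra for KV cache and activations. All of these numbers are ballpark assumptions, not measurements.

      ```python
      # Rough VRAM estimate for running an LLM locally.
      # Assumption: total VRAM ~= weights * (1 + overhead), where overhead
      # (~20%) covers KV cache and activations. Numbers are approximate.
      def estimate_vram_gb(params_billion: float, bytes_per_weight: float,
                           overhead: float = 0.2) -> float:
          weights_gb = params_billion * bytes_per_weight  # 1B params at 1 B/weight ~ 1 GB
          return round(weights_gb * (1 + overhead), 1)

      # A 13B model at 4-bit quantization (~0.5 bytes/weight) fits easily in 16 GB:
      print(estimate_vram_gb(13, 0.5))   # ~7.8 GB
      # The same model at fp16 (2 bytes/weight) would not:
      print(estimate_vram_gb(13, 2.0))   # ~31.2 GB
      ```

      By this rough math, quantized mid-size models sit comfortably inside a 16 GB card, which is why the “artificially low VRAM” framing doesn’t hold up for chatbot use.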

      You don’t need some weird conspiracy to keep local AI away from the masses. They just… want money, and there are people elsewhere who will pay them more for all that fast RAM, while the gaming bros will still shell out cash for gaming GPUs with less of it. Reality isn’t any better than your take on it, it’s just… more straightforward and boring.