You know what, good. If the past 5 years have been any indication, we need to stop pushing the bleeding edge as much and focus on stability.
I want games that run well just as much as I want them to look good.
Should also help with the pricing.
So, the $1500-2000+ GPU tier will get even more expensive, with no competition for Nvidia. But then, people buying those have never cared about price. They’ll pay any price to say they have the biggest & fastest.
I’m more concerned about $500 & under GPUs that I’d actually consider.
I don’t buy them for bragging rights, I buy them because I want to future proof my builds. Idgaf about anyone knowing anything about my builds or costs. I simply want performance and quality. Don’t generalize.
Also, don’t forget people who actually need it for their profession.
Who are completely fucking irrelevant to this discussion, but go on and list out a few more sub-niche instances that are exceptions to an obvious reference to whales buying whatever the most expensive thing is. Totally worth your time.
Hello, I am a gamer first, Blender modeller and animator, like 4th. I needed to upgrade my GPU to actually work in Cycles. So, as it was around my birthday, I bought a top-end Nvidia card.
I can buy a GPU for more than one reason.
Good for you. Don’t care. Learn to fucking read like the part where the OP was clearly talking about goddamn whales.
If you don’t know the difference and you get butthurt about this, you’re probably a fucking whale.
Have fun posting your 4 second faster render times to CGTalk. I’m sure they’re all going to suck your dick for it.
I know you're just trying to troll and be obnoxious, but legitimately my render times dropped by an obscene amount.
20 mins per frame down to about 30 seconds over a 250-frame animation is very significant for me.
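For context, a quick back-of-the-envelope sketch of what that per-frame drop means over the full animation (numbers taken straight from the post; the exact times are approximate):

```python
# Rough math behind the claimed Cycles render speedup:
# 20 min/frame -> ~30 s/frame over a 250-frame animation.
frames = 250
old_per_frame_s = 20 * 60   # 20 minutes per frame, in seconds
new_per_frame_s = 30        # ~30 seconds per frame

old_total_h = frames * old_per_frame_s / 3600   # total hours before the upgrade
new_total_h = frames * new_per_frame_s / 3600   # total hours after the upgrade
speedup = old_per_frame_s / new_per_frame_s     # per-frame speedup factor

print(f"{old_total_h:.1f} h -> {new_total_h:.1f} h ({speedup:.0f}x faster)")
# -> 83.3 h -> 2.1 h (40x faster)
```

So the full animation goes from roughly three and a half days of rendering to about two hours, which is why a one-time top-end card purchase can make sense for this kind of work.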
That’s nice.
What does it have to do with whales continually buying new shit they don’t need just to impress people? Nothing? Yeah, that’s kind of the point.
And with that kind of performance increase you’re not buying year over year, are you? Maybe every 3-5 years, possibly longer?
There’s literally nothing about any of these posts that pertains to you, and yet here you are.
I mean, it's not like AMD has been able to for several generations. I wouldn't expect them to now.
Though I wish they would, because nVidia is getting away with some bullshit pricing thanks to not having any competition.
deleted by creator
Conceding second place to a card that doesn’t even fucking exist, that will be following up a massive failure which can’t produce numbers anywhere near the competition at the same price point.
Sure, they’re totally doing that by allowing Nvidia to waste time and money developing whaleboards that Gamers Nexus can jack off over.
AMD seems incapable of competing with Nvidia at the high end: they can't make FSR as good as DLSS, and they're still far behind in RT.
I think it’s more that they’re unwilling. AMD goes after low hanging fruit and targets the mass market. In essence, they’re willing to let NVIDIA invest in all of the new tech, and then they implement whatever gets popular.
So unless they decide to truly prioritize their GPU business, they’ll be happy to target the quiet majority who care mostly about price to performance while focusing on innovating on the CPU side of the business where they make their real money.
I’m sure they could compete on the GPU side if they threw money at the problem, but they don’t see a need to when it’s decently popular and they’re seeing a lot more growth and profit on the CPU side.
If you look at the die sizes, it becomes clear AMD are not targeting the super top end.
And that’s how it has been for a long time at AMD.
Look at CPUs, they were in a comfortable second place as the economy option for many years, and when they tried something new, it blew up in their face (Bulldozer).
Ryzen was all about the chiplet design first, and architecture improvements second. They didn’t go for the most innovative core design or smallest process (they didn’t even have a fab), they went for the economical option (chiplets have better yields). They were able to catch up with Intel with IPC gains, but Ryzen was pretty uninteresting aside from that. Even today, Zen 4 is just an iteration on the chiplet design, and they’re beating Intel because Intel struggled with lithography issues, and Intel is also trying novel things that haven’t resulted in a clear win vs AMD. So AMD is happy to attack yields (chiplets) and innovate by extension (add-on cache) instead of trying something radical with core design.
Their GPUs are going the same way. NVIDIA is trying hard with RT cores, whereas AMD mostly reused regular shader cores initially. NVIDIA is building a huge model for DLSS; AMD just applies a simple, one-size-fits-most filter on top. NVIDIA goes for the best experience at the high end, AMD just goes for a pretty good experience for most.
I don’t see that changing, that has been AMD’s main playbook since Intel overtook them after the x64 transition.
If what MLID is saying is true, then yeah, that's possible.
And AI/ML workloads. Nvidia gets lots of shit and is more expensive but you get a better ecosystem with their cards.
A good portion of this though is the CUDA stranglehold nvidia has. Good luck getting a neural net accelerated on OpenCL or Vulkan Compute.
AMD do seem to be taking steps in the right direction here, but we're still a while away from a more balanced landscape.