There’s someone who uploaded this same meme before the Reddit thing, but instead of 2000 it said 20.
edit: I was linked to the post maybe a few months ago, but I can’t find it now.
OK, I guess it’s just kind of similar to dynamic overclocking/underclocking with a dedicated NPU. I don’t really see why a tiny $2 microcontroller, or just the CPU, couldn’t accomplish the same task, though.
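The control logic itself is tiny, which is sort of my point. A minimal sketch, assuming plain proportional control (the target, gain, and wattage clamp are made-up numbers, and a real version would read FPS and set the power limit through the GPU driver):

```python
# Minimal sketch of the kind of control loop this needs. Everything
# here is hypothetical: the target, gain, and wattage clamp are
# made-up numbers, and a real version would read FPS and set the
# power limit through the GPU driver.
TARGET_FPS = 60.0

def control_step(current_fps: float, power_limit_w: float) -> float:
    """One step of plain proportional control: trim the power limit
    when there's FPS headroom, raise it when FPS dips below target."""
    error = current_fps - TARGET_FPS
    new_limit = power_limit_w - 0.5 * error  # 0.5 W per fps of error
    return max(50.0, min(350.0, new_limit))  # clamp to a sane range

print(control_step(90.0, 300.0))  # 90 fps at 300 W -> trim to 285.0 W
```

Nothing in there needs a dedicated NPU.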
System RAM is slower than GPU VRAM, but that extreme slowdown is due to the bottleneck of the PCIe bus the data has to go through to get to the GPU.
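Rough numbers to illustrate, assuming the “weights sitting in system RAM, compute on the GPU” case (both bandwidth figures are ballpark assumptions, not measurements):

```python
# Back-of-the-envelope: time to stream 14 GB of model weights once.
# Both bandwidth figures are ballpark assumptions, not measurements.
weights_gb = 14.0
vram_gbps = 1000.0  # ~1 TB/s, high-end GPU VRAM
pcie_gbps = 32.0    # PCIe 4.0 x16, one direction

print(f"read from VRAM: {weights_gb / vram_gbps * 1000:6.1f} ms")  # ~14 ms
print(f"over PCIe 4.0:  {weights_gb / pcie_gbps * 1000:6.1f} ms")  # ~437 ms
```

Roughly a 30x difference, and that’s with generous PCIe numbers.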
There are some local GenAI music models, although I don’t know how good they are yet, as I haven’t tried any myself (Stable Audio is one, but I’m sure there are others).
Also, a minor linguistic nitpick, but the “LM” in LLM stands for ‘language model’ (you could maybe get away with it for PixArt and SD3, as they use T5 for prompt encoding, which is an LLM; I’m sure some audio models with lyrics use them too). The term you’re looking for is probably ‘generative’.
From the articles I’ve found, it sounds like they’re comparing it to native…
Having to send full frames off the GPU for extra processing has got to come with some extra latency/problems compared to just doing it on the GPU itself… and I’d be shocked if they have motion vectors and the other engine data DLSS uses, since that would require games to be specifically modified for this adaptation. IDK, but I don’t think we have enough details about this to really judge whether it’s useful or not, although I’m leaning toward ‘not’ for this particular implementation. They never showed any actual comparisons to DLSS either.
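Back-of-the-envelope on just the transfer cost (the 32 GB/s figure is a ballpark PCIe 4.0 x16 assumption, and a real implementation might compress or use a narrower link):

```python
# Rough cost of shipping one uncompressed 4K RGBA frame off the GPU
# and back. 32 GB/s is a ballpark PCIe 4.0 x16 assumption.
frame_bytes = 3840 * 2160 * 4    # RGBA8 at 4K, ~33 MB
pcie_bytes_per_s = 32e9          # one direction
one_way_ms = frame_bytes / pcie_bytes_per_s * 1000

print(f"one way: {one_way_ms:.2f} ms, round trip: {2 * one_way_ms:.2f} ms")
# ~1 ms each way before the NPU does any work, out of a 16.7 ms frame
# budget at 60 fps (or ~4 ms at 240 fps)
```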
As a side note, I found this other article on the same topic where they obviously didn’t know what they were talking about and mixed up frame rates and power consumption; it’s very entertaining to read:
> The NPU was able to lower the frame rate in Cyberpunk from 263.2 to 205.3, saving 22% on power consumption, and probably making fan noise less noticeable. In Final Fantasy, frame rates dropped from 338.6 to 262.9, resulting in a power saving of 22.4% according to PowerColor’s display. Power consumption also dropped considerably, as it shows Final Fantasy consuming 338W without the NPU, and 261W with it enabled.
We have plenty of real uses for ray tracing right now, from Blender to whatever that Avatar game was doing, to Lumen, to partial RT, to full path tracing. From what I’ve seen, you just can’t do real-time GI with any semblance of fine detail without RT (although the Lumen SDF mode gets pretty close).
Although the RT cores themselves are more debatably useful, they still give a decent performance boost most of the time over “software” RT.
Yeah, you also have to deal with the latency of the cloud, which is a big problem for a lot of possible applications.
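For scale, compare an assumed ~40 ms internet round trip against per-frame time budgets (the RTT is an assumption; it varies a lot by connection and distance):

```python
# An assumed ~40 ms internet round trip compared against
# per-frame time budgets at common frame rates.
rtt_ms = 40.0
for fps in (30, 60, 144):
    budget_ms = 1000.0 / fps
    print(f"{fps:>3} fps: {budget_ms:5.1f} ms/frame -> RTT is "
          f"{rtt_ms / budget_ms:.1f}x the budget")
```

At 60 fps the round trip alone is over twice the frame budget, before the cloud does any actual work.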
Well, I think a lot of these CPUs come with a dedicated NPU. IDK if it would be more efficient than, say, the tensor cores on an Nvidia GPU, though.
edit: whatever NPU they put in does have the advantage of being able to access your full CPU RAM, so I could see it being kind of useful for things other than custom Zoom background effects.
It doesn’t seem all that hard to make, as long as you don’t mind the severely reduced flexibility in capacity and the glass bottles shattering against each other at the bottom.
Wow, I’m glad I have auto-renew enabled.
The fact that I can go on eBay and get an actually usable laptop for $40
Like, I was playing around with FreeCAD on it a couple of days ago. It just works. The fact that I can get a fully functional personal computer for cheaper than 8 hamburgers is crazy.
Yeah, without an archive, the internet is probably the least permanent form of media we’ve invented so far.
The intent comes from the person who writes the prompt and selects/refines the most fitting image it makes
It wasn’t that new (2017); it just had weird hardware which, IIRC, only recently got supported without proprietary drivers by the new audio system.
This is funny, because on a laptop I had, I went through this exact same progression: I started on Debian, but it didn’t have the right kernel version for my audio drivers, so I switched to Fedora. Fedora ran slowly (probably because of GNOME; it lets you choose a desktop, so that was my fault), so I moved to Arch (with Xfce), since it has a reputation for being relatively lightweight. That worked better, but it took longer to get running on the unusual Chromebook hardware.
IDK, but I think it’s cool that people have the option. Even if you’re just coming up with new ways to do the same things, the ones that turn out better can be picked up by GNU, and other distros can switch, benefiting everyone. Or it could just be a fun hobby; many people do these sorts of things simply because they enjoy them. I guess it might also be the sort of thing you do just to see if it can be done.
Language is always going to change over time. There’s not much anyone can do about it, whether they like it or not. And if you understand what is being said, does it really matter? There have been language mistakes that have slowly been formalized into written language in the past, and I’m sure that will continue into the future.
IMO writing is only really ‘wrong’ if it doesn’t convey the intended meaning or tone (which I’m sure happens a fair amount as well)
Next up: Microsoft announces that development of Bethesda’s next game will be largely outsourced.