• 13 Posts
  • 518 Comments
Joined 2 years ago
Cake day: August 11th, 2023

  • An OS or a hypervisor can run on bare metal. If I have Windows running in KVM, KVM is running on bare metal but Windows isn’t. Ditto with ESXi or Hyper-V. In the case of your setup, Linux and KVM are both bare metal, but Windows isn’t. KVM, ESXi, and Xen always run at a privilege level above their guests. Does this make sense?

    The difference between KVM and the more conventional Type 1 hypervisors is that a conventional Type 1 can’t run alongside a normal kernel. So with Linux and KVM, both Linux and KVM are bare metal. With Linux and Xen, only Xen is bare metal, and Linux is a guest. Likewise if you have something like Hyper-V or WSL2 on Windows, then Windows is actually running as a guest OS, as is Linux or any other guests you have; only Hyper-V is running natively. Some people still consider KVM a Type 1, since it is running on bare metal itself, but you can see how it’s different from the model other Type 1 hypervisors use. It’s a naming issue in that regard.
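    You can even see this from userspace: KVM is just a part of the Linux kernel exposed as /dev/kvm, and VMMs like QEMU drive it through ioctls. A minimal sketch (Linux only, assuming your user can open /dev/kvm):

    ```python
    import fcntl, os

    # KVM_GET_API_VERSION is _IO(0xAE, 0x00) in the kernel headers; the
    # ioctl returns the API version (12 on any remotely modern kernel).
    KVM_GET_API_VERSION = 0xAE00

    fd = os.open("/dev/kvm", os.O_RDWR)
    print("KVM API version:", fcntl.ioctl(fd, KVM_GET_API_VERSION))
    os.close(fd)
    ```

    If that prints a version, the hypervisor is sitting right there inside the running kernel, which is exactly why the usual type labels fit it so awkwardly.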

    It might help to read up more on virtualization technology. I am sure someone can explain this stuff better than me.


  • Yes, I know GPU passthrough is possible. Almost no one does it, as consumer GPUs don’t normally support the virtualization technologies that allow multiple OSes to share one GPU. It’s mostly an enterprise feature. There are projects like VirGL that work with KVM and QEMU, but they didn’t support Windows last I checked, and are imperfect even with Linux guests. I think only Apple Silicon and Intel integrated graphics support the right technologies for this. Buying a second GPU is a good option, although that has its own complexities and is obviously more expensive; most modern consumer platforms don’t have enough PCIe lanes to give two GPUs a full x16 of bandwidth. Windows also has a technology called GPU paravirtualization that makes this work with Hyper-V, but you have to be running a Hyper-V host, not a Linux-based one, and it’s quite finicky to get working.
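    If you do go the second-GPU route on a Linux host, the first thing to check is your IOMMU grouping, since a GPU can only be handed to a VM cleanly via VFIO if it sits in its own group (or shares one only with its own audio function). A rough sketch of that check, using the standard sysfs layout:

    ```python
    import pathlib

    # Each IOMMU group is a directory of PCI devices; anything grouped
    # with the GPU gets dragged along when you pass it through via VFIO.
    groups = pathlib.Path("/sys/kernel/iommu_groups")
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        devices = [d.name for d in (group / "devices").iterdir()]
        print(f"group {group.name}: {', '.join(devices)}")
    ```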

    Out of interest, what games are you running that don’t need GPU performance? Basically any modern 3D game needs a GPU to run well. Obviously 2D games might not, though even that varies.

    All of the above is far more complex than setting up a dual boot. A dual boot can be as simple as having two different drives and picking which one to boot from in the UEFI or BIOS firmware. I don’t understand why you think that would be more complicated than a high-tech solution like virtualization.

    There are basically three types of virtualization in classical thinking: Type 1, Type 2, and Type 3. KVM is none of these. With Type 1 there is no operating system running on bare metal; only the hypervisor itself does. Everything else, including the management tools for the hypervisor, runs in guest OSes. Hyper-V, ESXi, and anything using Xen are great examples. Type 2 is where you have virtualization software running inside a normal OS.

    KVM is special because it’s a hypervisor running in the same CPU ring and privilege level as the full Linux kernel. It’s as if a Type 1 hypervisor ran at the same time as a normal OS in the same space. This means it behaves somewhat like a Type 1 and somewhat like a Type 2: it’s bare metal just like a Type 1 would be, but it has to share resources with Linux processes and other parts of the Linux kernel. You could kind of say it’s a Type 1.5.

    It’s not the only hypervisor these days to use that approach, and the Type 1/2/3 terminology kind of breaks down in modern usage anyway. Modern virtualization has gotten a bit too complex for simplifications like that to always apply. Type 3 had to be added to account for containers, for example. This gets weird when you have modern Linux systems acting as a Type 1.5 hypervisor while also being a Type 3 at the same time.
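    A fun way to see the “your OS is actually a guest” part in practice: on x86, guests get a hypervisor bit set in CPUID, and Linux surfaces it as a flag in /proc/cpuinfo. A quick check (Linux on x86 only):

    ```python
    # The "hypervisor" CPUID bit is set for guests; bare metal lacks it.
    with open("/proc/cpuinfo") as f:
        flags = next(line for line in f if line.startswith("flags")).split()
    print("guest under a hypervisor" if "hypervisor" in flags else "bare metal")
    ```

    Run that inside a KVM guest and it says guest; run it on the Linux host and it says bare metal, even with KVM loaded, because the host itself isn’t a guest.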


  • That’s not how that works. I think you’re confusing bare metal with a bare metal hypervisor. The latter is meant to mean a Type 1 hypervisor, which KVM isn’t anyway, but that’s another story.

    Without GPU passthrough you aren’t going to get anywhere near the graphics performance you’d need for something like gaming. I’ve also had issues with KVM and libvirt breaking during sleep. It’s a lot more janky than you make out.



  • Battery charging circuits don’t keep drawing power once the device is fully charged. A Pi also still needs a PSU, typically a phone charger, and for a server application it would need an SSD or HDD in most cases; SD cards have lower performance, write endurance, and capacity, after all. A single Raspberry Pi couldn’t match even a somewhat old laptop in performance. In terms of actual efficiency (performance per watt) Pis don’t do that well, as they use cheap processors built on old core designs and even older process nodes. Even the latest Pi 5 uses a 16nm process node with a core design from 2018; a 10 year old laptop might have a 14nm process node, which is better. That means the laptop has more performance, so even if it drew more power at peak it could still end up with significantly better performance per watt, and that extra performance lets it idle more often because it spends less time processing requests.

    Of course, the ultimate in performance per watt is always going to be a modern high-power server or an Apple Silicon device. Mini PCs can also do well for home use; they draw much less power, so they’re better suited to less demanding usage, and they have the best performance per watt among consumer devices. The M4 Mac Mini, for example, is pretty much best in class for performance per watt while also keeping power consumption low.


  • I mean, sure. If you want to compare actual efficiency, then performance per watt is the metric. Here a laptop would easily win, as it has higher performance for similar power; the TDP of a U-class processor is normally only 15W. It would obviously help to disable things like Turbo Boost as well. And since said laptop has more performance, it doesn’t need to stay in high power states as long as the Pi does, because it takes less time to process requests. Returning to idle faster is a big advantage.
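    To put rough numbers on it (made-up figures purely to show the arithmetic, not benchmarks): a machine that draws more power while active can still burn less energy for the same work if it finishes faster.

    ```python
    # Hypothetical throughput and power figures, just to illustrate the math.
    jobs = 1000

    devices = {
        "pi":     {"watts": 8.0,  "jobs_per_sec": 50.0},
        "laptop": {"watts": 15.0, "jobs_per_sec": 200.0},
    }

    for name, d in devices.items():
        seconds = jobs / d["jobs_per_sec"]   # time spent at load
        joules = seconds * d["watts"]        # energy used for the batch
        print(f"{name}: {seconds:.0f}s at load, {joules:.0f} J, "
              f"{d['jobs_per_sec'] / d['watts']:.1f} jobs per joule")
    ```

    With those numbers the laptop draws nearly twice the power while active, but uses less than half the energy for the batch, and then it’s back at idle sooner.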


  • Bro please. I understand you can host very small stuff on less powerful Pis. I used to host some stuff on a Raspberry Pi Model B myself. Stop tooting your own horn. You couldn’t, however, host all the stuff I use, or even what most home labbers use, on a Pi Zero with modern software. I doubt it could run Jellyfin, an *arr stack, Ollama, Nextcloud, etc. all at the same time. You would probably also have to stop using containers, which would cost you security and ease of deployment.

    What’s the performance per watt of a Pi Zero anyway? I’m sure its power draw is low, but I doubt it’s actually efficient.




  • Benchmark it and tell me. The truth is that most RPis are made using older process nodes to reduce costs, while laptops are often made using the best available process node and core design. A modern Raspberry Pi 5 uses a 16nm processor with a Cortex-A76 core design from 2018. A laptop in 2015 would have used a 14nm Broadwell processor from Intel, and that was when 15W U-series processors were gaining popularity, so sustained load power consumption is quite low. A 2015 laptop is 10 years old and won’t run Windows 11, so it will be e-waste this year. Same with a lot of 8 year old machines, actually.




  • Why do so many people keep up this misconception that RPis are ultra efficient? They are bargain-basement SoCs. The power draw isn’t that low (they call for a 25W PSU), and the performance they deliver isn’t great. They are all made using older process nodes and techniques that result in less efficient processors. Add those together and you will find they have less performance per watt than any modern laptop with the screen turned off, and less than most mini PCs. Mini PCs and other SBCs are where it’s at for efficient home labs. If you can find a Mac with Apple Silicon for cheap, they are even better. Pretty much everyone in the home labbing community knows this by now; I struggle to understand why Lemmy hasn’t got it through their brains. I think it’s partially the misconception that ARM is always better, and partially down to people not understanding that low maximum power draw and efficiency aren’t the same thing. They aren’t even thinking about idle power or performance per watt.




  • A Raspberry Pi is not as efficient as people are claiming. They need up to a 25W PSU for a reason. Laptops can certainly idle lower than that. Something like an M1 MacBook Air would idle in single-digit territory, as would basically any netbook ever made. Only really high-performance or older laptops have idle power draw issues, since battery life is a major selling point of a laptop. Said laptop is probably also faster than a Raspberry Pi. The people building Pi clusters are really not doing themselves any favors on power efficiency.
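    If you want to measure a laptop’s draw instead of arguing about it, Intel machines expose a running energy counter through the RAPL powercap interface in sysfs. A rough sketch (Intel-only path; this covers the CPU package only, so wall power will read somewhat higher, and it ignores counter wraparound):

    ```python
    import time

    # Package energy in microjoules; sample over an interval to get watts.
    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

    def read_uj():
        with open(RAPL) as f:
            return int(f.read())

    start = read_uj()
    time.sleep(10)
    print(f"average package power: {(read_uj() - start) / 10 / 1e6:.1f} W")
    ```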




  • Laptops don’t even use that much power. You guys are really not into home labbing, or as good with tech as you think you are, lol. Lots of people run older real servers and desktops as home servers; those use way more power than laptops. Raspberry Pis sound good but use progressively more power with each generation, and still struggle to compete with mini PCs and even older laptops on performance. They never had good performance per watt either. For performance per watt basically nothing beats a Mac Mini, though other mini PCs are also good. And laptops aren’t bad on energy efficiency: they are literally designed to run on battery, so they have as little idle draw as possible. They’d be comparable to a mini PC with the display turned off.

    Edit: Modern RPis apparently call for a 25W PSU, which is firmly in the territory of what a laptop would use when not running the screen or charging the battery.