  • I’m 31 and I only really started playing games around 4 years ago, apart from playing on bootleg NES consoles or C64 as a kid.

    It is worth it if you have fun doing it, and you probably will!

    If you don’t know where to start, you probably still haven’t figured out which genres you’d be into.

    You might like Steam Deck, an affordable console-like handheld PC, because:

    • It offers a wide variety of games from all generations, so if you want to experiment with different genres you can always find something for yourself. You can buy a game on the Steam store and, if it’s not for you, return it as long as you’ve played less than 2 hours
    • It’s very user friendly and easy to navigate for non-techies; despite being a PC, it mostly just works, making it a great entry point for folks with no prior PC gaming experience
    • It’s a handheld! Take it with you anywhere, play in bed, on the couch, on the toilet, whatever. If you’re used to playing on a phone, this might be appealing
    • You can still dock it and use it as a regular PC with mouse, keyboard and an external screen if you want to try gaming that way
    • If you want to tinker and explore further, you can emulate older consoles, install 3rd-party launchers, use it for things other than gaming, or even replace the software completely

    Other choices like the Nintendo Switch, Xbox or PS5 are perfectly valid, but they’re locked within their respective closed ecosystems, and with Xbox and PS5 you’re also stuck with a TV. Consoles have limited backwards compatibility: the Switch only supports Switch games, the PS5 supports PS5 and PS4 games, and it’s a bit better with Xbox iirc.

    If you want a Nintendo Switch (say, if games like Mario or Zelda appeal to you), maybe wait a little, as they’re cooking a new generation for release soon-ish and the current one is old and miserable in terms of performance.



  • My bet is it tries to default to a mode that your display doesn’t like, probably because of some wrong info in the monitor’s EDID read over the connector, but that’s just my guess.

    Before booting, press e at the GRUB menu and locate the line starting with linux to append boot parameters. You can force modes using the video= parameter, and you can also replace or override your EDID. Refer to the section “Forcing modes and EDID” on this page: https://wiki.archlinux.org/title/Kernel_mode_setting

    These changes can also be made permanent by editing /etc/default/grub and regenerating its configuration, in case you use GRUB.
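    For example, forcing a mode permanently could look like the sketch below. The connector name and resolution here are assumptions for illustration; check the names under /sys/class/drm/ on your machine:

    ```shell
    # /etc/default/grub — force 1920x1080@60 on the first HDMI connector
    # (HDMI-A-1 and the mode are assumptions; adjust to your hardware)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=HDMI-A-1:1920x1080@60"
    ```

    Then regenerate the configuration with `sudo grub-mkconfig -o /boot/grub/grub.cfg` (on Debian/Ubuntu-based systems, `sudo update-grub` is a wrapper for the same thing) and reboot.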

    Easiest would be to temporarily hook up a separate extra monitor, or to have another computer to connect over SSH, but if those low “safe” graphics modes work, that can probably do as well.





  • At the very beginning, in the early ’90s, Linux adopted the X11 implementation that was XFree86. It was an obvious and pragmatic move: Linux was a UNIX clone with full POSIX compatibility, and X11 had already been around for almost a decade. Porting it allowed for a graphical interface very early on (Linux started in 1991, X11 support was added one year later) and allowed all the contemporary UNIX software to be easily ported to Linux.

    X11, however, was designed with completely different needs in mind, as UNIX machines were mostly mainframes or powerful workstations, not home computers. It carried a lot of features that make no sense in this day and age (like network transparency, drawing primitives, printing capabilities, font rendering etc.) and its design aged like milk. Xorg (a fork of XFree86 started after a license change) was implemented in a way that keeps compatibility for the time being, with many issues worked around and the old design effectively forced into modern use. It’s basically a huge pile of legacy at this point.

    Wayland started as an idea of how to do graphics on Linux (and other UNIX systems) without X, but it was never meant to be a drop-in replacement. That said, it’s vastly incompatible, and the path toward complete Wayland desktops is a long process of gradually adding new protocols until it covers everything.

    Making Wayland possible took a redesign of the OS itself. In the old days, Linux didn’t think much about graphics, and it was the monolithic X server that took responsibility for things like loading video drivers, setting screen modes or pushing stuff to video memory. Wayland was all about splitting X’s features out of X to gradually remove the dependency, so now the kernel has native interfaces like kernel mode setting (KMS), the direct rendering manager (DRM) and so on. It’s not only Wayland taking advantage of them; the same infrastructure is now used under X too.

    Your experience wasn’t much different because it wasn’t meant to be. Desktops that are ported to Wayland are very good at abstracting over the two (otherwise completely different) display systems. You’ll gradually find out about the differences as you dive deeper.

    There are certain limitations of X that Wayland doesn’t have:

    • X cannot handle multiple DPI settings, so it is only possible to set one scaling factor globally for all monitors no matter their size/pixel density
    • X could never properly handle multiple refresh rates for different monitors
    • No way for proper HDR support on X
    • VR is not really a good idea on X

    On the other hand, X is very open to the user and to applications: providing all sorts of information about open windows, and letting any client (focused or not) sniff input globally, is a feature. In 1984 no one really thought cybersecurity would be an important factor. So on Wayland:

    • An app can’t log key presses or mouse movements unless its window is focused (global shortcuts are still an unsolved issue, WIP)
    • An app can’t directly control its window position and size, as those are controlled solely by the compositor (the idea is to introduce a protocol for asking the compositor to position windows relative to some area; it’s WIP)
    • An app cannot grab an image of the screen or a window (this is solved via PipeWire video capture and xdg-desktop-portal)
    • Any GUI automation is compositor-specific, at least for now.

    For those and other reasons (like the availability of desktop environments and window managers), some still prefer the X server.





  • Traditionally on Ubuntu-based systems, those packages get installed as dependencies of a metapackage that pulls in the entire desktop experience; on Ubuntu, for instance, this is ubuntu-desktop (the default GNOME experience), kubuntu-desktop (the KDE Plasma experience) and so on. I believe this won’t be much different for Mint.

    The consequence of uninstalling such a package is removal of the metapackage. You can totally do that, but then its dependencies (the Cinnamon desktop with everything that makes it Linux Mint) become candidates for autoremoval as no-longer-needed packages (so apt autoremove would remove it all), unless they’re marked as explicitly installed by you, or unless they’re “optional” dependencies (Recommends/Suggests). It’s hard to tell precisely what will happen without access to an actual Linux Mint system, but in theory you can cherry-pick whatever you want from that big chunky metapackage, or remove it all and only reinstall the stuff that interests you.
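    A rough sketch of that cherry-picking approach, assuming apt works the same on Mint as on Ubuntu (the package names below are illustrative, not verified against Mint’s actual repositories):

    ```shell
    # Mark the desktop components you rely on as manually installed,
    # so they survive the metapackage's removal:
    sudo apt-mark manual cinnamon cinnamon-core   # illustrative names

    # Removing an app that the metapackage depends on also drops the metapackage:
    sudo apt remove some-unwanted-app             # hypothetical package name

    # Dry-run autoremove (-s = simulate) to preview what would be swept away
    # before actually running it:
    apt-get -s autoremove
    ```

    The simulate flag is the important safety net here: it shows you whether half the desktop is about to be autoremoved before anything is actually touched.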

    I personally wouldn’t bother: I’d just set my default apps to my preference, and if the app menu got too crowded I’d hide entries using something like Alacarte (the old-school GNOME menu editor). That way you know full system upgrades won’t cause any problems, and you effectively replace apps as you desire.

    And it’s true that for a lightweight system with only what you need, something like Debian or Arch would be much better. My experience is that modifying an easy-to-use distribution is (while perfectly possible) usually more effort than building one up from a minimal base.