• 0 Posts
  • 30 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • I didn’t understand the “forced upgrade” argument until now. Yea, I guess you’re right: at some point you have to do updates (they nag about upgrading to 11, but you can skip that indefinitely). But with how popular Windows is, you have options for a lot of problems (including forced updates, which to be fair shouldn’t be ignored when it comes to security patches).

    If you open up Chris Titus Tech’s Windows Utility (https://christitus.com/windows-tool/) you basically have a comprehensive list of all the ameliorations one could ever want at your disposal. That’s really the main thing Windows still has going for it: it’s a decades-long mainstay, which means there are plenty of knowledgeable people out there who know how it can be made to heel. Even if Microsoft decides to force a Microsoft account on you, telemetry, whatever it may be, there will probably always be a way around it.

    For example one of my main gripes with Windows 11 is how you can’t make the taskbar show all tray icons anymore by default. They removed window titles in the taskbar so now everything is basically a square down there meaning there’s all this empty space between my open windows and the tray. But of course someone out there has written a program to automatically unhide all tray icons and thrown it on GitHub.

    To me personally it doesn’t matter how crappy the design choices are as long as they can be mitigated. If bad corporate decision-making is a dealbreaker (which is also a fair assessment) then you have to ditch the corporation entirely and go Linux or what have you. Not trying to be smart or anything, but there really is no reason left to stay on Windows anymore. Maybe if you absolutely need Microsoft Office or something, but ever since Proton came out the issue with Windows-only games has pretty much evaporated.

    Switching to Linux without prior experience will challenge even the most tech-savvy, but it’s an investment worth making many times over.


  • I used Nextcloud for both files and my PortableApps for years but it always had a hard time managing all those tens of thousands of small files. Lots of sync overhead. So I found Seafile and couldn’t be happier. I don’t just have my PortableApps in there now, I sync my Windows Documents, Pictures, Videos and Downloads folders. Seafile is very good at tracking partial changes in files so it doesn’t always need to sync an entire file when just part of it changed.

    Also: It’s just a file sync service without any auxiliary features.
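    The partial-change tracking mentioned above is essentially chunk-based delta sync: the file is split into chunks, each chunk is hashed, and only chunks whose hashes differ get re-transferred. A minimal sketch of the idea in Python (fixed-size chunks and SHA-256 are illustrative simplifications; Seafile’s actual implementation uses content-defined chunking and its own storage format):

```python
import hashlib

CHUNK_SIZE = 4  # tiny for the demo; real tools use megabyte-scale chunks


def chunk_hashes(data: bytes) -> list[str]:
    """Split data into fixed-size chunks and hash each one."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]


def changed_chunks(old: bytes, new: bytes) -> list[int]:
    """Indices of chunks that would need to be re-synced."""
    old_h, new_h = chunk_hashes(old), chunk_hashes(new)
    return [
        i for i, h in enumerate(new_h)
        if i >= len(old_h) or old_h[i] != h
    ]


old = b"aaaabbbbcccc"
new = b"aaaaXXXXcccc"  # only the middle chunk differs
print(changed_chunks(old, new))  # -> [1]
```

    Worth noting that fixed-size chunking degrades badly when bytes are inserted (everything after the insertion shifts), which is why real sync tools use rolling or content-defined chunk boundaries instead.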


  • Not the guy you’re asking but I agree. There would be no need for Falcon Sensor on every Windows machine deployed inside an enterprise (assuming that Falcon Sensor serves a purpose worth fulfilling in the first place) if the critical devices on their network were sufficiently hardened. The main problem (presumably the basis of such a solution existing) is that as soon as you have a human factor, people who must be able to access critical infrastructure as part of their job, there will be breakages of some kind. Not all of those must be malicious or grow into an external threat. They still need to be averted of course.

    I feel that CrowdStrike is an idea that seems appealing to those making technological decisions because it promises something that cannot be done by conventional means as we have known and deployed them before. I can’t say whether or how often this promise has ever enabled companies to thwart attacks at their inception, but again, I feel that in a sufficiently hardened environment, even with compromisable human actors in play, you do not need self-surveillance (at the deepest level of an OS) to this extent.

    And to also address OP’s question: of course there is no need for this in a *NIX environment. There hasn’t been any significant need for antivirus of any kind anywhere in the UNIX-based world, including macOS. So really this isn’t about whether an anti-malware solution in itself can satisfy the needs of a company per se; the requirements very much follow the potential attack vectors that are opened up by an existing infrastructure. In other words, when your environment is Windows-based, you are bound to deploy more extensive security countermeasures. Because they are necessary.

    Some may say that this is due to market share, but to those I say: has the risk profile of running a Linux-based server changed over the last 20 years? Linux servers have certainly become a lot more common in that timeframe. One example I can think of was a ransomware exploit on a Linux-based NAS brand, I think it was QNAP. This isn’t a holier-than-thou argument. Any system can be compromised. Period. The only thing you can ensure is that the necessary investment to break your system will always be higher than the potential gain. So I guess another way to put this is that in a Windows-based environment your own investment into ensuring said fact will always be higher.

    But don’t get me wrong, I don’t mean to say Windows needs to be removed from the desks of office workers. Really, this failure and all these photographs of publicly visible bluescreens (and all the ones in datacenters and server rooms that we didn’t see) show that Windows has way too strong of a foothold in places where plenty of smart people are employed to find solutions that best serve the interests of their employers, including interests (i.e. security and privacy) that they are unaware of because they can’t be printed on a balance sheet.



  • Serious question: how do you get bored of Windows during its heyday?

    My first experience with Linux was Ubuntu 4.10 and it seemed super cool and all but I could’ve never switched fully during those days. And if we’re honest most legit Linux users up until not too long ago were forced to have a dual boot setup because so many things just hadn’t been universalized yet.

    So just to illustrate where I’m coming from asking that question, my first personal computer (as opposed to family PC) ran XP and that was a pretty exciting time when it comes to market dominance and all the advantages that came with being a user of the biggest platform. Looking back I just don’t see how I could’ve ever made that switch in the noughties let alone the 90s. The adoption just wasn’t there yet.



  • Ah yea I didn’t realize the official dock has 2 ports for display output. Valve is bae.

    There are definitely docks that have 3 display outputs, which would be a viable option if you also buy the Wacom Link Plus. I personally don’t know of any docks that have 2 display outputs and a USB-C port that is display-capable. There may be Thunderbolt ones but Steam Deck doesn’t do Thunderbolt unfortunately.

    So yea, I guess your only option is a different dock plus the Wacom Link Plus. Personally I don’t see any other.


  • So the setup is currently two external monitors? Or does that include the Deck monitor? Is the USB connection to the Wacom just for pen input or does it transfer image as well? If USB-C is used as the monitor port it most definitely will not work with USB-A of any kind. Not even USB-A 3.1. You either need a different dock with a USB-C port or you need the Wacom Link Plus (which means you probably also need a different dock with at least 2 HDMI ports or one HDMI and one DisplayPort).


  • Mark my words: don’t ever use SATA-to-USB for anything other than (temporary) access to non-critical preexisting data. I swear to god, if I had a dollar for every time USB has screwed me over while trying to simplify working with customers’ (and my own) drives, I’d be rich. Whenever it comes to anything more advanced than data-level access, USB just doesn’t seem to offer the necessary utilities. Whether this is rooted in software, hardware or both I don’t know.

    All I know is that you cannot realistically use USB to, for example, carbon copy one drive to another. It may end up working, it may throw errors letting you know that it failed, or it may only seem to have worked in the end. It’s hard for me to imagine that, with all the individual devices I’ve gone through, this is somehow down to the parts and that somewhere out there would be something better that actually makes this work. It really does feel like whoever came up with the controlling circuits used for USB-to-SATA conversion industry-wide just didn’t do a good enough job of implementing everything in a way that makes it wholly transparent from the view of the operating system.

    TL;DR If you want to use SATA as intended you need SATA all the way to the motherboard.

    tbh I often ask myself why eSATA fell by the wayside. USB just isn’t up to these tasks in my experience.
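    For whatever it’s worth, the one defense when carbon-copying across a questionable link is to read the copy back and verify it. A sketch of that idea in Python, using ordinary files in place of block devices (on real drives you’d point these paths at device nodes and need elevated privileges; the file names here are made up for the demo):

```python
import hashlib


def copy_and_verify(src_path: str, dst_path: str, block_size: int = 1 << 20) -> bool:
    """Copy src to dst block by block, then re-read dst and compare digests.

    Returns True only if the data read back from dst matches src.
    """
    src_hash = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while block := src.read(block_size):
            src_hash.update(block)
            dst.write(block)
    # Second pass: re-read the destination from scratch and hash it.
    dst_hash = hashlib.sha256()
    with open(dst_path, "rb") as dst:
        while block := dst.read(block_size):
            dst_hash.update(block)
    return src_hash.hexdigest() == dst_hash.hexdigest()


# Demo with a small fake disk image instead of a real device:
with open("disk.img", "wb") as f:
    f.write(b"\x00" * 4096 + b"important data")
print(copy_and_verify("disk.img", "clone.img"))  # -> True
```

    The read-back pass is the point: a flaky USB bridge can report a successful write and still hand back corrupted data, and a plain block copy with no verification will never notice.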


  • Look. You can’t have it both ways. You can either be the “i use arch (and so should everybody else) btw” guy or you can be dumbfounded by people accusing you of being the “i use arch (and so should everybody else) btw” guy. If you do both (in succession I guess) you’re just a parody of your own pro-FOSS message.

    I know I’m probably opening another can of worms by saying this, but I’m an absolute privacy advocate. And guess what? I use multiple Windows installations as part of my day-to-day. Yes, I do want that number to migrate towards zero, but so far, especially when it comes to laptops (and more so laptops with multiple GPUs), I just never saw any appeal in crippling my own experience just for the sake of subjective “freedom”.

    So now imagine a person like me trying to look for help setting up a Pi-hole installation for the sake of privacy. In comes the evangelical “If you actually truly care about your privacy, why are you using Windows?” Sound familiar? How about helpful (in terms of getting someone closer to a Pi-hole installation)?



  • Said like a person that doesn’t want to “argue till the end of the universe”. Maybe just take the hint once there are multiple people trying to politely tell you the same thing? Prove that you’re not just good at fortifying the walls around your bubble. Criticism is rarely meant to attack us. Nobody is accusing you of a crime. I know it’s hard to take that step back from one’s own perspective.

    Again, just because something works for you doesn’t mean you have to be evangelical about it. Don’t try to be the “I use arch btw” meme for real.


  • Once you face the (seemingly) inevitable necessity of further hardware purchases it does become sort of tedious, I must say. I used to treat my RAID parity as a “backup” for way longer than I’d like to admit because I didn’t want my costs to double. With unraid I at least don’t have the same management workload as on my main box, which runs rolling-release Arch with manually installed ZFS, where the build always has to line up with the kernel version and all that jazz. Unraid is my deploy-and-forget box. Rsync every 24h. God bless.

    Proxmox had been recommended to me before I switched my main server to Arch, but once I realised that it has no direct Docker support I thought I’d rather just do things myself. It really is a matter of preference. It’s kind of hard to believe that all the functionality in Proxmox can be had for absolutely free.


  • don’t owe OP an answer

    Exactly. Since its dawn, forums on the internet have been full of people countering legitimate questions with “why would you even ask that?”. Not only is nobody owed your “contribution”, it is of zero value.

    because something exists doesn’t mean it should be installed

    Elitist much. Why would you rather assume that a tech-savvy person is asking for tech guidance than the infinitely more likely opposite case? The answer is because you (elitist) think what works for you is the only valid path and all must be guided to your subjective treasure. Your intentions may be benign but your methods are not.


  • It’s understandable that you want to take your virtualization capabilities to the next level, but like many others here I also don’t see the appeal of containerizing unraid. I started using unraid last autumn and to me it really is about being able to mix drive sizes. It’s a backup to my main server’s ZFS pool, so (fingers crossed) I don’t even really worry about drive failures on unraid. (I have double parity on ZFS and single parity on unraid.)

    Anyways, my point is I started out with 8 SATA slots plus an old USB-based enclosure which I set to JBOD mode, and that was a pretty stupid idea. unraid couldn’t read SMART data from those USB drives. Every once in a while one of the drives would suddenly show up as having an unsupported partition layout. A couple weeks ago all 5 drives in the enclosure started showing up as unusable. So as you can imagine I dropped that enclosure and am now working solely off the 8 internal slots. I’d imagine that virtualizing unraid’s disk access might yield similar issues. At least the comments of people here remind me of my own janky setup.
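    Side note on the SMART issue: some USB–SATA bridges do pass SMART through if smartctl is forced into SAT mode (`smartctl -d sat -i /dev/sdX`); whether that works depends entirely on the bridge chip. A small Python sketch of checking smartctl’s info output for passthrough (the sample outputs below are illustrative, not captured from real hardware):

```python
def smart_passthrough_ok(smartctl_info: str) -> bool:
    """Heuristic: 'smartctl -i' prints 'SMART support is: Available' when
    the bridge passes ATA commands through; its absence (or an error about
    an unknown bridge) suggests the USB bridge is swallowing them."""
    return any(
        line.strip().startswith("SMART support is:") and "Available" in line
        for line in smartctl_info.splitlines()
    )


# Illustrative output fragments, not from real hardware:
good = (
    "Device Model:     Example HDD\n"
    "SMART support is: Available - device has SMART capability.\n"
)
bad = (
    "/dev/sdb: Unknown USB bridge [0x1234:0x5678]\n"
    "Please specify device type with the -d option.\n"
)
print(smart_passthrough_ok(good), smart_passthrough_ok(bad))  # -> True False
```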