• 0 Posts
  • 75 Comments
Joined 1 year ago
Cake day: June 20th, 2023



  • Two professional 27" 4k dell monitors cost ~$800 combined. You overpaid like a mf if you spent $2000 on a monitor.

    Sorry, but you don’t understand the needs of the market that we’re talking about if you think that a pair of ~$400 dell monitors is equivalent to a high-end display. The difference between $800 and $2500 amounts to a few days’ worth of production for my workstation, which is very easily worth the huge difference in color accuracy, screen real estate, and not having a bezel run down the middle of your workspace over the 3-5 years that it’s used.

    blah blah blah

    I already said that I’m talking about the Vision Pro as a first step in the direction of a fully-realized AR workstation. As it currently stands, it’s got some really cool tech that’s going to be a lot of fun for the guinea pig early adopters that fund the development of the tech I’m personally interested in.


  • What purpose does a MacBook serve that an office from the 1980s wasn’t equipped to handle?

    AR devices in an office serve the same purpose as existing tools, but there are ways that they can improve efficiency, which is all the justification office tech needs. Shit, my monitor costs 2/3 the price of the Vision Pro, and an ideal piece of AR hardware would be immeasurably better. Meetings in virtual space would go a long way toward fixing how much remote meetings suck. Having unlimited screen real estate would make a huge difference in my line of work. Also, being able to work anywhere, in my home or out of it, with as much screen real estate as I want would be huge.

    I’m not saying that the Vision Pro does all of those things, but it does some of them, and I’m 100% okay with it being the thing that introduces the benefit of AR to those without imagination.


  • You are delusional. It’s wild that you’re citing Apple’s privacy policy as a source when it directly contradicts what you’re claiming.

    The authoritative sources that you listed explicitly state:

    • Apple only delivers ads in 3 places (App Store, Apple News, Stocks). Contrast this with Google, which delivers ads on virtually every app on every screen you interact with if you’ve got an Android phone.

    • Apple doesn’t share any personal data with third parties for advertising. They also don’t “sell” your data at all. They also don’t buy (or receive) any personal data from third parties to use for marketing. Again, contrast that with Google, whose entire business model is doing each of those things as invasively as possible.

    I’m not claiming that Apple is “moral” or “ethical” or anything like that. But Apple’s profits are driven by them selling hardware, which means that if I’m someone who wants to buy hardware, their interests are at least somewhat aligned with mine. On the other hand, Google’s profits are driven by selling ads that are based on the most emotionally charged personal information they can gather. Any service they provide you is just bait for you to chew on so they can build the inventory they sell to advertisers.

    Sorry, but you really need to lay off the crack, my friend.




  • This is just false. That thing you’re buying from Amazon? Just go to the manufacturer’s website and buy it directly. Or if it’s a no-name thing like a generic charging cable, just buy it from literally any other generic [category] retailer.

    My wife and I got sick of paying for Prime, so we decided to try going a couple of months buying as much as we could directly from the brands’ websites. It’s easy. Customer service is way better, selection is way better, and I don’t have to worry about getting fake crap. The only downside is that shipping usually takes longer, but that’s a small price to pay.

    Amazon sucks.



  • Okay so your comment about “waddling from the toilet to the bidet” is all someone needs to read to know that you have no idea what you’re talking about.

    Detached bidets exist, but nobody is buying them for $45 on Amazon.

    The type of bidet that people are talking about here are ones that attach to your toilet. You twist a knob to activate the sprayer, which hits where it’s supposed to hit without you having to move.

    You don’t waddle anywhere. It takes 5 seconds to wash. You use one wipe with 3 squares to dry, which is hopefully at least a few times less than you use when you dry wipe. You absolutely feel cleaner afterwards, because you’re using water to remove the shit instead of smearing it around with dry paper.

    The problem that it solves is that you don’t have to walk around with an unwashed ass. Maybe having a disgusting unwashed ass isn’t a problem for you. Maybe if you got shit on another part of your body, you’d just wipe it with some TP and call it good. I’m not judging. Seems weird as hell that you’re trying to shame people who would rather use water to get the shit off, though.


  • I’m the CEO of an anti-phishing training corporation that services multiple Fortune 500 companies and has a yearly revenue of over 10m USD (I can also share unverified credentials to make myself seem more credible).

    Someone could potentially build a website that makes their phishing attempt seem more credible, and maybe they could get that website ranked highly on Google (even though that is far from straightforward for a website presenting fraudulent information), but that’s a total red herring. The article didn’t recommend that people Google for a single random website that confirms the questionable information; the recommendation was to check multiple authoritative sources.

    You are absolutely wrong. It’s not surprising that you’re (ostensibly) able to scam the technologically illiterate with such bad information; it’s a little ironic that your scam involves getting them to think that you’re teaching them how to avoid scams.




  • The correct thing to do if you got that email would be to try to verify the information that it presents. Is Geek Squad Academy a real thing? How much does their antivirus cost?

    Which is exactly what the article says to do, and what you should have done before answering the question. Of course getting the questions right doesn’t matter, but the question and explanation are an excellent example of what they’re trying to teach.

    Also, the grammar was just a little bit funky in that email. It could just be that the Geek Squad email writer has funky grammar, but it’s definitely a red flag that should make you want to double-check the info in the email.


  • You (and half the people in this thread) are totally missing the point here.

    Nowhere does the article say that you’re supposed to be able to tell if it’s a scam or not just by looking at it. In fact, in multiple places it says that you’ve got to Google (or otherwise consult a credible source) to externally verify some information to determine that some of the examples are scams.

    The point of the article is to teach people how to recognize scams; it would be totally useless if it imposed the constraint that you can’t look for context. If you’re actually trying to recognize scams IRL, you should be doing exactly what the article says and looking for authoritative corroboration of any information in the potential scam.


  • Yeah, but the point is that if you open a web browser and look that settlement up, you’ll find a ton of authoritative sources that link back to that URL.

    The point of this wasn’t to see if you could tell if each thing was likely to be a scam in the context that you would genuinely run into them.

    If my grandma approached me with the class action website and asked if it was a scam, I’d tell her “it looks really suspicious, let’s see if we can find anything from a credible source that will link to this website.” Which is exactly what the article tells you to do. Of course nobody could just magically know if a screenshot of a webpage is a scam just by looking at it.

    The other options all either give you enough information in the screenshot to be able to Google a couple things and say “it’s a scam” confidently (class action, geek squad), or they’re full of super blatant red flags (Zelle bike).



  • Sure.

    MacOS is an excellent workstation operating system, largely due to its near-POSIX compliance and the fact that it has access to the enormous body of tools developed for UNIX-like OSs. For development work in particular, it can use the same free and open source software, configured in the same way, that Linux uses. Aside from the DE, a developer could swap between Linux and MacOS and barely notice. Everything from Node, to Clang, to OpenJDK, to Rust, along with endless ecosystems of tooling, is installable in a consistent way that matches the bulk of online documentation. This is largely in contrast to Windows, where every piece of the puzzle has a number of gotchas and footguns, especially when multiple environments are installed side by side.
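
    To make that near-POSIX point concrete, here’s a quick sketch in Python (standard library only, nothing beyond a POSIX system assumed) that runs identically on MacOS and Linux but falls over on stock Windows, because the POSIX-only modules it leans on simply don’t exist there:

    ```python
    #!/usr/bin/env python3
    """Tiny POSIX smoke test: behaves the same on MacOS and Linux."""
    import os
    import pwd       # POSIX-only module; not available on Windows
    import fcntl     # POSIX-only module; not available on Windows
    import tempfile

    # uname() and getuid() are POSIX calls with the same semantics on both OSs
    print("kernel:", os.uname().sysname)                  # 'Darwin' on MacOS, 'Linux' on Linux
    print("user:  ", pwd.getpwuid(os.getuid()).pw_name)

    # Advisory file locking via fcntl works the same way on both systems
    with tempfile.NamedTemporaryFile() as f:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)            # take an exclusive lock
        f.write(b"locked while we hold it\n")
        fcntl.flock(f.fileno(), fcntl.LOCK_UN)            # release it
    ```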

    From a design perspective, MacOS is opinionated, but feels like it’s put together by experts in UX. Its high usability is at least partially due to its simplicity and consistency, which in my opinion are hallmarks of well-designed software. MacOS also provides enough access through the Accessibility API to largely rebuild the WM, so those who don’t like the defaults have options.

    The most frequent complaint that I hear about MacOS is that x feature doesn’t work like it does in Windows, even though the way that x feature works in Windows is steaming hot garbage. Someone who’s used to Windows would probably need a few hours/days to become as fluent with MacOS, depending on their computer literacy.

    People also complain about the fact that MacOS leverages a lot of FOSS software while Apple keeps its own software closed-source and proprietary. I agree with this criticism, but I don’t think it has anything to do with how usable MacOS is.

    I’m not going to start a flame war about mobile OSs because I don’t use a mobile OS as my primary productivity device (and neither should you, but I’m not your mom). The differences between mobile OSs are much smaller, and are virtually all subjective.

    You’re welcome.


  • Having the highest market share doesn’t mean that Windows uses logical conventions; it just means that lots of people are accustomed to the conventions that it uses. The vast majority of professionals that I’ve interacted with strongly dislike having to work on a Windows machine once they’ve been exposed to anything else.

    Off the top of my head, the illogical conventions that Windows uses are: storing application and OS settings together in an opaque, dangerous, globally-editable database (the registry); obfuscating the way that disks are mounted to the file system; using CR/LF for new lines; using a backslash as the path separator; not having anything close to a POSIX-compatible scripting language; the stranglehold that “wizards” have on the OS at every level; etc., ad nauseam. Most of these issues are due to Microsoft deciding to reinvent the wheel instead of conforming to existing conventions. Some of the differences are only annoying because they pick the exact opposite convention from everyone else (path separators, line endings), and some of them are annoying because they’re an objectively worse solution than what exists everywhere else (the registry, installation/uninstallation via wizards spawned by a settings menu).
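
    To make the path-separator and line-ending point concrete, here’s a small Python sketch (standard library only; the paths are made up purely for illustration) showing how the Windows conventions diverge from what every other OS settled on:

    ```python
    from pathlib import PurePosixPath, PureWindowsPath

    # Everyone else: forward slashes for paths, bare LF ("\n") line endings
    posix_path = PurePosixPath("home") / "user" / "project" / "main.c"
    print(posix_path)            # home/user/project/main.c
    print(repr("line one\n"))    # LF only

    # Windows: backslashes for paths, CR/LF ("\r\n") line endings
    windows_path = PureWindowsPath("C:\\Users") / "user" / "project" / "main.c"
    print(windows_path)          # C:\Users\user\project\main.c
    print(repr("line one\r\n"))  # CR followed by LF

    # A file saved with CR/LF grows stray "\r" characters when read by tools
    # that expect plain LF, which is where a lot of the friction comes from.
    ```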

    For basic usability functions, see the lack of functional multi-desktop support 20 years after it became mainstream elsewhere. There is actually no way to switch one monitor to a 2nd workspace without switching every monitor, which makes the feature worse than useless for any serious work. In addition to that, window management in general is completely barebones. Multitasking requires you to either click on icons every time you want to switch a window, or cycle through all of your open windows with alt-tab. The file manager is kludgy and full of opinionated defaults that mysteriously only serve to make it worse at just showing files. The stock terminal emulator is something out of 1995; the new one that can be optionally enabled as a feature is better, but it still exposes a pair of painful options for shells. With WSL, Windows Terminal suddenly becomes pretty useful, but having to use a Linux abstraction layer just serves to support the point that Windows sucks.

    I could go on and on all day. I’m an SWE with a decade of experience using Linux, 3 decades using Windows, and a few years on Mac here and there. I love my Windows machine at home… as a gaming console. Having to do serious work in Windows is agonizing.


  • Of the three major desktop operating systems, Windows is by far the worst.

    The only advantage Windows has is that Microsoft’s monopolistic practices in the 90s and 00s made it the de facto OS for businesses to furnish employees with, which resulted in it still having better 3rd party software support than the alternatives.

    As an OS, it’s hard to use, doesn’t follow logical conventions, is super opinionated about how users should interact with it, and is missing basic usability features that have been in every other modern OS for 10+ years. It’s awesome as a video game console and barely usable as an Adobe or Autodesk machine, but it sucks as a general purpose OS.