• 3 Posts
  • 94 Comments
Joined 1 year ago
Cake day: September 7th, 2023


  • I’m not sure what you mean when suggesting Linux is a singular implementation around which features are exclusively designed. There’s all kinds of software that runs on all kinds of different OSes. Userspace applications, for example, can take advantage of POSIX compatibility to ensure that they run on all platforms (Linux, BSDs, even Windows).

    Does systemd have any similar sort of compatibility guarantee? Can I run systemd-whateverd on BSD? Can I run systemd itself on BSD? I’m pretty sure most other init systems support at least one other OS if not more. Would the maintainers even support merging patches that do this? What about musl?




  • +1. systemd is something the Linux ecosystem really needs, but its execution is abysmal. We should be designing around standards so that the best product can win. We should not be designing around singular implementations that could make it easy for Red Hat to execute an EEE strategy to consolidate Linux on the workstation.

    I can’t wait till a CrowdStrike-like flaw is exposed in systemd so we can all see how terrible^W wonderful monocultures can be.




  • The full write-up can be found here and should be fairly readable for users of this forum.

    Some quotes that I thought were interesting:

    With a heap corruption as a primitive, two FILE structures malloc()ated in the heap, and 21 fixed bits in the glibc’s addresses, we believe that this signal handler race condition is exploitable on amd64 (probably not in ~6-8 hours, but hopefully in less than a week). Only time will tell.

    So it seems 64-bit systems are a bit more resistant to this? I can’t be completely sure, though, given how little I’ve read about it so far.

    This vulnerability is exploitable remotely on glibc-based Linux systems, where syslog() itself calls async-signal-unsafe functions (for example, malloc() and free()): an unauthenticated remote code execution as root, because it affects sshd’s privileged code, which is not sandboxed and runs with full privileges. We have not investigated any other libc or operating system; but OpenBSD is notably not vulnerable, because its SIGALRM handler calls syslog_r(), an async-signal-safer version of syslog() that was invented by OpenBSD in 2001.

    It seems that non-glibc-based systems could also be vulnerable, but they have not yet tried to demonstrate it (or have tried and not been successful).

    And OpenBSD wins again, it seems.


  • I would vote for Docker as well. The last time I had to inherit a system that ran on virtual machines, it was quite a pain to figure out how the software was installed, what was where in the file system, and where all the configuration was coming from. Replicating that setup took months of preparation.

    By contrast, with Docker, all your setup is documented. The commands used to install our software, long since gone from those virtual machines, would be sitting right there in the Dockerfile. And building the code? An even bigger win for Docker. In the VM project, the build environment for the C++ portion of our codebase was configured by about a dozen environment variables, none of which were documented. If it had been built in Docker, all the necessary environment variables would have been right there in the build environment, and the build commands themselves would be there too. With the VMs, we would often have developers build locally and then copy the result into the VM, which was terrible for reproducibility and for onboarding new developers.

    That said, this all comes down to execution - a well-managed VM system can easily be much better than a poorly managed Docker system. But in general, I feel that Docker tends to be easier to work with than a VM. While Docker is far from flawless, there are a lot more things that can make life harder with VMs, at least in my experience.


  • Do you know how vim has distributions like lunarvim, lazyvim, nvchad, etc.? Simply installing something like lazyvim can quickly and easily convert vim from a text editor to a full-blown IDE.

    I think Gnome needs something like this: a curated set of plugins that is easy to install and maintains compatibility with different versions of Gnome - something that would deal with the API churn in Gnome while providing a stable, usable desktop environment.

    I don’t know if this is feasible, because I haven’t used Gnome since 2.x, but I think it would really help make it an actual full-blown DE.


  • I doubt that you’re interested in arguing in good faith, but if by some miracle you do care about having an informed opinion before opening your mouth, how would you respond to things like this?

    This essentially killed my (EU-based) startup in the project management and collaborate space. Before MSFT bundled Teams with O365 we were rapidly growing and closing enterprise customers in the automotive, energy and education industries with high retention rates. Right around the time the Teams bundling started our retention dropped, churn went through the roof, growth slowed down, we failed to raise our next round because of it and had to drastically downsize the company, causing even more churn (about 80% net churn in 2 years). This move by the EU is good, but too little too late - 99% of the companies that were hurt by this have already shut down, and the ones still running will take years to recover…


  • Interesting! Sorry, I don’t know why I thought you were using swipe keyboards; it must have been stuck in my memory from reading other comments. I definitely agree that pressing the buttons was a little annoying, but manufacturers could probably make softer buttons if they were willing to put the money into developing them.

    Anyway, I really miss the phone I had from about 2008-2010. It had two sliders that moved in orthogonal directions. One of the slide directions revealed a standard 12-button phone pad, while the other had a 4-row keyboard. And yet, I’m pretty sure it was under 1.5cm, so not too large. It was definitely easier to keep in my pocket than current phones!

    If it weren’t for reading Lemmy/RSS feeds and a camera, I’d probably be going back to dumb phones for my next one…


  • I just use the lazy plugin manager (not to be confused with lazyvim) to set up a few plugins for my environment. I followed this guide and chose only the plugins and configuration that I like. I’ve used vim for over 15 years now but have only used plugins for the past 2-ish years, so I don’t like it when distributions mess up existing keybindings and other default behaviors. Lazy makes it very easy to set up your own environment, and I was able to learn how to do it in a relatively short time with some guides and tutorials.

    It’s not for the faint of heart, but for me, the personal customizability is worth it, as is not having plugins installed that I don’t want or need. A lot of the time, the distributions’ defaults are more modern, but they would also require unlearning my existing habits and learning new ones, which I would rather not do, so I prefer doing it this way.

    But I will say that it can be helpful to look at existing distributions to see how they implement their configurations when I run into trouble with my own. Sometimes I’ll steal their keybindings and adjust them to my own preferences. It’s also a great way to explore new functionality and configuration options.


  • But what’s the error rate? I could type at 200 words per minute (even on a phone!!) if I didn’t care about how many typos I was making. And swiping keyboards get confused incredibly easily. The error rates are especially bad when you’re writing words that only use a single row of keys - on a QWERTY keyboard, for example, try writing something like “type”, and you could get that, or you might get something else, like wipe/write/ripe. Other groups could include things like tip/top, pit/pot, wit/wire, and the selected word will be wrong almost as frequently as it’s right. And autocorrect systems can’t really correct for things like when you mean to press enter and hit the backspace key instead. Plus, their suggestions are generally just very stupid. So while buttons take longer to press on physical keyboards, the reduced error rate makes typing speed about the same in my experience.

    Plus, with physical buttons, you get tactile feedback, so you can tell when your fingers are slightly off and adjust them, whereas on a flat surface, you have no idea whether you pressed the correct button or not. You have to stare straight at the screen to make sure every press is correct, which is exhausting and bad for your eyesight. I feel a lot more eyestrain from simply typing on phones, whereas with physical buttons, I didn’t even have to look at the screen, and I could look at something else around me while typing. And don’t get me started on how many calls I’ve missed because I accidentally hit the hang-up button, or couldn’t find the accept call button - not a problem when you have physical buttons!

    Regarding screen real estate, all you need is a slide-out keyboard. They work great!

    There are a few downsides to physical keyboards, but in my experience, they’re far superior to non-keyboard devices. But what can you do - in the 21st century, practicality never matters, it’s just all about aesthetics and nothing else…


  • Just in case this comment didn’t make it explicitly clear: you can invoke the python binary inside your venv directly, and it will automatically locate all the libraries that are installed in your virtual environment.

    To show how this works, you can look at the sys.path variable to see which paths python will search for modules when you run import statements. Try running python3 -c 'import sys; print(sys.path)' using your system python, and you will only see system python library paths. Then, try running it again after replacing python3 with the full path to the python3 binary in your venv, and you will see an additional entry in the output with the lib directory in your venv, which shows that python will also look there for modules when an import statement is executed.
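
    To make that concrete, here is a minimal sketch (the venv location in it is hypothetical - substitute the path to your own venv) that runs that same one-liner with both interpreters and prints the entries that only the venv’s interpreter searches:

        # Sketch: compare the module search paths of the system interpreter and a
        # venv's interpreter. VENV_PYTHON is a hypothetical path - substitute your own.
        import json
        import subprocess

        CMD = "import sys, json; print(json.dumps(sys.path))"
        VENV_PYTHON = "/home/user/myproject/.venv/bin/python3"  # hypothetical

        system_paths = json.loads(subprocess.check_output(["python3", "-c", CMD], text=True))
        venv_paths = json.loads(subprocess.check_output([VENV_PYTHON, "-c", CMD], text=True))

        # Typically prints the venv's lib/pythonX.Y/site-packages directory
        print([p for p in venv_paths if p not in system_paths])

    On a typical setup, that extra entry is the venv’s site-packages directory, which is exactly where pip installs packages while the venv is active.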


  • The intent is to allow companies time to implement the change. But if you’ll pardon my cynicism, in practice, what ends up happening is companies just use it as a tactic to delay the implementation and continue recording the revenue.

    At the very least, they should forfeit the revenue they earn from this practice during that period. I’m not sure exactly how the fines work and whether they take this into account, but I doubt Apple is seriously going to use the 12-month period to actually come clean and change their ways. I think they’ll just use it as more time to come up with some new bullshit form of non-compliance.


  • Excellent news:

    At the heart of Monday’s findings are three elements of Apple’s practices, including fees charged to app developers for every purchase made within seven days of linking out to the commercial app.

    source

    This is, in my opinion, the most egregious non-compliant practice from Apple. They have no reason whatsoever to entitle themselves to purchases made outside their repository just because the software runs on their hardware. It’s also the most asinine set of rules that they established to pretend that they were complying with the DMA.

    It’s a bit disappointing that it will take so long before the fines can be enforced, but I really hope that they get the maximum penalty over this, because it’s really the most shockingly brazen breach of the DMA’s terms. In fact, I hope the maximum penalty gets imposed on them multiple times - the same article I linked mentions that there are two other DMA investigations being launched into Apple, though I don’t know what those other investigations are looking into.

    And I really hope Apple gets the message loud and clear: they’re gonna start making less money. And this is a good thing. They don’t deserve it, and they were never entitled to it in the first place. This is what happens when you invent new revenue streams that are criminally worthless.


  • Such a sad world we live in. When the internet was hitting the mainstream, virtually everything was standardized. There were RFCs for probably every protocol the internet operated on: email, HTTP, DNS, TCP/UDP/IP, etc.

    Today, we live in a world where we can’t even decide on a fucking chat protocol without making it a proprietary piece of garbage. The internet has been consolidated into giant companies that see interoperability as a weakness that enables their competitors and prevents them from oppressing and exploiting their users.

    A small group of gatekeepers killing anything nice for their own short-term gain: it’s sad but true that it feels like any technology that becomes commercially successful will end up this way.


  • If I want to make a piece of software to improve people’s lives and I don’t care to do it for free, I’ll choose MIT. If it gets “stolen” by a for-profit corporation it only makes it better, because now my software has reached more people, thus (theoretically) improving their lives.

    I’m not completely sure about this.

    Suppose you write a library that a company like Facebook finds useful. Suppose that they incorporate it into their website. I’m sure I can skip the portion of this post where I enumerate the harms that Facebook has wrought on society. Do you think your software has improved people’s lives by enabling Facebook to do those sorts of things? They would not have been able to do them if you had used the AGPL instead.

    And I don’t want to make it seem like we should never do anything because someone might use the product of our work in a sinister way (because that would quickly devolve into nihilism). If 99 people use it for good and 1 for evil, that’s still a heavy net positive. But at the same time, I would be lying if I didn’t acknowledge that the 1 person using it for evil would still make me feel bad.


  • I was surprised that this comment got so many upvotes, so I’ll respond by saying that, with all due respect, I think your argument is much more fallacious than the one you are trying to debunk.

    The comic author takes one specific case of an MIT licensed product being used in a commercial product, and pits it against another GPL product.

    Yes, this is called an example. In this case, the author is using a particularly egregious case to draw a broader conclusion: namely, that if you release software under a “do whatever you want” license, it may come back to bite you in the future when it’s used in a product that you don’t like.

    This comic is a warning to developers that choosing MIT/BSD without understanding this fact is a bad choice.

    This ignores situations where MIT is the right answer, where GPL is the wrong one

    It does not ignore those situations. All situations are multifaceted and need to take multiple considerations into account. The author is arguing that people should take care not to overlook the particular consideration to which he is trying to draw attention.

    situations where legal action on GPL violations has failed

    Just because legal efforts have failed does not mean that they are not worthwhile. There may be many cases where people avoided misappropriating GPL software because they did not want to deal with the license - there may be cases where people were less hesitant about doing so with MIT/BSD because they knew this risk was not there.

    From that I conclude that this falls under The Cherry Picking Fallacy. While humorous, it’s a really bad argument.

    Just because the author used a single example does not mean that others do not exist. Assuming that it does is itself far more fallacious, and it invalidates much of your argument.

    and all cases where the author’s intent is considered (Tanenbaum doesn’t mind).

    Just because Tanenbaum didn’t mind does not mean that other developers who mistakenly use MIT/BSD won’t mind either. Also, it honestly shouldn’t matter what Tanenbaum thinks, because we don’t know what his rationale is. Maybe he thinks malware is a good thing or that IME is not a serious issue - if that’s the case, do we still consider his sentiments relevant?

    commonly referred to as “cuck licenses”

    This sentiment makes the enclosing sentence an Ad-hominem fallacy

    It does not, in fact. Just because the author used a slang/slanderous term to describe the licenses he doesn’t like does not mean that his logical arguments are invalid. An ad hominem fallacy is when you say “the person who argued that is $X, therefore his logic is invalid”, not when someone uses a term that may be considered in poor taste.

    by attacking the would-be MIT license party as having poor morals and/or low social standing.

    Misrepresentation. The author is not arguing that they have poor morals; he is arguing that they are short-sighted and possibly naive with regard to the implications of choosing MIT/BSD.

    My conclusion: I appreciate the author for making this post. People should be more aware of the fact that their software could be used for nefarious purposes.

    So unless you really don’t care about enabling evil people, you should be defaulting to using GPL. If people really want to use your copyleft software in a proprietary way, then it is easily within their means (and resources) to get an exemption from you. The fact that there is so much non-GPL software out there makes the GPL itself weaker and makes it easier for nefarious interests to operate freely.

    (Not that I would ever release software under the GPL myself. I think software licenses are stupid. But having no license at all basically imposes the same limitation on derivatives as the GPL, so it doesn’t matter as far as I’m aware.)