• 0 Posts
  • 209 Comments
Joined 2 years ago
Cake day: May 22nd, 2023

  • It’s not that far-fetched, PDFs in my opinion are closer to vector graphics than to document formats like odt and docx. They have no understanding of structure unless you use advanced features: a table in a PDF is just spaced text with lines drawn between it, and text is just independently placed letters. In fact the space character doesn’t even exist in most PDFs, it’s just that two letters were placed further apart. So they basically are multiple canvases that get painted on with letters, lines, filled areas and even bitmap graphics.
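    To make that concrete, here’s a hand-written sketch of what the relevant part of a PDF content stream looks like (simplified, not copied from a real file): the text operators place glyph runs at explicit positions, and a “space” can be nothing more than a horizontal offset between two runs:

      BT                                  % begin text object
        /F1 12 Tf                         % select font F1 at 12pt
        72 720 Td                         % move to x=72, y=720 on the page canvas
        [(Hel) 20 (lo) -2500 (world)] TJ  % draw glyph runs with explicit offsets;
                                          % the negative adjustment before (world) widens the gap,
                                          % so it renders like a space, but no space character is stored
      ET                                  % end text object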

    Modern PDF actually goes further in the direction of a document format by optionally providing the content in a structured way (tagged PDF), mostly for accessibility, but also to make the format suitable for automatically processing the contained data.


  • I don’t think that’s what’s happening. There’s no hard requirement for cat to read everything straight into memory. It can send data as it becomes available, and the receiving process can read it as fast as it wants. There are cases where this is clearer: let’s say you have a big video file that you want to feed to an encoder that only supports y4m input and isn’t built into ffmpeg. A common way is something like ffmpeg -i infile -f yuv4mpegpipe - | encoder --y4m outfile - I’m pretty sure ffmpeg won’t read the whole infile into memory, nor will it store the whole y4m representation in memory. Instead, it will decode infile as needed and push it into the pipe at the speed the encoder can handle.
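    A quick way to convince yourself that pipes stream rather than buffer whole files: both of these finish immediately even though the left side could produce data forever, because the writer blocks as soon as the (small) pipe buffer is full and stops once the reader is done:

      yes | head -n 3                      # 'yes' would print "y" forever
      cat /dev/zero | head -c 1M | wc -c   # "cat" an endless file, read only 1 MiB of it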

    But yeah, I remember something about tar using libraries for compression being more efficient than piping its output to a compressor. So it’s still the better route, just probably not as much better as you think.


  • Then those containers or virtual machines should add this or create the home as needed.

    systemd has its own containers, so this is the implementation of that requirement; “virtual machines” might use this exact binary to create /home, among other directories like /srv and whatnot. Someone at one point probably said “we always need to create these when spinning up systems, maybe systemd can provide a mechanism to do that for us?” and then it was implemented.
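    The shipped tmpfiles.d entries are roughly along these lines (simplified and from memory; the exact types and modes may differ between systemd versions):

      # /usr/lib/tmpfiles.d/home.conf (simplified)
      # Type  Path   Mode  User  Group  Age
      d       /home  0755  -     -      -
      d       /srv   0755  -     -      -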

    Having /home listed as a tmpfile on regular systems is problematic by the nature of what tmpfiles claims it does.

    systemd-tmpfiles claims the following:

    systemd-tmpfiles creates, deletes, and cleans up files and directories, using the configuration file format and location specified in tmpfiles.d(5). Historically, it was designed to manage volatile and temporary files, as the name suggests, but it provides generic file management functionality and can be used to manage any kind of files.

    I rather think having a purge command was the issue here; at the very least it should print a big fat warning about what it does, or better yet, list all affected files and directories first. There’s no reason a normal user needs this, and given the name of the binary it’s totally misleading, which is an issue in these situations.


  • E.g. for quick provisioning of containers or virtual machines; this is also there to make sure the required directories always exist. On a normal distribution /home already exists, so systemd-tmpfiles does nothing, but there are cases where you want to set up a standard directory structure, and this is a declarative alternative to scripts with a lot of mkdir, chmod and chown.
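    As a sketch of what that replaces (path, owner and mode here are made up for illustration): instead of shipping an imperative setup script, you ship one declarative line and let systemd-tmpfiles apply it at boot or on demand:

      # imperative script
      mkdir -p /srv/www && chown www-data:www-data /srv/www && chmod 0750 /srv/www

      # declarative tmpfiles.d equivalent
      d /srv/www 0750 www-data www-data -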

    The name systemd-tmpfiles is kind of historic at this point, but wasn’t changed due to backwards compatibility and all.


  • Alright, not that I wrote or implied that anywhere… In fact Java was probably the whole reason Oracle bought Sun, to gain leverage over Android. Which fits very much into what I wrote - one company innovates, another one buys them to squeeze users (Google wasn’t a customer of Sun; they used their own implementation, which wasn’t exactly Java but also not exactly anything else). It’s just that Sun by all means wasn’t a small company - they controlled almost a full stack with their own processors (SPARC), workstations and servers (the Blade line was somewhat famous), an operating system with Solaris (and, if you want to count it, even JavaOS) and Java on top of all that, and they contributed a lot of technology like NFS and ZFS (license discussions aside). On the other hand, when they bought someone, the product wasn’t just milked to death, but actually integrated into their stack and continued to be developed in the open.

    Shame it turned out that way; I guess Sun was a bit overleveraged with how much they did vs. how much they made from it. And to think that Oracle paid less than a fifth of what Twitter later sold for, for all of that technology to go to waste, just for a chance to sue Google… But as long as suits continue to license their stuff because they have cool advertisements at airports, this will keep going.


  • Oracle was never really innovative on a technical level; it’s first and foremost a company focused on selling licenses, and they’re really innovative in that regard. But if you fall for that as a company, I have no pity - this is their whole schtick.

    Big companies in general are often rather conservative in nature, while innovation happens at a smaller scale and expands later.

    The big problem is rather that a lot of innovation has been absorbed by the big companies via buyouts, especially when money was cheap to borrow. Innovation bears risk, buying an established solution and milking existing users much less so.

    I don’t think the users are without blame either. A lot of people ignore the red flags when a solution is just convenient enough (“we need the commercial support” / “this exactly covers our use case so we don’t have to hire someone to adapt it” / …), and the vendor then cashes out once moving away from its solution would be really expensive.

    I think there’s still a lot of innovation lately, but a lot of people are just looking for the next big thing that does everything, it feels like.


  • If you actually try to understand what’s happening, I think it’s one of the best ways to learn how a system is composed, at least if you install manually. What a partition is, what a file system is, what mounting does, chroots, you name it.
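    The skeleton of a manual install makes that pretty concrete (heavily abbreviated and from memory; device names and package selection are just placeholders, the wiki is the real reference):

      fdisk /dev/sda                            # partition the disk
      mkfs.ext4 /dev/sda1                       # put a file system on a partition
      mount /dev/sda1 /mnt                      # mount it somewhere
      pacstrap /mnt base linux linux-firmware   # install the base system into it
      genfstab -U /mnt >> /mnt/etc/fstab        # record the mounts
      arch-chroot /mnt                          # chroot in to set up bootloader, users, ...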

    I don’t use Arch anymore, but I still think it’s a great distro to learn the basics while having the luxury of fresh binary packages. A manual Arch install abstracts basically nothing away from you, for better or for worse.

    I’m currently on NixOS; I’d say that while its engineering is better overall, the things you learn there are much more distribution- or concept-specific and often not applicable to other distributions.

    I guess there are probably also ways to install e.g. Debian manually, though I’ve never seen instructions for it, as the focus was always on the installer, and frankly I’m not a big fan of apt and all. It always seemed much more convoluted than pacman, plus it does a lot of stuff for you whether you want it or not, was my impression.