• 3 Posts
  • 186 Comments
Joined 1 year ago
Cake day: November 24th, 2023


  • It’s quite simple: just remove the permalink field! If you are calculating it, there’s no need to store it in the struct.

    If you do need the field to be there (e.g. you serialise it with serde), then create a method called new that takes everything except the permalink, construct the permalink there, and return the new object.

    Your permalink method can then just return self.permalink.to_string()

    P.S. in the second case I’d recommend changing the return type of self.permalink() to &str. It avoids unnecessary cloning of the string. (Rough sketch below.)
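    Something like this is what I mean. It’s only a sketch: Post, base_url and slug are made-up names standing in for whatever your actual fields are, and it assumes serde with the derive feature enabled.

    ```rust
    use serde::Serialize;

    #[derive(Serialize)]
    struct Post {
        base_url: String,
        slug: String,
        // Stored so serde can serialise it; never set directly by callers.
        permalink: String,
    }

    impl Post {
        // Takes everything except the permalink and derives it once.
        fn new(base_url: String, slug: String) -> Self {
            let permalink = format!("{base_url}/{slug}");
            Self { base_url, slug, permalink }
        }

        // Returning &str avoids cloning the stored String on every call.
        fn permalink(&self) -> &str {
            &self.permalink
        }
    }

    fn main() {
        let post = Post::new("https://example.com".to_string(), "hello-world".to_string());
        assert_eq!(post.permalink(), "https://example.com/hello-world");
    }
    ```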









  • I’d say you might have had a snapshot still holding the deleted data when you first deleted the cache. I don’t use Timeshift for my backups, but I’d assume it uses the same kind of incremental snapshot as btrbk, which means that, until the next backup date, it will hold onto the previous state of the system, preventing the file from being truly deleted.

    You may also have some balance issues, with far more metadata allocated than needed. Try running a balance and see if it changes anything.



  • There’s a difference between helping people who misunderstand a tool and belittling them for being wrong. It’s just a matter of wording that separates a helpful answer from a toxic one.

    I could tell you “You should actually use Y instead of X. There are numerous benefits like A, B and C. The docs actually have a great example you may have missed, or not understood was for this purpose. It will help you a lot more than what you are thinking of doing.” And this would be fine.

    But “Just use Y. X is bad because Y is made for that. You not willing to use Y shouldn’t make you do X. It’s even the first Google link on how to do it” isn’t fine.

    And I have not belittled them at all. I have said that it wasn’t what I was looking for. A lot of times people post questions they think should solve their issue, only to realise that they didn’t fully understand the full picture and their problem is on a larger scale.





  • RustyNova@lemmy.world (OP) to Selfhosted@lemmy.world · “Restart an OOM killed docker automatically” · edited 4 months ago

    Alright, sorry for calling it a “bandaid fix”. It just wasn’t the right term for what I wanted to say. I was referring more to how it would only fix issues in the case of builds, and not at actual runtime, which can also be a problem if I am not careful. So yeah, it’s the fix for the issue in the post, but this solution made me realise that it isn’t the only thing I want.

    But the second part is… Just chill. It’s a home server, not a high availability cluster. I can afford stupid things. Heck, I’m only asking this question because I did something stupid and didn’t limit the job count of a cargo build, downing my server. I don’t care that my build crashes. I just want to not have to manually restart it, because when I’m not here I can’t do it.

    As for the link that you sent, those are container limitations, not image building limitations. And I have already set some up on my hungriest container; stats showed that it blew past them, so idk what’s going on there.

    Edit: NVM. This is a bandaid fix. What if you forget to put the flag? Like it’s been 5 months since the last time and you forget to apply the same fix? Or you accidentally remove it while editing the command? I’m actually looking for a solution that fixes my problem fully, not a partial solution.