• 2 Posts
  • 45 Comments
Joined 3 years ago
Cake day: November 29th, 2021


  • I’m curious about this. The source text of your comment suggests it was just the URL with no markdown. For your claim about a markdown parsing bug to be true, shouldn’t the URL have been written with markdown []() notation (or with a space between the URL and the period), since a period is a valid URL character? For example, instead of typing https://google.github.io/styleguide/cppguide.html., should [https://google.github.io/styleguide/cppguide.html.](https://google.github.io/styleguide/cppguide.html) have been typed?
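    Many autolinkers deliberately treat trailing punctuation as sentence punctuation rather than as part of the URL, even though a period is technically valid there. A rough sketch of that heuristic (illustrative only, not any particular parser’s actual rule):

```python
import re

def autolink(text):
    # Find bare URLs, then peel off trailing punctuation that is far more
    # likely to be sentence punctuation than part of the URL itself.
    def repl(m):
        url = m.group(0).rstrip(".,;:!?")
        trailer = m.group(0)[len(url):]
        return f"[{url}]({url}){trailer}"
    return re.sub(r"https?://\S+", repl, text)

print(autolink("See https://google.github.io/styleguide/cppguide.html."))
# -> See [https://google.github.io/styleguide/cppguide.html](https://google.github.io/styleguide/cppguide.html).
```

    This is why a bare URL ending a sentence usually links correctly anyway: the parser guesses that the final period belongs to the sentence, not the URL.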


  • Yes, I am using PersistentVolumes. I have played around with different tools that have backup/snapshot abilities, but I haven’t seen a way to integrate that functionality with a CD tool. If I spent enough time working through things, I might be able to put together something that allows the CD tool to take a snapshot. However, I think that having it handle rollbacks would be a bit too much for me to handle without assistance.
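    If the storage backing the PersistentVolumes uses a CSI driver with snapshot support, a CD tool can take a snapshot declaratively just by applying a manifest. A sketch, assuming the external-snapshotter CRDs are installed and with the class and claim names as placeholders:

```yaml
# Hypothetical pre-upgrade snapshot of a PVC named "app-data".
# Requires a CSI driver with snapshot support and its VolumeSnapshotClass.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-pre-upgrade
spec:
  volumeSnapshotClassName: csi-snapclass   # placeholder class name
  source:
    persistentVolumeClaimName: app-data    # placeholder PVC name
```

    With ArgoCD specifically, annotating a resource like this with `argocd.argoproj.io/hook: PreSync` makes it apply before each sync, giving a restore point per upgrade; the rollback itself would still be a manual restore from the snapshot.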


  • Thanks for the reply! I am currently looking to do this for a Kubernetes cluster running various services to more reliably (and frequently) perform upgrades with automated rollbacks when necessary. At some point in the future, it may include services I am developing, but at the moment that is not the intended use case.

    I am not currently familiar enough with the CI/CD pipeline (currently Renovatebot and ArgoCD) to reliably accomplish automated rollbacks, but I believe I can get everything working with the exception of rolling back a data backup (especially for upgrades that contain backwards-incompatible database changes). In terms of storage, I am open to using various selfhosted services/platforms even if it means drastically changing the setup (e.g., moving from TrueNAS to Longhorn, from Ceph to Proxmox, etc.) if it means I can accomplish this without a noticeable performance degradation to any of the services.

    I understand that it can be challenging (or maybe impossible) to reliably generate backups while the services are running. I also understand that the best way to do this for databases would be to stop the service and perform a database dump. However, I’m not too concerned with losing <10 seconds of data (or however long the backup jobs take) if the backups can be performed in a way that does not result in corrupted data. Realistically, the most common use cases for the rollbacks would be invalid Kubernetes resources/application configuration as a result of the upgrade or the removal/change of a feature that I depend on.
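    For the database case specifically, one pattern is an ArgoCD PreSync hook Job that dumps the database before each upgrade sync. A sketch, with the host, database, user, image, and PVC names all as placeholders, and authentication (e.g. PGPASSWORD from a Secret) omitted for brevity:

```yaml
# Hypothetical PreSync hook: runs a pg_dump before each ArgoCD sync.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: db-backup-
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pg-dump
          image: postgres:16
          command: ["sh", "-c", "pg_dump -h db -U app appdb > /backup/pre-upgrade.sql"]
          volumeMounts:
            - name: backup
              mountPath: /backup
      volumes:
        - name: backup
          persistentVolumeClaim:
            claimName: backup-pvc
```

    A pg_dump taken while the service is running still produces a consistent point-in-time dump (Postgres guarantees this via MVCC), which fits the "losing a few seconds of data is acceptable" constraint above.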





  • Everything I mentioned works for LAN services as long as you have a domain name. You shouldn’t even need to point the domain name to any IP addresses to get it working. As long as you use a domain registrar that respects your privacy appropriately, you should be able to set things up with a good amount of privacy.

    Yes, you can do wildcard certificates through Let’s Encrypt. If you use one of the reverse proxies I mentioned, the reverse proxy will create the wildcard certificates and maintain them for you. However, you will likely need to use a DNS challenge. Doing so isn’t necessarily difficult. You will likely need to generate an API key or something similar at the domain registrar or DNS service you’re using. The process will likely vary depending on what DNS service/company you are using.
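    With Traefik, for example, the whole thing reduces to a certificate resolver in the static configuration plus an API token from your DNS provider (Cloudflare here purely as an illustration; the provider name and token variable vary by service):

```yaml
# traefik.yml (static config) - a sketch, not a complete configuration
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com          # placeholder
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare          # use your DNS provider's name here
        # Traefik reads the provider's API token from an environment
        # variable, e.g. CF_DNS_API_TOKEN for Cloudflare.
```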


  • Congrats on getting everything working - it looks great!

    One piece of (unprovoked, potentially unwanted) advice is to set up SSL. I know you’re running your services behind WireGuard, so there isn’t too much of a security concern in running them over HTTP. However, as the number of your services or users (family, friends, etc.) increases, you’re more likely to run into issues with services not running on HTTPS.

    The creation and renewal of SSL certificates can be done for free (assuming you already have a domain name) and automatically with certain reverse proxy services like NGINXProxyManager or Traefik, both of which can run in Docker. If you set everything up with a wildcard certificate via a DNS challenge, you can still keep the services you run hidden from people scanning DNS records on your domain (i.e., people won’t know that an SSL certificate was issued for immich.your.domain). How you set up the DNS challenge will vary by DNS provider and reverse proxy, but regardless of which services you use, the only additional thing you will likely need for a wildcard certificate is an email address (again, assuming you have a domain name).
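    As a concrete sketch of the wildcard setup with Traefik (the domain, router name, and resolver name are placeholders; NGINXProxyManager exposes the same idea through its UI):

```yaml
# docker-compose labels on a service - sketch only
labels:
  - "traefik.http.routers.immich.rule=Host(`immich.your.domain`)"
  - "traefik.http.routers.immich.tls.certresolver=letsencrypt"
  # Request one wildcard certificate so individual subdomains never
  # appear in certificate transparency logs:
  - "traefik.http.routers.immich.tls.domains[0].main=your.domain"
  - "traefik.http.routers.immich.tls.domains[0].sans=*.your.domain"
```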




  • the only sites I give permenant cookie exception are my selfhosted services

    This is what I was referring to. How are you accomplishing this?

    I’m still looking for the switches to block all new requests asking to access microphone, location, notification

    I can’t help with this at the moment, but if you’re still struggling with it, I can provide the lines required to disable these items. However, I don’t know how to do this with exceptions (i.e., allowing your self-hosted sites to use that functionality while blocking all other sites). At minimum, though, you could require Firefox to ask you every time a site wants to use something. This may get repetitive for things like your self-hosted sites if you have everything cleared when you exit Firefox.
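    For reference, the global switches live in user.js/about:config as `permissions.default.*` prefs (0 = always ask, 2 = block). I believe these are the relevant ones, though double-check them against the arkenfox wiki:

```js
// Block all new permission requests globally (0 = always ask, 2 = block).
user_pref("permissions.default.geo", 2);                  // location
user_pref("permissions.default.desktop-notification", 2); // notifications
user_pref("permissions.default.microphone", 2);           // microphone
user_pref("permissions.default.camera", 2);               // camera
```

    Per-site exceptions on top of these would have to be granted manually, and they only survive if the site is excluded from cookie/site-data clearing.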


  • Didn’t look at the repo thoroughly, but I can appreciate the work that went into this.

    • Is there any reason you went this route instead of just using a user-overrides.js file for the standard arkenfox user.js file?
    • Does the automatic dark theme require enabling any fingerprintable settings (beyond just possibly determining the theme of the OS/browser)?
    • How are you handling exceptions for sites? I assumed it would be in the user.js file, but didn’t notice anything in particular handling specific URLs differently.



  • rhymepurple@lemmy.ml to Privacy@lemmy.ml · How good/bad is Firefox sync.

    I’m still not sure what point you are trying to make. Your initial claim was:

    Although Mozilla encrypts the synced data, the necessary account data is shared and used by Google to track those.

    @utopiah@lemmy.ml asked:

    Are you saying Firefox shares data to Alphabet beyond Google as the default search engine? If so and if it applies to Sync (as if the question from OP here) can you please share sources for that?

    You stated:

    Mozilla does, sharing your account data

    You also provided evidence that Mozilla uses Google Analytics trackers on Firefox’s product information website. I mentioned that this is not sufficient evidence for your claim, as the trackers are independent of Firefox the browser and of Sync. Additionally, the use of trackers for websites is clearly identified in Mozilla’s Privacy Policies, and there is not much else mentioned in those policies beyond the trackers and Google’s geolocation services in Firefox.

    You’ve also mentioned Google’s contract with Mozilla, which is controversial for many people, but isn’t evidence of Mozilla providing user data to Google even in conjunction with the previously mentioned trackers. You then discussed various other browsers, but I’m not sure how that is relevant to your initial claim.

    While it seems we can both agree that Mozilla and its products are far from perfect, it looks like your initial claim was baseless, as you have yet to provide any evidence for it. Do you have any evidence, through things like code reviews or packet inspections of Firefox or Sync, that hints Mozilla is sharing additional information with Google? At this point, I would even accept users providing evidence of some weird behavior, like the recent issue where google.com wouldn’t load in Firefox on Android, if someone could find a way to connect that behavior to Mozilla sharing data with Google.


  • I don’t understand what point you are trying to make. Mozilla has several privacy policies that cover its various products and services which all seem to follow Mozilla’s Privacy Principles and Mozilla’s overarching Privacy Policy. Mozilla also has documentation regarding data collection.

    The analytics trackers that you mentioned would fall under Mozilla’s Websites Privacy Policy, which does state that it uses Google Analytics and can be easily verified a number of ways such as the services you previously listed.

    However, Firefox Sync uses https://accounts.firefox.com/ which has its own Privacy Policy. There is some confusion around “Firefox Accounts” as it was rebranded to “Mozilla Accounts”, which again has its own Privacy Policy. There is no indication that data covered by those policies is shared with Google. If Google Analytics trackers on Mozilla’s website are still a concern for these services, you can verify that the Firefox Accounts and Mozilla Accounts URLs do not contain any Google Analytics trackers.

    Firefox has a Privacy Policy as well. Firefox’s Privacy Policy has sections for both Mozilla Accounts and Sync. Neither of which indicate that data is shared with Google. Additionally, the data stored via the Sync service is encrypted. However, there is some telemetry data that Mozilla collects regarding Sync and more information about it can be found on Mozilla’s documentation about telemetry for Sync.

    The only thing that I could find about Firefox, Sync, or Firefox Accounts/Mozilla Accounts sharing data with Google was for location services within Firefox. While it would be nice for Firefox not to use Google’s geolocation services, it is a reasonable concession and can be disabled.

    Mozilla is most definitely not a perfect company, even when it comes to privacy. Even Firefox has been caught with some privacy issues relatively recently with the unique installation ID.

    Again, I’m not saying that Mozilla is doing nothing wrong. I am saying that your “evidence” that Mozilla is sharing Firefox, Sync, or Firefox Accounts/Mozilla Accounts data with Google because of Google Analytics trackers on some of Mozilla’s websites is coincidental at best. Without additional evidence, it is misleading or flat out wrong.





  • Alerts, notifications, person recognition, object recognition, motion detection, two way audio, automated lights, event based video storage, 24/7 video storage, automated deletion of stale recorded video, and more can all be accomplished 100% locally.

    Granted, much of this functionality is not easily accomplished without some technical knowledge and additional hardware. However, these posts are typically made by people who state that they at least have an interest in making that a reality (as this one does).
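    As one concrete self-hosted example, something like Frigate can provide the detection and retention pieces from a single RTSP feed, entirely locally. A sketch, with the camera address and retention period as placeholders:

```yaml
# Frigate config sketch - camera URL and retention values are placeholders
cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://192.168.1.50:554/stream
          roles:
            - detect
            - record
    detect:
      enabled: true
    record:
      enabled: true
      retain:
        days: 7        # automatic deletion of stale footage
```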

    What security benefits does a cloud service provide?


  • Your options will depend on how much effort you are willing to put in and what other services you have access to (or are willing to run).

    For example, do you have a Network Video Recorder (NVR) or something like Home Assistant that can consume a Real-Time Messaging Protocol (RTMP) or Real Time Streaming Protocol (RTSP) video feed? Can you modify your network to block all internet traffic to/from the doorbell? Are you comfortable using a closed source, proprietary app to setup the doorbell? Is creating your own doorbell feasible?
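    For instance, if the doorbell exposes RTSP, Home Assistant can consume it with the generic camera integration (the URLs and credentials below are placeholders), and a firewall rule on the router can then cut the doorbell off from the internet entirely:

```yaml
# Home Assistant configuration.yaml sketch - URLs/credentials are placeholders
camera:
  - platform: generic
    name: doorbell
    still_image_url: http://192.168.1.50/snapshot.jpg
    stream_source: rtsp://user:pass@192.168.1.50:554/stream
```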

    I’m not aware of any doorbell you can buy that meets all of your requirements without at least one of the items I mentioned above. In fact, I believe the only way to meet every requirement is to build your own doorbell. However, some brands that come close are Reolink and Amcrest.