• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: July 6th, 2023



  • I actually disagree. I only know a little of CrowdStrike’s internals, but they’re a company that’s trying to do the whole DevOps/agile bullshit the right way. Unfortunately they’ve undermined the practice for the rest of us working for dinosaurs trying to catch up.

    CrowdStrike’s problem wasn’t a quality escape; that’ll always happen eventually. Their problem was with their rollout processes.

    There shouldn’t have been a circumstance where the same code got delivered worldwide in the course of a day. If you were sane you’d canary it first and exponentially increase the rollout from there (roughly the kind of staged rollout sketched at the end of this comment). Any initial error should have meant a halt in further deployments.

    Canary isn’t the only way to solve it, by the way. Just an easy fix in this case.

    Unfortunately what is likely to happen is that they’ll find the poor engineer that made the commit that led to this and fire them as a scapegoat, instead of inspecting the culture and processes that allowed it to happen and fixing those.

    People fuck up and make mistakes. If you don’t expect that in your business you’re doing it wrong. This is not to say you shouldn’t trust people; if they work at your company you should assume they are competent and have good intent. The guard rails are there to prevent mistakes, not bad/incompetent actors. It just so happens they often catch the latter.
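
    For illustration, here’s a minimal sketch of the staged rollout described above; the wave sizes, error budget, and the deploy_to()/error_rate() hooks are hypothetical stand-ins for whatever your own tooling provides, not anything CrowdStrike actually runs.

```python
import sys

# Hypothetical staged (canary) rollout: each wave covers a larger slice of
# the fleet, and any wave that blows the error budget halts everything after it.
WAVES = [0.01, 0.02, 0.05, 0.10, 0.25, 0.50, 1.00]  # fraction of the fleet per wave
ERROR_BUDGET = 0.001  # halt if more than 0.1% of updated hosts report errors

def deploy_to(fraction: float) -> None:
    """Placeholder: push the update to this fraction of the fleet."""
    print(f"deploying to {fraction:.0%} of hosts")

def error_rate() -> float:
    """Placeholder: query telemetry for the error rate in the updated cohort."""
    return 0.0

def rollout() -> None:
    for fraction in WAVES:
        deploy_to(fraction)
        if error_rate() > ERROR_BUDGET:
            print("error budget exceeded, halting rollout", file=sys.stderr)
            return  # no wave beyond this one ever receives the update
    print("rollout complete")

if __name__ == "__main__":
    rollout()
```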


  • I agree with most of these, but there’s another benefit missing: a lot of the time my colleagues will be iterating on a PR, so commits like “fuck, that didn’t work, maybe this” are common.

    I like meaningful commit messages. IMO “fixed the thing” is never good enough; I want to know your intent when I’m doing a blame in 18 months’ time. However, I don’t expect anyone’s in-progress work to be tidy before it hits main. You don’t want those throwaway messages in the final merge, but a squash or rebase is an easy way to rectify that.
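
    For anyone unfamiliar with that workflow, one common way to do the cleanup is an interactive rebase before merging; the depth (HEAD~4) below is just an example, not something from the parent comment.

```sh
# collapse the last few work-in-progress commits into one tidy commit
git rebase -i HEAD~4          # mark the follow-up commits as "squash" or "fixup"
git push --force-with-lease   # update the PR branch with the rewritten history
```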



  • Honestly, these days I have no idea. When I said “wouldn’t recommend”, that wasn’t an assertion to avoid them, just a lack of opinion. Most of my recent experience is with cloud vendors, where the problem domain is quite different.

    I’ve had experience with most of the big vendors and they’ve all had quirks etc. that you just have to deal with. Fundamentally it’ll come down to a combination of price, support requirements, and internal competence with the kit. (Don’t underestimate the last item; it’s far better if you can fix problems yourself.)

    Personally I’d argue that most corporates could get by with a GNU/Linux VM (or two) for most of their routing and firewalling, and it would absolutely be good enough; functionally you can do the same and more. That’s not to say dedicated machines for the task aren’t valuable, but I’d say it’s the exception rather than the rule that you need ASICs and the like.
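
    As a rough illustration of that point, a stateful default-deny forward chain in nftables (plus net.ipv4.ip_forward=1 / net.ipv6.conf.all.forwarding=1 via sysctl) is most of a basic edge firewall; the lan0/wan0 interface names here are placeholders.

```
# /etc/nftables.conf — sketch only: drop forwarded traffic by default,
# allow LAN-to-WAN plus established/related return traffic
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        iifname "lan0" oifname "wan0" accept
    }
}
```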






  • I can potentially see that scenario if your transit provider is giving you a dynamic prefix, but I’ve never seen that in practice. The address space is so enormous that there’s no reason to.

    Otherwise, with either radvd (for SLAAC) or DHCPv6, the local routers should still be able to handle the traffic.

    My home internal network (IPv6, SLAAC) with all publicly routable addresses doesn’t break if I unplug my modem.
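
    For context, SLAAC here just means the router advertises the prefix and hosts derive their own addresses from it; a minimal radvd.conf looks something like the sketch below (the documentation prefix and the lan0 interface name are placeholders).

```
# /etc/radvd.conf — advertise one /64 on the LAN so hosts self-configure via SLAAC
interface lan0 {
    AdvSendAdvert on;
    prefix 2001:db8:0:1::/64 {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```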








  • I used Netscape “back in the day”. After some interim transition attempts, including the likes of Opera, I eventually switched to Chrome because it was genuinely faster and more featureful.

    I was a happy Chrome user until they decided to deprecate Manifest V2 and fuck up my ad blocker, at which point I switched to Firefox and haven’t looked back.

    Everything in this industry is circular, I guess.