Formerly /u/Zalack on Reddit.
Also Zalack@kbin.social
Yeah, actually moderating an online space with even modest activity is fucking hard and takes a shitton of time.
I think a lot of people underestimate the effort involved and quickly lose interest once it becomes apparent.
That’s a really interesting perspective I don’t think I’ve seen before. Thanks for posting.
Formal licensing could be about things that are language-agnostic: how to properly use tests to guard against regressions, how to handle error states safely.
How do you design programs for critical systems that CANNOT fail, like pacemakers? How do you guard against crashes? What sort of redundancy do you need in your software?
How do you best design error messages to tell an operator how to fix the issue? Especially in critical systems like a plane, how do you guard against that operator doing the wrong thing? I’m thinking of the Dreamliner incidents where the pilots’ natural inclination was to grab the yoke and pull up, which unknowingly fought the autopilot and caused the plane to stall. My understanding is that the error message triggered during those crashes was also extremely opaque and added further confusion in a life-and-death situation.
When do you have an ethical responsibility not to ship code? Just for physical safety? What about Dark Patterns? How do you recognize them and do you have an ethical responsibility to refuse implementation? Should your accreditation as an engineer rely on that refusal, giving you systemic external support when you do so?
None of that is affected by what tech stack you’re using. It all comes down to general logical and ethical reasoning.
Lastly, under certain circumstances, civil engineers can be held personally liable for negligence when their bridge fails and people die. If we are going to call ourselves “engineers”, we should bear the same responsibility. Obviously not every software developer needs to be held to such high standards, but that’s why “software engineer” should mean something.
I know I learned it in high school at one point, but it definitely isn’t something I would have been able to recall on my own.
My experience has often been the opposite. Programmers will do a lot to avoid the ethical implications of their work being used maliciously, and to avoid discussions of what responsibility we bear for how our work gets used and how much effort we should be obligated to put into defending against malicious use.
It’s why I kind of wish that “engineer” was a regulated title in America like it is in other countries, and getting certified as a programming engineer required some amount of training in programming ethics and standards.
In many cases it should be fine to point them all at the same server. You’ll just need to make sure there aren’t any collisions between schema/table names.
I’m not saying there aren’t downsides, just that it isn’t a totally crazy strategy.
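Here’s a rough sketch of what I mean, assuming Postgres and psycopg2; the host, schema names, and credentials are all placeholders, not anything real. Each service connects with its own search_path, so identically named tables can’t collide:

```python
# Hypothetical sketch: several services sharing one Postgres server, each
# confined to its own schema so table names can't collide. All names here
# (host, schemas, credentials) are placeholders.
import psycopg2

SERVICE_SCHEMAS = {
    "blog": "blog",
    "wiki": "wiki",
    "analytics": "analytics",
}

def connect(service: str):
    schema = SERVICE_SCHEMAS[service]
    # search_path makes unqualified table names resolve inside this
    # service's schema only.
    return psycopg2.connect(
        host="db.internal.example",      # the one shared server
        dbname="shared",
        user=service,                    # one role per service also helps isolate them
        password="change-me",            # normally pulled from a secrets store
        options=f"-c search_path={schema}",
    )

with connect("wiki") as conn, conn.cursor() as cur:
    # "pages" resolves to wiki.pages, so a blog.pages table can coexist with it.
    cur.execute("SELECT count(*) FROM pages")
    print(cur.fetchone()[0])
```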
Same. I write FOSS software in my free time and also for pay.
Man, I really think you should either saddle up and pay, stop blocking ads, or use a free, non-ad-supported alternative.
Sync is made by a single dev who uses it as his main source of income. It’s not made by a corporation. Taking the fruits of someone’s labor, that they have priced to make it worth their time, feels kinda shitty to me.
If you really feel it’s so much better than the alternatives that you won’t even use them, then pay what the person making it feels they need to keep making it.
You’re being sarcastic but even small fees immediately weed out a ton of cruft.
Sorry, you’re right that I wasn’t being precise with my terminology. It’s not a DDoS, but it could be used to slow down targeted features, tie up HTTP connections, inflate the target’s DB, and waste CPU cycles, so it shares some characteristics of one.
In general, you want to be very, very careful about implementing features that allow untrusted parties to supply potentially unbounded resources to your server.
And yeah, it would be trivial to write a set of scripts that pretend to be a Lemmy instance and supply an endless number of fake communities to the target server. The nice thing (from the attacker’s perspective) about this vector is that it’s not bound by the normal rate limiting, since it’s the target server making the requests. There are definitely a bunch of ways Lemmy could mitigate such an attack, but the current approach of “list communities current users are subscribed to” seems like a decent first step.
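To make that “bound what untrusted parties can hand you” idea concrete, here’s a rough Python sketch; the limits, function name, and data shapes are invented for illustration and aren’t Lemmy’s actual code:

```python
# Hypothetical sketch of bounding remote-supplied data before it touches the DB.
# One peer can only hand us a capped number of communities, and every field is
# length-limited, so a hostile instance can't inflate storage or CPU unboundedly.
MAX_COMMUNITIES_PER_PEER = 500
MAX_NAME_LEN = 255
MAX_DESCRIPTION_LEN = 10_000

def accept_remote_communities(peer: str, communities: list[dict]) -> list[dict]:
    if len(communities) > MAX_COMMUNITIES_PER_PEER:
        # Refuse outright rather than letting one peer dictate how much work we do.
        raise ValueError(
            f"{peer} sent {len(communities)} communities; limit is {MAX_COMMUNITIES_PER_PEER}"
        )

    accepted = []
    for community in communities:
        name = community.get("name", "")
        description = community.get("description", "")
        if not name or len(name) > MAX_NAME_LEN:
            continue  # drop empty or oversized names instead of storing them
        accepted.append({"name": name, "description": description[:MAX_DESCRIPTION_LEN]})
    return accepted
```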
I like the idea of calling it “Known Network” and “Local”
Federation isn’t opt-in though. It would be VERY easy to spin up a bunch of instances with millions or billions of fake communities and use them to DDoS a server’s search function.
Searching only currently active subscriptions helps mitigate that vector a little.
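Roughly, that mitigation amounts to only searching what local users already follow; here’s a toy sketch of the idea (the data shapes are made up, not Lemmy’s schema):

```python
# Hypothetical sketch of the mitigation above: search only communities that at
# least one local user subscribes to, so a hostile peer can't force the server
# to trawl millions of fabricated remote communities.
def search_known_communities(query, communities, subscriptions):
    # The subscription set is bounded by real local user activity,
    # not by whatever remote instances choose to advertise.
    subscribed_ids = {sub["community_id"] for sub in subscriptions}
    q = query.lower()
    return [
        c for c in communities
        if c["id"] in subscribed_ids and q in c["name"].lower()
    ]
```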
Lol, Texas and Florida are doing a good enough job of knocking themselves down without help from me.
Except in a true free market, zoning laws wouldn’t keep affordable, high-density housing from being constructed in order to artificially boost housing prices.
Other than that I agree with you.
I agree with the other poster that you need to define what you even mean when you say free will. IMO, strict determinism is not incompatible with free will. It only provides the mechanism. I posted this in another thread where this came up:
The implications of quantum mechanics just reframe what it means to not have free will.
In classical physics, given the exact same setup, you make the exact same choice every time.
In quantum mechanics, given the exact same setup, you make the same choice some percentage of the time.
One is you being an automaton, while the other is you being a flipped coin. Neither of those really feels like free will.
Except.
We are looking at this through an implied assumption that the brain is some mechanism, separate from “us”, which we are forced to think “through”. That the mechanisms of the brain are somehow distorting or restricting what the underlying self can do.
But there is no deeper “self”. We are the brain. We are the chemical cascade bouncing around through the neurons. We are the kinetic billiard balls of classical physics and the probability curves of quantum mechanics. It doesn’t matter if the universe is deterministic and we would always have the same response to the same input, or if it’s statistical and we just have a baked-in “likelihood” of that response.
The way we respond, or the biases that inform that likelihood, is still us making a choice, because we are that underlying mechanism. Whether it’s deterministic or not is just an implementation detail of free will, not a counterargument.
And often if you box yourself into an API before you start implementing, it comes out worse.
I always learn a lot about the problem space once I start coding, and use that knowledge to refine the API of my system as I work.
This reminded me of an old joke:
Two economists are walking down the street with their friend when they come across a fresh, steaming pile of dog shit. The first economist jokingly tells the other, “I’ll give you a million dollars if you eat that pile of dog shit.” To his surprise, the second economist grabs it off the ground and eats it without hesitation. A deal is a deal, so the first economist hands over a million dollars.
A few minutes later they come across a second pile of shit. The second economist, wanting to give his peer a taste of his own medicine, says he’ll give the first economist a million dollars if he eats it. The first economist agrees and does so, winning him a million dollars.
Their friend, rather confused, asks what the point of all that was: the first economist gave the second a million dollars, and then the second gave it right back. All they’ve accomplished is eating two piles of shit.
The two economists look rather taken aback. “Well sure,” they say, “but we’ve grown the economy by two million dollars!”
I think it depends on the project. Some projects are the author’s personal tools that they’ve put online on the off chance they’ll be useful to others, not projects they’re really trying to promote.
I don’t think we should expect the authors of those repos to go too far out of their way in those cases, as the alternative would just be not publishing them at all.