Hell, I’ll take someone who wants to be a billionaire, as long as they do it without exploitation. It’s just that that’s nearly impossible to do, since very few people individually create a billion dollars’ worth of value.
Look specifically at their actions, not their words.
It’s a culture where being unkind is particularly unacceptable, not one where you’re not allowed to be honest or forthright.
You’re allowed to not like someone, but telling someone you dislike them is needlessly unkind, so you just politely decline to interact with them.
You’d “hate to intrude”, or “be a bother”. If it’s pushed, you’ll “consider it and let them know”.
Negative things can still be conveyed; they just have to be conveyed in the kindest way possible.
Brian Acton is the only billionaire I can think of who hasn’t been a net negative.
Co-founded WhatsApp, which became popular while employing very few people, and sold the service at a reasonable rate.
Sold the business for a stupidly large sum of money, and generously compensated employees as part of the buyout.
Left the buying company, Facebook, rather than take actions he considered unethical, at great personal expense ($800M).
Proceeded to co-found Signal, an open, privacy-focused messaging system that he has basically bankrolled while it finds financial stability.
He has also been steadily giving away most of his money to charitable causes.
Billionaires are bad because they get that way by exploiting some combination of workers, customers or society.
In the extremely unlikely circumstance where a handful of people make something fairly priced that nearly everybody wants, and then use the wealth for good, there’s nothing intrinsically wrong with being one of those people.
Selling messaging to a few billion people for $1 a lifetime is a way to do that.
Er, SELinux was released nearly a decade before Windows 7, and was integrated into the mainline kernel just a few years later, even before Vista added UAC.
Big difference between “not available” and “often not enabled”.
That might just be a growing-up-near-water thing. I think that on average, Canadians live closer to large bodies of water than Americans do, since more than half of them are within day-trip distance of the Great Lakes waterway, and then there’s Halifax and Vancouver.
Growing up in a place with water, basically everyone I know also has at least a passing knowledge of recreational small watercraft.
Where I live, basically every place name is some combination of “French, Native American, English, or Scandinavian”, “pronounced natively or not”, and “spelled like it’s pronounced or not”.
The fun ones are the English pronunciation of the French transliteration of the native word.
Yes, but that’s the case regardless. My message going through has always depended on someone else’s cell towers, all the random routers and switches in between, and the other person’s device.
My server likely has worse uptime, and if I’m hosting from home, messages probably have more hops to transit through.
I believe their point was that even encrypted messages convey data. So if you have a record of all the encrypted messages, you can still tell who was talking, when they were talking, and approximately how much they said, even if you can’t read the messages.
If you wait until someone is gone and then loudly raid their house, you don’t need to read their messages to guess the content of what they send to people as soon as they find out. Now you know who else you want to target, despite not being able to read a single message.
This type of metadata analysis can reveal a lot about what’s being communicated. It’s why private communication should be ephemeral, so that only what’s directly intercepted can be scrutinized.
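As a minimal sketch of that kind of traffic analysis (made-up capture log, Python just for illustration):

```python
from collections import Counter

# Hypothetical capture log: payloads are unreadable, but every encrypted
# message still leaks sender, recipient, timestamp, and size.
capture = [
    ("alice", "bob",   "02:14", 412),
    ("alice", "carol", "02:15", 398),
    ("bob",   "dave",  "02:21", 1207),
    ("alice", "bob",   "02:22", 433),
]

# Who talks to whom, and how much; no decryption required.
pairs = Counter((src, dst) for src, dst, _, _ in capture)
volume = Counter()
for src, dst, _, size in capture:
    volume[(src, dst)] += size

for (src, dst), n in pairs.most_common():
    print(f"{src} -> {dst}: {n} messages, {volume[(src, dst)]} bytes")
```

If that log covers the hour after the raid, you’ve got your next list of targets without breaking any crypto.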
In this case, however, Janelle Shane is actually quite well aware of how different types of AI work. She writes about them, how they work, and their various limitations.
Her blog just focuses on cases where they act oddly, not how you’d expect, or just plain “funny”.
This is already a thing we need to deal with, security-wise. An application making use of encryption doesn’t know the condition of what it views as RAM, which could very well be transferred to a durable medium due to memory pressure. The same goes for hibernation as opposed to suspension.
Depending on your application and how sensitive it is, there are different steps you can take to deal with stuff like that.
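For example, on Linux one common step is mlock(2), which pins pages so they can’t be swapped to disk (it does nothing for hibernation images, which need their own handling, e.g. encrypted swap). A rough sketch through Python’s ctypes, assuming glibc; the key material here is obviously made up:

```python
import ctypes
import ctypes.util

# Assumes Linux with glibc; mlock(2)/munlock(2) pin and unpin pages in RAM.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]
libc.munlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

secret = ctypes.create_string_buffer(b"hypothetical key material")
addr, size = ctypes.addressof(secret), ctypes.sizeof(secret)

if libc.mlock(addr, size) != 0:
    raise OSError(ctypes.get_errno(), "mlock failed (check RLIMIT_MEMLOCK)")
try:
    pass  # ... use the secret while it's pinned in RAM ...
finally:
    ctypes.memset(addr, 0, size)  # zero the key before releasing the pages
    libc.munlock(addr, size)
```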
In this case the helicopter came because they blocked a major highway.
A helicopter coordinating police movements during civil unrest is pretty standard anyplace that can afford helicopters. That’s definitely not just an American thing.
Do you think France is eschewing using helicopters to coordinate police movements with their current unrest?
Is it? All I saw was a helicopter with decent optics, but nothing particularly special, and cops talking on low-bandwidth radios.
Even when we get to actual behavior, we see the cops starting with the assumption that they’ll just be telling people to leave and planning routes to do so, before it changes to arresting people for blocking a freeway. They make sure people are notified early that they’re under arrest, and they make sure they have adequate transportation before they begin the arrest process.
Like, there’s plenty of scary and shitty things cops do, but this wasn’t one of them.
To me it’s important to ask “what problem is it solving”, “how did we solve that problem in the past”, and “what does it cost”.
Cryptocurrency solves the problem of spending being tracked by a third party. We used to handle this by giving each other paper. The new way involves more time and a stupendous amount of wasted electricity.
NFTs solve the problem of owning a digital asset. We used to solve this by writing down who owned it. The cost is a longer time investment and a stupendous amount of wasted electricity.
Generative AI solves the problem of creative content being hard and expensive to produce. We used to solve this problem by paying people to make things for us, and by not making things if we didn’t have money. The cost is pissing off creatives.
The first two feel like cases where the previous solution wasn’t really bad, and so the cost isn’t worth it.
The generative AI case feels mixed, because pissing off creatives to make more profit feels shitty, but lowering barriers to entry to creativity doesn’t.
Depends on your level of security consciousness. If you’re relying on security identifiers or APIs that need an “intact” system, it certainly can be a security issue if you can’t rely on those.
That being said, it’s not exactly a plausible risk for most people or apps.
Sure, I suppose. Or just don’t expand the system until there’s some kind of system in place to keep the AI cars from fucking around in emergency situations.
Some of the vehicles don’t have anyone in them.
https://missionlocal.org/2023/05/waymo-cruise-fire-department-police-san-francisco/
One of the incidents in question.
Big difference is that a human can be yelled at and told what to do, and we currently don’t have a good way for someone to do that with an autonomous vehicle.
It’s not nearly as nefarious as people seem to think. Effectively all applications that access web resources send along what they are and basic platform information.
This is part of how the application asks for content in a way that it can handle.
It does a little to let you be tracked, but there are other techniques that are far more reliable for that purpose.
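You can watch it happen; httpbin.org just echoes back the User-Agent header it received (sketch using Python’s requests library):

```python
import requests

# httpbin.org/user-agent echoes the User-Agent header the server saw.
print(requests.get("https://httpbin.org/user-agent").json())
# e.g. {'user-agent': 'python-requests/2.31.0'} (version varies)

# The client controls the header entirely, and spoofing it is one line,
# which is why it's such a weak signal compared to cookies or fingerprinting.
spoofed = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
print(requests.get("https://httpbin.org/user-agent", headers=spoofed).json())
```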
I don’t think they work the same way, but I think they work in ways that are close enough in function that they can be treated the same for the purposes of this conversation.
Pen and pencil are “the same”, and either of those and printed paper are “basically the same”.
The relationship between a typical modern AI system and the human mind is like that between a pencil written document and a word document: entirely dissimilar in essentially every way, except for the central issue of the discussion, namely as a means to convey the written word.
Both the human mind and a modern AI take in input data, extract relationships and correlations from that data, and store those patterns intermingled with other data.
Some data is stored with a lot of weight, which is why I can quote a movie at you, and the AI can produce a watermark: they’ve been used as inputs a lot. Likewise, the AI can’t perfectly recreate those watermarks and I can’t tell you every detail from the scene: only the important bits are extracted. Less important details are too intermingled with data from other sources to be extracted with high fidelity.
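You can see the weighting effect with even a toy model; here’s a trivial bigram counter in Python standing in for a real network (an analogy for the analogy, nothing more):

```python
from collections import Counter, defaultdict

# Toy corpus: one movie quote repeated constantly, one detail seen once.
words = (("all work and no play " * 100) + "all the rare detail").split()

# "Training": count which word follows which, i.e. crude weighted patterns.
follows = defaultdict(Counter)
for a, b in zip(words, words[1:]):
    follows[a][b] += 1

def recite(start, n=6):
    out = [start]
    for _ in range(n):
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(recite("all"))   # "all work and no play all work": the heavy pattern wins
print(follows["all"])  # Counter({'work': 100, 'the': 1})
# The one-off detail is drowned out: starting from the shared word "all",
# the model can no longer recover "all the rare detail" with any fidelity.
```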
It changes the torque and the application of said torque for each bolt. As in “the tool head has 5° of give until in place, then it ramps torque to 5 N·m over half a second, holds for 1 second, and then ramps to zero over 0.1 seconds”, and then something different for the next bolt. Then it logs that it did this for each bolt.
The tool can also be used to measure and correct the bolts as part of an inspection phase, and log the results of that inspection.
Finally, it tracks usage of the tool and can log that it needs maintenance or isn’t working correctly even if it’s just a subtle failure.
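For a sense of what that looks like from the controller side, here’s a hypothetical sketch (all names and numbers invented, Python just for illustration):

```python
from dataclasses import dataclass

@dataclass
class BoltProfile:
    """Hypothetical per-bolt fastening recipe for a smart driver."""
    give_deg: float     # degrees of free rotation until seated
    target_nm: float    # torque to ramp up to, in newton-metres
    ramp_up_s: float    # seconds to reach target torque
    hold_s: float       # seconds to hold at target
    ramp_down_s: float  # seconds to release back to zero

# Each bolt gets its own profile; fastening, inspection, and tool-health
# events all get logged against the bolt they touched.
PROFILES = {
    "bolt_1": BoltProfile(5.0, 5.0, 0.5, 1.0, 0.1),
    "bolt_2": BoltProfile(3.0, 8.0, 0.3, 0.5, 0.2),
}

def fasten(bolt_id: str) -> dict:
    p = PROFILES[bolt_id]
    # A real controller would drive the tool here; we just emit the log record.
    return {"bolt": bolt_id, "target_nm": p.target_nm, "phase": "fasten", "result": "ok"}

for bolt in PROFILES:
    print(fasten(bolt))
```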