I guess that wouldn’t help the deaf people though. (:
Most data can be de-anonymized with some clever tricks. I don’t know about Mozilla but the others definitely try to keep it just anonymous enough to later be correlated with the rest of your profile.
Edit: typos
No, self-hosted isn’t on the radar. By “big”, they mean the centralized giants, i.e. Meta, Google, Telegram, Signal(?), etc.
Forcing all the big platforms to share their encrypted data. Banning end-to-end encryption. It’s all very stupid and will never actually catch any bad guys.
They are all ML. I don’t know how to convince you of this so I give up. Bye. I have a Master’s degree in Machine Learning, btw.
Yeah, this bubble is mostly LLMs, but also deepfakes and other generative image algorithms. They are all ML. LLMs have some fame because people can’t seem to realise that they’re crap. They definitely passed the Turing test while still being pretty much useless.
There are many other useless ML algorithms. Just because you don’t like something doesn’t mean it doesn’t belong. ML has some good stuff and some bad stuff. The statement “ML works” doesn’t mean anything. It’s like saying “math works”.
There have been many AI bubbles in the past, as well as slumps. Look up the term “AI winter”. Most AI algorithms turn out not to really work except for a few niche applications. You are probably referring to those few when you say “ML works”. Most AI projects fail, but some prevail. This goes for all tech though. So… tech works.
What Microsoft is doing is casting a wide net to see if they hit one of the few actually good applications for LLMs. Most of them will fail, but there might be one or two really successful products. Good for them to have that kind of capital to just haphazardly try new features everywhere.
Companies have always simplified smart things and called it AI. AI is hotter than ever now, not only LLMs.
And again: ML is a subset of AI, and LLMs are a subset of ML. With these definitions, everything is AI. Look up the definition of AI. It’s just a collection of techniques to make computers do “smarter” things. It includes all of the above, e.g. “if this then that” but also more advanced mathematics, like statistical methods and ML. An LLM is one of those statistical models.
Let’s instead make an honest attempt to de-poison the term, rather than just giving in. It is indeed like saying “all math is bad” because math can be used in bad ways.
You seem to be arguing against another strawman. OP didn’t say they only dislike LLMs; the sub is even called “Fuck AI”. And this thread is talking about AI in general.
Machine Learning is a subset of AI and always has been. Also, LLMs are a subset of Machine Learning. You are trying to split hairs, or at least pull a “no true Scotsman” on the above post.
Maybe competitors are only up to 200 per year and these guys finally achieved 300 per year?
This will lead to change fatigue. People will stop cleaning up as they go and just get the work done, with worse and worse code quality as a result.
Bank holidays would be really awkward. You start work at 23:00 and the next day is off, so you would just have to work that one hour.
Office workers could probably move hours around. It would get complicated for shift workers though. Paying overtime for work on holidays?
That’s not true at all, mathematically. That’s why we have measures of covariance and correlation. If two dimensions are 100% correlated, they can most definitely be reduced to one.
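For illustration, a minimal sketch (assuming numpy and scikit-learn are available, with made-up data): two perfectly correlated columns collapse into a single principal component.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x = rng.normal(size=100)
# Second column is a linear function of the first, so correlation is exactly 1
X = np.column_stack([x, 3 * x + 2])

print(np.corrcoef(X, rowvar=False)[0, 1])  # ~1.0

pca = PCA().fit(X)
# One component explains all the variance; the second is redundant
print(pca.explained_variance_ratio_)  # ~[1.0, 0.0]
```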
To be fair, most of these apps were made before notification categories were invented, and the companies don’t keep the consultants that made the initial app, or don’t want to pay for the change.
Agreed, but someone actually tried it - did the research.
That’s just what we call people spending some time to figure something out. Security research is basically just trying to learn the technology and then trying to break it.
No thanks. It’s way more fun to be part of the decision process. If a manager can anticipate all of the requirements and quirks of the project before it even starts, it’s probably going to be a really boring, vanilla project, at which point it’s probably just better to buy the software somewhere else.
Creating something new is an art in itself. Why would you not want to be a part of that?
Also: isn’t it cheating to compare the two approaches when one of them is defined as having all the planning “outside” of the project scope? I would bet that the statistics in this report disregard all those projects that died in the planning phase, leaving only the almost completed, easy projects to succeed at a high rate.
It would be interesting to also compare the time/resources spent before each project died. My hunch is that for failed agile projects, less total investment has been made before killing them off, compared to front-loading all of that project planning before the decision is made not to continue.
Complementary to this, I also think that Agile can have a tendency to keep alive projects that should have failed at the planning stage. “We do these things not because they are easy, but because we thought they would be easy.” Underestimating happens for all projects, but with Agile there should be a higher tendency to keep going because “we’re almost done”, forever.
Plus, the news of this would already be priced into the stock, so if anything the price is already low and these companies would need to pivot their business (which would increase the value again) or die (which would lower the price marginally, to zero). Either way, shorting is a bad strategy in this case.
That’s the usual open source way. The config probably came later so they just added the option without changing the default because that would break backward compatibility.
And there would be too much boring work to build a migration.