  • You can see their strategy at work here.

    It is possible to keep individual files on the local hard drive with per-file settings (settings that, in my experience, never seem to stick past updates).

    The default, though, is to take everything on your computer off of your computer, put it into the cloud (their computer), and recommend you pick and choose which ones stay on your computer. In essence, they want you to think of your computer as secondary to their computer. An extension of it.

    There is no “your computer”, it’s just the computer you happen to be logged into at the moment.

    The cloud is not something you take advantage of, the cloud is where you live now.



  • At least that can be turned off in updates.

    All these hardware additions, the AI buttons, even Windows taking away the right CTRL key for Copilot, are ugly appendages that, in 20 years, when we’re clearing out the equipment closet, we’ll see some of these and go “oh yeah, remember when that bullshit was a thing for a few years?”


  • I’ve seen people defend using AI this way by comparing it to using a calculator in a math class, i.e. if the technology knows it, I don’t need to.

    And I feel like, for the kind of people whose grasp of technology, knowledge, and education is so juvenile that they would believe such a thing, AI isn’t making them dumber. They were already dumb. What the AI does is make code they don’t understand more accessible, which is to say, it’s just enabling dumb people to be more dangerous while instilling them with an unearned confidence that only compounds the danger.



  • So it’s helpful for saving time typing some stuff

    Legitimately, this is the only use I’ve found for it. If I need something extremely simple and I’m feeling too lazy to type it all out, it’ll do the bulk of it, and then I just go through and edit out all the little mistakes.

    And what gets me is that anytime I read all of the AI wank about how people are using these things, it kind of just feels like they’re leaving out the part where they have to edit the output too.

    At the end of the day, we’ve had this technology for a while; it’s just been in the form of predictive suggestions in a keyboard app or code editor. You still had to steer it in the right direction. Now it’s just smart enough to make it from start to finish without going off a cliff, but you still have to go back and fix it, the same way you had to steer it before.


  • Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing for emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India. Listen, I would just be some random dude in India if I swapped places with some of my cousins, so I’m going to choose to take that personally and point out that using the word AI as some roundabout way to sell the labor of people that look like me to foreign governments is fucked up, that you’re an unethical monster, and that if you continue to try { thisBullshit(); } you are going to catch (theseHands)

    This aspect of it isn’t getting talked about enough. These companies are presenting these things as fully-formed AI while completely neglecting to mention the people behind the scenes constantly cleaning it up so it doesn’t devolve into chaos. All of the shortcomings and failures of this technology are being masked by the fact that there are actual people working around the clock pruning and curating it.

    You know, humans, with actual human intelligence, without which these miraculous “artificial intelligence” tools would not work as they seem to.

    If the “AI” needs a human support team to keep it “intelligent”, it’s less AI and more a really fancy kind of puppet.


  • Because the investors/stockholders in the tech industry started tightening the belt and demanding profitability from these huge tech companies. What’s happening at Google is happening everywhere: the avenues for extracting more profit from their apps or services are being scoured and taken advantage of. Prices going up, advertising increasing, free features removed, etc. Different strategies all around, but the pattern is clear.

    YouTube has never been profitable, but Google was OK with letting the profits from its other divisions subsidize YouTube’s losses so it could remain free. They did that to choke the market; no other company could handle the sheer scale of it while offering it for free. As long as Google ran YouTube for free with relatively few ads, no competition could ever possibly come to exist.

    But because the shareholders are demanding profit now, and because Google itself is struggling on multiple fronts, the time to force YouTube into a profitable enterprise has come at last.

    And this is what it looks like.

    As for risking competition, at this point, I don’t think they care anymore. Competition in the web service and software space seems to be a thing of the past. Users are intransigent, algorithms favor the oldest and most popular services, and content creators seem to be incapable of separating themselves from their abusive platforms.

    I also have a theory that Google is using YouTube as a way of rallying all platforms and services to combat ad blockers more fiercely. If Google can beat ad blockers on YouTube, other sites will dig their heels in too. There’s a long-term strategy here to nuke ad blocking permanently. That’s what that Web Environment Integrity shit was about, and you better believe that will be back with a new name.


  • That’s kind of what I’m thinking too.

    Legitimately, the degree to which Proton advertises, the sheer amount of blog spam and such, made me very, very resistant to it. I really don’t care how private it all is or how well it works; I have spent enough time on the internet and engaged with enough small tech company services to recognize a fierce push for growth, and experience has taught me to avoid a for-profit company that sells to you that hard. One day the growth will stop, and the cannibalizing will begin.

    But a move to a non-profit model is, at least theoretically, a move in the right direction. I’m more willing to engage.

    I still don’t trust that they won’t change their mind down the road, but it’s a start.

    And the point about OpenAI is moot, because being a non-profit doesn’t make the actual purpose of the company any less shitty. Especially when Microsoft was feeding it money for the purpose of harvesting what they would create. They still had shitty motives and created a tool that is very ethically “questionable” at best, and that was true from the very beginning. The fact that their ethics team was gutted the moment they tried to exercise their purpose tells you everything.

    The non-profit company created a tool that will be used primarily by for-profit companies and will hurt individuals. The moniker barely applies.