• 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • It can eventually help disabled people move, see, hear and talk.

    For everyday people, this will replace phones and computers completely. We will be able to project a private screen onto any surface, even mid-air. We will be thinking the words instead of saying them during phone calls.

    Movies and games are going to be so immersive it’s probably going to cause some serious societal issues. LARPing is going to become big, I’m guessing.

    That’s just the surface stuff that’s easy to think of. It gets even nuttier if you think about recording and downloading dreams and memories, some of the really sci-fi stuff. The possibilities are literally endless. Obviously, though, there’s a way to go; it’s still in its infancy.



  • Don’t get me wrong, I’m not volunteering, but eventually it will be safe.

    There are definitely barriers to overcome, but to go with your analogy, I drive a car every day on the highway even though a malfunction or even just another user being stupid can easily lead to my death. That’s just to get to work or see friends. I could imagine myself braving worse to get to use full-dive VR.

    You couldn’t have paid me to get into one of those death traps when cars were first invented, though. I’m eager, but I will definitely wait a while before jumping in.




  • Really easy to see where this is going.

    “open source image synthesis technologies such as Stable Diffusion allow the creation of AI-generated pornography with ease, and a large community has formed around tools and add-ons that enhance this ability. Since these AI models are openly available and often run locally, there are sometimes no guardrails preventing someone from creating sexualized images of children, and that has rung alarm bells among the nation’s top prosecutors. (It’s worth noting that Midjourney, DALL-E, and Adobe Firefly all have built-in filters that bar the creation of pornographic content.)”

    Paid software that can be reined in so it doesn’t compete with Netflix and Disney is fine; the open-source stuff is Satan’s spawn.

    The easy solution would be to go after the ones who distribute the pictures; this is only about keeping the gravy train going.




  • I’m gonna post the whole article because it’s garbage, has no substance and I don’t believe people should click on the link. Do better, GameSpot.

    "Bethesda is about to launch Starfield, but what’s coming next? Bethesda Game Studios is making The Elder Scrolls VI and then Fallout 5, so the studio is staying quite busy. In a new interview with GQ, Bethesda’s Todd Howard shared a few new morsels about The Elder Scrolls 6 and discussed when he might retire from making games.

    Starting off with the game’s announcement in June 2018, Howard said he often wonders if it was the right thing to announce it so early. “I have asked myself that a lot,” he said. “I don’t know. I probably would’ve announced it more casually.”

    Howard also confirmed that The Elder Scrolls 6, or whatever it’s called, does already have a codename but he would not reveal it. As for what he could say, Howard said the game aims to “fill that role of the ultimate fantasy-world simulator.”

    “And there are different ways to accomplish that given the time that has passed,” he said.

    Howard is 53 now and said it’s “weird for me” to think about retirement, something he believes is a “long, long way off.”

    “I want to do it forever,” he said. “I think the way I work will probably evolve, but… look at [71-year-old Mario creator and Nintendo legend Shigeru Miyamoto]. He’s still doing it,” Howard said.

    In addition to his duties on Starfield and The Elder Scrolls 6, Howard is an executive producer on the new Indiana Jones game in the works at Machine Games."




  • It depends on what kind of AI, but no, giving sources and building with only volunteered data is just not possible at our current technological level. I’m mostly talking about large LLMs, because that’s what’s really at stake, and they train on huge amounts of data. Like ALL of Stack Overflow, GitHub, Reddit, etc. Just fine-tuning them at a consumer level takes more than 50,000 question-and-answer pairs, and that’s just one tiny superficial layer added on top.

    Grammarly should absolutely add an opt-out option to gain consumers’ trust, but forcing the whole industry to do so is a disaster.

    If individuals can opt out, so will websites, to “protect their users”. Then we get data hoarding, where Stack Overflow and GitHub opt out of all open-source options but sell the data to the only ones who can still afford to build AIs: Microsoft and Google. It won’t include data from certain individuals, the few who opt out, but I’m guessing the opt-in will eventually be written directly into websites’ terms of service: you opt in or you fuck off.

    How does anyone except corporations benefit from this kind of circus? In 10 years, AI will be doing most office work. Google isn’t dumb and wants that profit. They and OpenAI have all the data; they can strong-arm or buy whatever they’re missing. Restricting and legislating only widens their moat.


  • Most of the data is scraped; it’s not up to the website. You can’t give a list of citations, since it isn’t a search engine: it doesn’t know where the information comes from, and it’s highly transformative, melding information from hundreds if not thousands of different sources.

    If it relied only on volunteered data, there simply wouldn’t be enough.

    Any law restricting data use in AI is only going to benefit corporations; there isn’t a solution for individual content creators. You can’t pay them for the drop in the bucket they add, the logistics are insane. You can let them opt out, but then you need to do the same for whole websites, which leads to a corporate hellscape where three companies own our whole economy, since they’re the only ones who can train AIs.