Imagine an AGI (Artificial General Intelligence) that could perform any task a human can do on a computer, but at a much faster pace. This AGI could create an operating system, produce a movie better than anything you’ve ever seen, and much more, all while being limited to SFW (Safe For Work) content. What are the first things you would ask it to do?

    • CanadaPlus@lemmy.sdf.org · 1 year ago

      You’re assuming a human could do that on a computer, though. It’s kind of hard to improve on that basic and very mature technology.

        • CanadaPlus@lemmy.sdf.org · 1 year ago

          I put more weight on the description text, but yes that was in the title.

          Even if we assume it’s a god, though, I’m not sure there’s a way to improve on most kinds of generators more than incrementally. I don’t expect it would improve on “the wheel” either.

          • jerry@lemmy.world · 1 year ago

            I’m sure there are methods of generating electricity that we haven’t even stumbled on.

              • jerry@lemmy.world · 1 year ago

                I think we’re pretty far from the peak understanding of almost everything. There are so many discoveries still to be made.

                • CanadaPlus@lemmy.sdf.org · 1 year ago

                  Based on what? Sure, I’m guessing we’re just getting started in planetary science and cosmology, but power generation has been explored to death, and we’re still using the same basic alternator design Tesla did.

  • quotheraven404@lemmy.ca · 1 year ago

    I’d want a familiar/daemon that was running an AI personality to act as a personal assistant, friend and interactive information source. It could replace therapy and be a personalized tutor, and it would always be up to date on the newest science and global happenings.

    • SirGolan@lemmy.sdf.org · 1 year ago

      That’s possible now. I’ve been working on such a thing for a while, and it can generally do all of that, though I wouldn’t advise using it for therapy (or medical advice), mostly for legal reasons rather than ability. When you create a new agent, you can tell it what type of personality you want. It doesn’t just respond to commands; it also figures out what needs to be done and does it independently.
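
      For illustration, a bare-bones version of that plan-and-act idea might look something like the Python below. The `call_llm` helper, the personality prompt, and the loop structure are stand-ins made up for this sketch, not any particular library or product:

```python
# Bare-bones plan-and-act loop: give the agent a personality and a goal,
# let the model decide the next step, repeat until it says it's done.
from typing import Dict, List


def call_llm(system_prompt: str, messages: List[Dict[str, str]]) -> str:
    """Stand-in for whatever chat-completion API you actually use."""
    raise NotImplementedError("wire this up to your model of choice")


def run_agent(personality: str, goal: str, max_steps: int = 5) -> List[str]:
    system_prompt = (
        f"You are a personal assistant with this personality: {personality}. "
        "Given the current goal, reply with the single next step to take, "
        "or DONE if the goal is complete."
    )
    history: List[Dict[str, str]] = [{"role": "user", "content": f"Goal: {goal}"}]
    steps: List[str] = []
    for _ in range(max_steps):
        step = call_llm(system_prompt, history)
        if step.strip().upper() == "DONE":
            break
        steps.append(step)
        # A real agent would execute the step (search, calendar, email, ...)
        # and feed the result back in; here we just record it.
        history.append({"role": "assistant", "content": step})
        history.append({"role": "user", "content": "Step acknowledged. What's next?"})
    return steps
```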

      • quotheraven404@lemmy.ca · 1 year ago

        Yeah, I haven’t played with it much, but it feels like ChatGPT is already getting pretty close to this kind of functionality. It makes me wonder what’s missing to take it to the next level beyond something like Siri or Alexa. Maybe it needs to be more proactive than just waiting for prompts?

        I’d be interested to know whether current AI could recognize the symptoms of different mental health issues and use the known strategies for dealing with them. Like, if a user shows signs of anxiety or depression, could the AI use CBT tools to conversationally challenge those thought processes without it really feeling like therapy? I guess, just like with self-driving cars, this kind of thing would be legally murky if it went awry and accidentally ended up convincing someone to commit suicide or something, haha.
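
        To make the “more proactive than just waiting for prompts” idea concrete, here’s a toy sketch: a scheduled check-in that flags low-mood language and asks the model for a gentle, conversational reframe. The keyword list and the `get_recent_messages` / `send_to_user` / `call_llm` hooks are all invented for illustration, and anything therapy-adjacent would obviously need real clinical and legal review:

```python
import time

# Toy illustration only: keyword matching is a very crude mood signal, and a
# real assistant would need much better detection plus human oversight.
MOOD_KEYWORDS = {"hopeless", "worthless", "can't cope", "panicking"}


def looks_low_mood(text: str) -> bool:
    lowered = text.lower()
    return any(keyword in lowered for keyword in MOOD_KEYWORDS)


def check_in_loop(get_recent_messages, send_to_user, call_llm, interval_s=3600):
    """Check in on a schedule instead of waiting for the user to prompt."""
    while True:
        recent = " ".join(get_recent_messages())
        if looks_low_mood(recent):
            reply = call_llm(
                "You are a supportive assistant. Gently and conversationally "
                "challenge unhelpful thoughts, without sounding clinical.",
                [{"role": "user", "content": recent}],
            )
        else:
            reply = "Hey, just checking in. How's your day going?"
        send_to_user(reply)
        time.sleep(interval_s)
```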

        • SirGolan@lemmy.sdf.org · 1 year ago

          That last bit already happened. An AI (allegedly) told a guy to commit suicide, and he did. A big part of the problem is that while GPT-4, for instance, knows about all the things you just mentioned and can probably do what you’re suggesting, nobody can guarantee it won’t get something horribly wrong at some point. It’s sort of like how self-driving cars can handle maybe 95% of situations correctly, but the remaining 5% of unexpected stuff, the kind that takes extra context a human has and the car was never trained on, is very hard to get past.