• darthelmet@lemmy.world · +82/−1 · 6 months ago

      Yeah. It’s more like:

      Researchers: “Look at our child crawl! This is a big milestone. We can’t wait to see what he’ll do in the future.”

      CEOs: Give that baby a job!

      AI stuff was so cool to learn about in school, but it was also really clear how much further we had to go. I’m kind of worried. One earlier period of AI overhype already led to a crash in research funding that lasted decades. I really hope this bubble doesn’t do the same thing.

      • Mossy Feathers (She/They)@pawb.social · +31/−3 · 6 months ago

        I’m… honestly kinda okay with it crashing. It’d suck, because AI has a lot of potential outside of generative tasks, like science and medicine. However, we don’t really have the corporate ethics or morals for it, nor do we have the economic structure for it.

        AI at our current stage is guaranteed to cause problems even when used responsibly, because its entire goal is to do human tasks better than a human can. No matter how hard you try to avoid it, even if you do your best to think carefully and hire humans whenever possible, AI will end up replacing human jobs. What’s the point in hiring a bunch of people with a hyper-specialized understanding of a specific scientific field if an AI can do their work faster and better? If I’m not mistaken, normally having some form of hyper-specialization would be advantageous for the scientist because it means they can demand more for their expertise (so long as it’s paired with a general understanding of other fields).

        However, if you have to choose between 5 hyper-specialized and potentially expensive human scientists, or an AI designed to do the hyper-specialized task with 2–3 human generalists to design the input and interpret the output, which do you go with?

        So long as the output is the same or similar, the no-brainer would be to go with the 2–3 generalists and the AI; it would require less funding and possibly less equipment - and that’s ignoring that, from what I’ve seen, AI tends to be better than human scientists at hyper-specialized tasks (though you still need scientists to design the input and parse the output). As such, you’re basically guaranteed to replace humans with AI.

        We just don’t have the society for that. We should be moving in that direction, but we’re not even close to being there yet. So, again, as much potential as AI has, I’m kinda okay if it crashes. There aren’t enough people who possess a brain capable of handling an AI-dominated world yet. There are too many people who see things like money, government, economics, etc as some kind of magical force of nature and not as human-made systems which only exist because we let them.

      • ed_cock@feddit.de · +21/−1 · 6 months ago

        The sheer waste of energy and mass production of garbage clogging up search results alone is enough to make me hope the bubble will pop reeeeal soon. Sucks for research but honestly the bad far outweighs the good right now, it has to die.

        • MonkeMischief@lemmy.today · +3 · edited · 6 months ago

          Yeah search is pretty useless now. I’m so over it. Trying to fix problems always has the top 15 results be like:

          “You might ask yourself, how is Error-13 on a Maytag Washer? Well first, let’s start with What Is a Maytag Washer. You would be right to assume washing clothes has been a task for thousands of years. The first washing machine was invented…” (Yes I wrote that by hand, how’d I do? Lol)

          It’s the same as how I really stopped caring whether crypto was gonna “revolutionize money” once it became a gold rush to hoard GPUs and, subsequently, any other component you could store a hash on.

          R&D and open source for the advancement of humanity is cool.

          Building enormous farms and burning out powerful components that could’ve been used for art and science, to instead prove-that-you-own-a-receipt-for-an-ugly-monkey-jpeg hoping it explodes in value, is appalling.

          I’m sure there was an ethical application way back there somewhere, but it just becomes a pump-and-dump scheme and ruins things for a lot of good people.

      • Match!!@pawb.social · +7 · 6 months ago

        Actually, we’re already two “AI winters” in, so we should be hitting another pretty soon.

          • Match!!@pawb.social · +5 · 6 months ago

            AI as a field initially started getting big in the 1960s with machine translation and perceptrons (super-basic neural networks), which started promising but hit a wall basically immediately. Around 1974 the US military cut most of their funding to their AI projects because they weren’t working out, but by 1980 they started funding AI projects again because people had invented new AI approaches. Around 1984 people coined the term “AI winter” for the time when funding had dried up, which incidentally was right before funding dried up again in the 90s until around the 2010s.
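
            For anyone curious what a 1960s perceptron actually was: it’s a single artificial neuron with a linear decision rule and a dead-simple weight update. Here’s a minimal Python sketch (my own illustrative names, not any historical implementation) showing it learning AND, a linearly separable function. The famous “wall” was that a single perceptron provably cannot learn non-separable functions like XOR.

            ```python
            # Minimal sketch of a classic perceptron: one neuron, a linear
            # threshold, and the perceptron learning rule. Illustrative only.

            def train_perceptron(samples, epochs=20, lr=0.1):
                """samples: list of ((x1, x2), label) pairs with label 0 or 1."""
                w1 = w2 = b = 0.0
                for _ in range(epochs):
                    for (x1, x2), label in samples:
                        out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
                        err = label - out  # -1, 0, or +1
                        w1 += lr * err * x1
                        w2 += lr * err * x2
                        b += lr * err
                return w1, w2, b

            def predict(weights, x1, x2):
                w1, w2, b = weights
                return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

            # AND is linearly separable, so the perceptron converges on it:
            AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
            w = train_perceptron(AND)
            print([predict(w, a, b) for (a, b), _ in AND])  # [0, 0, 0, 1]

            # XOR has no separating line, so no single perceptron can learn it --
            # one of the limits that helped trigger the first AI winter.
            ```

            Stacking these neurons into multi-layer networks (and training them with backpropagation) is what eventually got around that limit, decades later.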

    • MagicShel@programming.dev · +27/−2 · edited · 6 months ago

      The more you use generative AI, the less amazing it is. Don’t get me wrong, I enjoy it, but it really can only impress you when it’s talking about a subject you know nothing of. The pictures are terrible, though way better than I could do. The coding is terrible, although it’s amazingly fast for similar quality to a junior developer. The prose seems amazing at first, but as you use it over and over you realize it’s quite bland, continually reverting to a default voice even though it can write really good short passages (that’s specific to ChatGPT-like instruct models; I haven’t seen it with other models).

      I’ve been playing with generative AI for about 5 years, and it has certainly gotten much better in some ways, but it’s still just a neat toy in search of a problem it can solve. There’s a lot of money going into it in the hope it will improve to the point where it can solve some of the things we really want it to, but I’m not sure it ever reliably will. Maybe some other AI technology, but not LLM.

      • Hackworth@lemmy.world · +12/−4 · 6 months ago

        It saves me 10-20 hours of work every week as a corpo video producer, and I use that time to experiment with AI - which has allowed our small team to produce work that would be completely outside our resources otherwise. Without a single additional breakthrough, we’d be finding novel ways to be productive with the current form of generative AI for decades. I understand the desire to temper expectations, and I agree that companies and providers are not handling this well at all. But the tech is already solid. It’s just being misused more often than it’s being wielded well.

        • MagicShel@programming.dev · +12/−1 · edited · 6 months ago

          I don’t have the experience to refute that. But I see the same thing from developers all the time, swearing AI saves them hours - and that’s a domain I know well, where AI does certain very limited things quite well. It can spit out boilerplate stuff pretty quickly, often with few enough errors that I can fix them faster than I could’ve written everything by hand. But it very much relies on me knowing what I’m doing and immediately recognizing the garbage for what it is.

          It does make me a little bit faster at the stuff I’m already good at, at the cost of leading me down some wild rabbit holes on things I don’t know so well. It’s not nothing, but it’s not what I would call professional-grade.

        • suction@lemmy.world · +9/−1 · 6 months ago

          Nobody doubts that it’s useful for helping with bland low-tier work like corpo videos that people are forced to watch to keep their jobs.

          • Hackworth@lemmy.world · +3/−1 · edited · 6 months ago

            I just meant that I work for a corporation. I produce videos for marketing; I’ve been doing it for 25 years.

    • Match!!@pawb.social · +22/−1 · 6 months ago

      Generative AI is amazing for some niche tasks - which are not what it’s being used for.

        • Waraugh@lemmy.dbzer0.com · +14/−2 · 6 months ago

          Creating drafts for the white papers my boss asks for every week about stupid shit on his mind. It used to take a couple of days; now it’s done in one day at most, and I spend my Friday doing chores and checking my email and chat every once in a while, until I send him the completed version before logging out for the weekend.

          • BluesF@lemmy.world · +9 · 6 months ago

            Writing boring shit is LLM dream stuff. Especially tedious corpo shit. I have to write letters and such a lot, and it makes it so much easier having a machine that can summarise material and write it up in dry corporate language in 10 seconds. I already have to proofread my own writing, and there’s almost always 1 or 2 other approvers, so checking it for errors is no extra effort.

          • Hackworth@lemmy.world · +1 · edited · 6 months ago

            I understand this perspective, because the text, image, audio, and video generators all default to the most generic solution. I challenge you to explore past the surface with the simple goal of examining something you enjoy from new angles. All of the interesting work in generative AI is being done at the edges of the models’ semantic spaces. Avoid getting stuck in workflows. Try new ones regularly and compare their efficacies. I’m constantly finding use cases that I end up putting to practical use - sometimes immediately, sometimes six months later when the need arises.

    • eee@lemm.ee · +11/−1 · 6 months ago

      It CAN BE amazing in certain situations. CEO tomfoolery is what’s making generative AI become a joke to the average user.

      • ChaoticNeutralCzech@feddit.de · +5/−2 · edited · 6 months ago

        Yes. It’s not wrong 100% of the time, otherwise you could make a fortune by asking it for investment advice and then doing the opposite.

        What happened is like the current robot craze: they made the technology resemble humans, which drives attention and money. Specialized “robots” can indeed perform tedious tasks (CNC, pick-and-place machines) or work safely with heavier objects (construction equipment). Similarly, we can use AI to identify data forgery or fold proteins. If we try to make either human-like, it will appear to do a wide variety of tasks (which drives sales and investment) but not be great at any of them. You wouldn’t buy a humanoid robot just to reuse your existing shovel if excavators are cheaper. (And no, I don’t think a humanoid robot with digging capabilities will ever be cheaper than a standard excavator.)

        • Match!!@pawb.social · +2 · 6 months ago

          It’s actually really frustrating that LLMs have gotten all the funding. We’re finally at the point where we can build reasonably priced, purpose-built AI, and instead the CEOs want to push trashbag LLMs onto everything.

          • ChaoticNeutralCzech@feddit.de · +3 · 6 months ago

            Well, a conversational AI with sub-human abilities still has some uses. Notably, scamming people en masse, so human email scammers will be put out of their jobs. /s

    • suction@lemmy.world · +1/−2 · 6 months ago

      Uh yeah so amazing I could watch those “xyz but it’s Balenciaga” clips for days!!! /s