The new global study, conducted in partnership with The Upwork Research Institute, interviewed 2,500 C-suite executives, full-time employees, and freelancers worldwide. The results show that optimistic expectations about AI’s impact are not aligning with the reality many employees face. The study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it is also hampering productivity and contributing to employee burnout.

  • barsquid@lemmy.world

    Wow, shockingly, employing a virtual dumbass who is confidently wrong all the time doesn’t help people finish their tasks.

    • Etterra@lemmy.world

      It’s like employing a perpetually high idiot, but more productive while also being less useful. Instead of slow medicine you get fast garbage!

  • tvbusy@lemmy.dbzer0.com

    This study failed to take into account the need to feed information to AI. Companies now prioritize feeding information to AI over actually making it usable for humans. Who cares about analyzing the data? Just give it to AI to figure out. The data can no longer be analyzed by humans? Just ask AI. It can’t figure it out? Give it more data so it can. Rinse, repeat. This is a race to the bottom where information becomes useless to humans.

  • Sk1ll_Issue@feddit.nl

    The study identifies a disconnect between the high expectations of managers and the actual experiences of employees

    Did we really need a study for that?

  • cheddar@programming.dev

    Me: no way, AI is very helpful, and if it isn’t then don’t use it

    created challenges in achieving the expected productivity gains

    achieving the expected productivity gains

    Me: oh, that explains the issue.

    • Bakkoda@sh.itjust.works

      It’s hilarious to watch it used well and then see human nature just kick in.

      We started using some “smart tools” for manufacturing scheduling, and it’s honestly been really, really great; it highlighted some shortcomings that we could easily attack to get high-reward/low-risk CAPAs out of.

      The company decided to continue using the scheduling setup but not to invest in a single opportunity we discovered, including simple people processes. Took exactly zero wins. Fuckin amazing.

  • FartsWithAnAccent@fedia.io

    They tried implementing AI in a few of our systems and the results were always fucking useless. What we call “AI” can be helpful in some ways, but I’d bet the vast majority of it is bullshit half-assed implementations so companies can claim they’re using “AI”.

    • DragonTypeWyvern@midwest.social

      The one way “AI” has improved my life is that a banking app’s search function got slightly better.

      Oh, and a porn game did okay with it as an art generator, but the creator was still strangely lazy about it. You’re telling me you can make infinite free pictures of big tittied goth girls and you only included a few?

  • GreatAlbatross@feddit.uk

    The workload that’s starting now is spotting bad code written by colleagues using AI, and persuading them to rewrite it.

    “But it works!”

    ‘It pulls in 15 libraries, 2 of which you need to manually install beforehand, to achieve something you can do in 5 lines using this default library’
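
    To make the contrast concrete, here is a minimal, hypothetical sketch of the “5 lines using this default library” case; the task (fetching a small JSON document) and the placeholder URL are invented for illustration, not taken from the exchange above:

        # Hypothetical stdlib-only version of "download and parse some JSON".
        # No pip installs needed, unlike a suggestion that pulls in requests
        # plus a stack of helper packages for the same job.
        import json
        from urllib.request import urlopen

        with urlopen("https://example.com/report.json") as resp:  # placeholder URL
            data = json.load(resp)
        print(data)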

    • JackbyDev@programming.dev

      I was trying to find out how to get human-readable timestamps from my shell history. The AI gave me this crazy script. It worked, but it was super slow. Later I learned you could just do history -i.

      • GreatAlbatross@feddit.uk

        Turns out, a lot of the problems in nixland were solved three decades ago with a single flag on a built-in utility.

  • Lvxferre@mander.xyz

    Large “language” models decreased my workload for translation. There’s a catch though: I choose when to use them, instead of being required to use them even when it doesn’t make sense and/or when I know that the output will be shitty.

    And, if my guess is correct, those 77% are caused by overexcited decision makers in corporations trying to shove AI into every single step of production.

  • Nobody@lemmy.world

    You mean the multi-billion dollar, souped-up autocorrect might not actually be able to replace the human workforce? I am shocked, shocked I say!

    Do you think Sam Altman might have… gasp lied to his investors about its capabilities?

    • SlopppyEngineer@lemmy.world

      Nooooo. I mean, we have about 80 years of AI research history, and the field is full of overhyped promises that this particular tech is the holy grail of AI, only to end in disappointment each time, but this time will be different! /s

  • Hackworth@lemmy.world

    I have the opposite problem. Gen A.I. has tripled my productivity, but the C-suite here is barely catching up to 2005.

        • Melvin_Ferd@lemmy.world

          Replace the Joker with the media, and replace “distract you from the bank heist” with “convince you to hate AI”, then yes.

          • Flying Squid@lemmy.world

            Do convince us why we should like something which is a massive ecological disaster in terms of fresh water and energy usage.

            Feel free to do it while denying climate change is a problem if you wish.

            • Melvin_Ferd@lemmy.world

              I wrote this and fed it through ChatGPT to help make it more readable. To me that’s pretty awesome. If I wanted, I could have it written like an Elton John song. If that doesn’t convince you it’s fun and worth it, then maybe the argument below will. Or not. Either way, I like it.


              I don’t think I’ll convince you, but there are a lot of arguments to make here.

              I heard a large AI model is equivalent to the emissions from five cars over its lifetime. And yes, the water usage is significant—something like 15 billion gallons a year just for a Microsoft data center. But that’s not just for AI; data centers are something we use even if we never touch AI. So, absent AI, it’s not like we’re up in arms about the waste and usage from other technologies. AI is being singled out—it’s the star of the show right now.

              But here’s why I think we should embrace it: the potential. I’m an optimist and I love technology. AI bridges gaps in so many areas, making things that were previously difficult much easier for many people. It can be an equalizer in various fields.

              The potential with AI is fascinating to me. It could bring significant improvements in many sectors. Think about analyzing and optimizing power grids, making medical advances, improving economic forecasting, and creating jobs. It can reduce mundane tasks through personalized AI, like helping doctors take notes and process paperwork, freeing them up to see more patients.

              Sure, it consumes energy and has costs, but its potential is huge. It’s here and advancing. If we keep letting the media convince us to hate it, this technology will end up hoarded by elites and possibly even made illegal for the rest of us. Imagine having a pocket advisor for anything: mechanical issues, legal questions, gardening problems, medical concerns. We’re not there yet, but remember, the first cell phones were the size of a brick. The potential is enormous, and considering all the things we waste energy and resources on, this one at least has benefits to weigh against the costs.

              • Flying Squid@lemmy.world

                Not being able to use your own words to explain something to me, and instead having the thing that is an ecological disaster and lies all the time explain it, really only reinforces my point that there’s no reason to like this technology.

                • Melvin_Ferd@lemmy.world

                  They are my own words. I wrote out the whole thing, but I was never good with grammar and fully admit that what I write is often confusing or ambiguous. I can leverage ChatGPT the same way I would leverage spell check in Word. I don’t see any problem there.

                  But if you don’t mind, I’m interested in the points discussed.

  • TrickDacy@lemmy.world

    AI gets used stupidly a lot, but this seems odd. For me, GitHub Copilot has sped up writing code. It’s hard to say by how much, but it definitely saves me seconds several times per day. It certainly hasn’t added to my workload…

    • HakFoo@lemmy.sdf.org

      They’ve got a guy at work whose job title is basically AI Evangelist. This is terrifying in that it’s a financial tech firm handling twelve figures a year of business, the last place where people will put up with “plausible bullshit” in their products.

      I grudgingly installed the Copilot plugin, but I’m not sure what it can do for me better than a snippet library.

      As a rudimentary exercise, I asked it to generate a test suite for a function. It was able to work out “yes, there are n return values, so write n test cases” and “you’re going to actually have to CALL the function under test”, but it was unable to figure out how to build the object being fed in to trigger any of those cases; doing that would require grokking much of the code base. I didn’t need to burn half a barrel of oil for that.
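
      A toy reconstruction of that failure mode, purely for illustration; the domain objects, names, and branches below are invented, not taken from the actual codebase:

          # Hypothetical function with three return values (Python).
          from dataclasses import dataclass

          @dataclass
          class Transaction:
              currency: str = "USD"
              flagged: bool = False

          def classify(tx: Transaction) -> str:
              if tx.flagged:
                  return "review"
              if tx.currency != "USD":
                  return "convert"
              return "approve"

          # What the generated suite tends to look like: three test cases,
          # the function is actually called, but every test feeds in a
          # default-constructed object, so only the "approve" branch is
          # ever reached and the first two assertions are simply wrong.
          def test_review():
              assert classify(Transaction()) == "review"

          def test_convert():
              assert classify(Transaction()) == "convert"

          def test_approve():
              assert classify(Transaction()) == "approve"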

      I’d be hesitant to trust it with “summarize this obtuse spec document” when half the time said documents are self-contradictory or downright wrong. Again, plausible bullshit isn’t suitable.

      Maybe the problem is that I’m too close to the specific problem. AI tooling might be better for open-ended or free-association “why not try glue on pizza” type discussions, but when you already know “send exactly 4-7-Q-unicorn emoji in this field or the transaction is converted from USD to KPW”, having to coax the machine to come to that conclusion 100% of the time is harder than just doing it yourself.

      I can see why the marketing and sales people love it, and maybe customer service too: click one button and turn one coherent “here’s why it’s broken” sentence into 500 words of flowery, says-nothing prose. But I demand better from my machine overlords.

      Tell me when Stable Diffusion figures out that “Carrying battleaxe” doesn’t mean “katana randomly jutting out from forearms”; maybe at that point AI will be good enough for code.