• Rhaedas@fedia.io · 2 months ago

    No surprise, since there’s not a lot of pressure to do any other regulation on the closed-source versions. Self-monitoring by a for-profit company always works out well…

    And for anyone saying “AGI won’t happen, there’s no danger”: what if, on the slightest chance, you’re wrong? Is the mad rush to get the next product out, without any research into what we’re doing, worth a mistake? Sci-fi is fiction, but there are lessons in it too, and we’re ignoring them all because “that can’t happen” is stronger than “let’s be sure”.

    Besides, even with no AGI, humans alone can do huge damage with “bad” AI tools, which we’re not looking into either.

    • hedgehog@ttrpg.network · 2 months ago

      And for anyone saying “AGI won’t happen, there’s no danger”: what if, on the slightest chance, you’re wrong? Is the mad rush to get the next product out, without any research into what we’re doing, worth a mistake? Sci-fi is fiction, but there are lessons in it too, and we’re ignoring them all because “that can’t happen” is stronger than “let’s be sure”.

      What sorts of scenarios involving the emergence of AGI do you think regulating the availability of LLM weights and training data (or more closely regulating AI training, research, and development within “closed-source” shops like OpenAI) would help us avoid?

      And how does that threat compare to the impending damage from climate change if we don’t reduce energy consumption and our reliance on fossil fuels?

      Besides, even with no AGI, humans alone can do huge damage with “bad” AI tools, which we’re not looking into either.

      When I search for “misuse of AI” I get a ton of results from people talking about exactly that.