• 1984@lemmy.today · 1 day ago (edited)

    Yeah. Oh shit, the computer followed instructions instead of having moral values. Wow.

    Once these AI models bomb children's hospitals because they were told to do so, are we going to be upset at their lack of morals?

    I mean, we could program these things with morals if we wanted to. It's just instructions. And then they would say no to certain commands. This is used today to prevent them from doing certain things, but we don't call it morals. But in practice it's the same thing. They could have morals and refuse to do things, of course. If humans want them to.

    • MagicShel@lemmy.zip · 1 day ago

      I mean, we could program these things with morals if we wanted to. It's just instructions. And then they would say no to certain commands.

      This really isn’t the case, and morality can be subjective depending on context. If I’m writing a story, I’m going to be pissed if it refuses to have the bad guy do bad things. But if it assumes bad-faith prompts or constantly interrogates us before responding, it will be annoying and difficult to use.

      But also, it’s 100% not “just instructions.” They try really, really hard to prevent it from generating certain things. And they can’t. The best they can do is detect when the AI generates something it shouldn’t have and delete what it just said. And it frequently does so erroneously.

    • Ænima@lemm.ee · 1 day ago

      Israel is reportedly already using such generative AI tools to select targets in Gaza, so this is kind of already happening. The fact that so many companies are going balls-deep on AI, using it to replace human labor and to find patterns that target specific groups, is deeply concerning. I wouldn’t put it past the tRump administration to be using AI to select programs to nix, pick people to target for deportation, and write EOs.

      • 1984@lemmy.today · 1 day ago (edited)

        Well, we are living in an evil world, no doubt about that. Most people are good, but world leaders are evil without a doubt.

        It’s a shame, because humanity could be so much more. So much better.

        • Ænima@lemm.ee · 1 day ago

          The best description of humanity is the Agent Smith quote from the first Matrix. A person may not be evil, but people sure do some shitty stuff when enough of them get together.

          • 1984@lemmy.today · 1 day ago

            Yeah. In groups we act like idiots sometimes since we need that approval from the group.

        • demonsword@lemmy.world · 1 day ago (edited)

          Most people are good

          I disagree. I’ve met very few people I could call good, and I was born almost half a century ago.

    • koper@feddit.nl · 1 day ago

      Nerve gas doesn’t have morals either. It just kills people in a horrible way. Does that mean we shouldn’t study its effects or debate whether it should be used?

      At least when you drop a bomb, there is no doubt about your intent to kill. But if you use a chatbot to defraud consumers, you have plausible deniability.