• WanderingThoughts@europe.pub · 1 day ago

    as they’ve been designed to

    Well, "designed" is maybe too strong a term. It's more like stumbling on something that works and expanding from there. It's all still built on the foundations of the nonsense generator that was GPT-2.

    • FaceDeer@fedia.io · 1 day ago

      Given how dramatically LLMs have improved over the past couple of years, I think it's pretty clear at this point that AI trainers do know something of what they're doing and aren't just randomly stumbling around.

      • WanderingThoughts@europe.pub · 1 day ago

        A lot of the improvement came from finding ways to make the models bigger and more efficient. That approach is running into inherent limits, so the real work on other kinds of models has only just started.

        • Natanael@infosec.pub · 1 day ago

          And from reinforcement learning (specifically, having the model repeat tasks where the answer can be checked by a computer).
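
          A toy sketch of what "checked by a computer" means here (Python; the function name and arithmetic task are made up for illustration). Real pipelines use things like unit tests or math verifiers as the reward signal:

          ```python
          # Toy "verifiable reward": score the model's answer by recomputing
          # the ground truth in code instead of asking a human judge.
          # All names here are hypothetical, for illustration only.

          def verifiable_reward(question: str, model_answer: str) -> float:
              """Return 1.0 if the answer matches the computed result, else 0.0."""
              # e.g. question = "What is 17 * 24?"
              expr = question.removeprefix("What is ").removesuffix("?")
              ground_truth = str(eval(expr))  # acceptable for a toy arithmetic checker
              return 1.0 if model_answer.strip() == ground_truth else 0.0

          # During RL fine-tuning, the model samples many attempts per prompt
          # and this score is the training signal; no human labeling needed.
          print(verifiable_reward("What is 17 * 24?", "408"))  # 1.0
          print(verifiable_reward("What is 17 * 24?", "400"))  # 0.0
          ```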