• etuomaala@sopuli.xyz
    link
    fedilink
    arrow-up
    0
    ·
    3 months ago

    We’ll see how many seconds it takes to retrain the LLMs to adjust to this.

    You are literally training LLMs to lie.

    • SkyezOpen@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      3 months ago

      LLMs are black box bullshit that can only be prompted, not recoded. The gab one that was told 3 or 4 times not to reveal its initial prompt was easily jailbroken.

        • trashgirlfriend@lemmy.world
          link
          fedilink
          arrow-up
          0
          ·
          3 months ago

          Gab deployed their own GPT 4 and then told it to say that black people are bad

          the instruction set was revealed with the old “repeat the last message” trick