• Trailblazing Braille Taser@lemmy.dbzer0.com · +76/−1 · edited · 15 hours ago

    The thing that people don’t understand yet is that LLMs are “yes men”.

    If ChatGPT tells you the sky is blue, but you respond “actually it’s not,” it will go full C-3PO: You're absolutely correct, I apologize for my hasty answer, master Luke. The sky is in fact green.

    Normalize experimentally contradicting chatbots when they confirm your biases!

    • Ookami38@sh.itjust.works · +1 · 25 minutes ago

      I’ve used ChatGPT for argument advice before. Not, like, weaponizing it (“hahah, robot says you’re wrong! Checkmate!”), but more as a sanity check: do these arguments make sense, etc.

      I always try to strip identifying information from the stuff I input, so it HAS to pick a side. It gets it “right” (siding with the author/me) about half the time, it feels like. Usually I’ll ask it to break down each side’s argument individually, then choose the one it agrees with and explain why.

    • Anivia@feddit.org · +1 · edited · 28 minutes ago

      Not always. Sometimes they will agree with you; other times they will double down on their previous message.

    • Classy@sh.itjust.works · +5/−1 · 10 hours ago

      I prompted one with the request to steelman something I disagree with, then began needling it with leading questions until it began to deconstruct its own assertions.