Increasingly, the authors of works being used to train large language models are complaining (and rightfully so) that they never gave permission for such a use-case. If I were an LLM company, I’d be seriously looking for a Plan B right now, whether that’s engaging publishing companies to come up with new licensing options, paying 1,000,000 grad students to write 1,000,000 lines of prose, or something else entirely.

  • FaceDeer@kbin.social · 10 months ago

    The Berne Convention contains an enumerated list of the rights it recognizes as restrictable under IP law. Training AIs is not among them.

    • will_a113@lemmy.ml (OP) · 10 months ago

      Derivative works are, though - and the cases slowly plodding through the court system right now are going to demand a decision on whether an LLM or its outputs count as derivative works.

      • FaceDeer@kbin.social · 10 months ago

        For it to be a derivative work, you’re going to have to prove that the model contains a substantial portion of the material it’s supposedly derived from. Good luck with that; neural nets simply don’t work that way.
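
        For a rough sense of scale (the numbers below are hypothetical, not taken from any particular model), you can compare the bytes available in a model’s weights against the bytes of text it was trained on. The weights amount to a small fraction of the corpus, which is why wholesale verbatim storage isn’t plausible:

        ```python
        # Back-of-envelope comparison using assumed, illustrative numbers.
        params = 70e9             # assumed parameter count (~70B-parameter model)
        bytes_per_param = 2       # 16-bit weights
        training_tokens = 1.5e12  # assumed training-set size in tokens
        bytes_per_token = 4       # rough average for English text

        model_bytes = params * bytes_per_param            # ~140 GB of weights
        corpus_bytes = training_tokens * bytes_per_token  # ~6,000 GB of text

        print(f"model weights:   {model_bytes / 1e9:,.0f} GB")
        print(f"training corpus: {corpus_bytes / 1e9:,.0f} GB")
        print(f"weights are ~{model_bytes / corpus_bytes:.1%} the size of the corpus")
        ```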