Which LLM said that? - watermarking generated text
- Track: PyData: LLMs
- Type: Talk
- Level: intermediate
- Room: Terrace 2A
- Start: 11:55 on 12 July 2024
- Duration: 30 minutes
Abstract
With the emergence of large generative language models comes the problem of attributing AI-generated text to its original source. This raises many concerns regarding, e.g., social engineering, fake news generation, and cheating on educational assignments. While there are several black-box methods for detecting whether a text was written by a human or an LLM, they have significant limitations.
I will discuss how watermarking can equip your LLM with a mechanism that is undetectable to the human eye yet gives you the means of verifying whether your model was the true source of a generated text.
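To make the idea concrete, below is a minimal sketch of one well-known watermarking scheme, the "green list" approach of Kirchenbauer et al. (2023); it is an illustration, not necessarily the exact method covered in the talk. The vocabulary size, the `GREEN_FRACTION` and `GREEN_BIAS` constants, and the `green_list`/`detect` helpers are all toy assumptions for the example.

```python
# Toy sketch of "green list" watermarking (in the spirit of Kirchenbauer et al., 2023).
# No real LLM is involved; token ids and vocabulary size are placeholders.
import hashlib
import math
import random

VOCAB_SIZE = 50_000     # assumed toy vocabulary size
GREEN_FRACTION = 0.5    # fraction of tokens marked "green" at each step
GREEN_BIAS = 2.0        # logit bonus added to green tokens during generation

def green_list(prev_token: int) -> set[int]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def detect(tokens: list[int]) -> float:
    """Z-score of how many tokens land in their step's green list,
    compared to what chance (GREEN_FRACTION) would predict."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std  # a large z-score suggests a watermarked text

# During generation, GREEN_BIAS would be added to the logits of green_list(prev)
# before sampling each token; detection only needs the token ids and the shared hash.
```

Because the green/red partition is derived from a hash rather than stored anywhere, the watermark is invisible in the text itself, yet anyone who knows the hashing scheme can run the detector.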