Which LLM said that? - watermarking generated text

30 minutes


With the emergence of large generative language models comes the problem of attributing AI-generated text to its original source. This raises many concerns regarding, e.g., social engineering, fake-news generation, and cheating on educational assignments. While several black-box methods exist for detecting whether a text was written by a human or an LLM, they have significant issues.

I will discuss how watermarking can equip your LLM with a mechanism, undetectable to the human eye, that gives you the means to verify whether it was the true source of a generated text.
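As a flavour of what such a mechanism can look like, here is a minimal, hypothetical sketch of one popular family of techniques, "green list" watermarking: at each step the vocabulary is pseudo-randomly split (seeded by the previous token) into a green and a red list, generation is biased toward green tokens, and a detector scores how many tokens fall on their green list compared to chance. All names, parameters, and the toy vocabulary below are illustrative assumptions, not taken from the talk.

```python
# Illustrative "green list" watermarking sketch; parameters are assumptions.
import random

VOCAB = list(range(1000))   # toy vocabulary of token ids
GREEN_FRACTION = 0.5        # fraction of tokens marked "green" at each step

def green_list(prev_token: int) -> set:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    rng = random.Random(prev_token)
    k = int(len(VOCAB) * GREEN_FRACTION)
    return set(rng.sample(VOCAB, k))

def detect(tokens: list) -> float:
    """z-score: how often tokens fall on their green list versus chance."""
    hits = sum(t in green_list(prev) for prev, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / var ** 0.5

# A "watermarked" sequence: always pick a green token given the previous one.
rng = random.Random(0)
wm = [rng.choice(VOCAB)]
for _ in range(50):
    wm.append(rng.choice(sorted(green_list(wm[-1]))))

# An unwatermarked sequence: uniform random tokens.
plain = [rng.choice(VOCAB) for _ in range(51)]

print(detect(wm))     # large positive z-score: watermark detected
print(detect(plain))  # near zero: consistent with chance
```

Because the split is keyed only by a secret seeding rule, the bias is statistically invisible to a human reader, yet anyone holding the key can verify authorship from the token statistics alone.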

The speaker

Adam Kaczmarek

I am a passionate Deep Learning specialist implementing cutting-edge research ideas in business projects, connecting best practices from research and engineering environments. My primary area of interest is Natural Language Processing, but I've also worked in other ML-related domains. With a team from ReasonField Lab, I'm developing an open-source, all-in-one XAI library, FoXAI.