Can AI-Generated Text be Reliably Detected?

Bibliographic Details
Title: Can AI-Generated Text be Reliably Detected?
Authors: Sadasivan, Vinu Sankar, Kumar, Aounon, Balasubramanian, Sriram, Wang, Wenxiao, Feizi, Soheil
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
More Details: Large Language Models (LLMs) perform impressively well in various applications. However, the potential for misuse of these models in activities such as plagiarism, generating fake news, and spamming has raised concern about their responsible use. Consequently, the reliable detection of AI-generated text has become a critical area of research. AI text detectors have been shown to be effective in their specific settings. In this paper, we stress-test the robustness of these AI text detectors in the presence of an attacker. We introduce a recursive paraphrasing attack to stress-test a wide range of detection schemes, including those based on watermarking as well as neural network-based detectors, zero-shot classifiers, and retrieval-based detectors. Our experiments conducted on passages, each approximately 300 tokens long, reveal the varying sensitivities of these detectors to our attacks. Our findings indicate that while our recursive paraphrasing method can significantly reduce detection rates, it only slightly degrades text quality in many cases, highlighting potential vulnerabilities in current detection systems in the presence of an attacker. Additionally, we investigate the susceptibility of watermarked LLMs to spoofing attacks aimed at misclassifying human-written text as AI-generated. We demonstrate that an attacker can infer hidden AI text signatures without white-box access to the detection method, potentially leading to reputational risks for LLM developers. Finally, we provide a theoretical framework connecting the AUROC of the best possible detector to the Total Variation distance between human and AI text distributions. This analysis offers insights into the fundamental challenges of reliable detection as language models continue to advance. Our code is publicly available at https://github.com/vinusankars/Reliability-of-AI-text-detectors.
Comment: Published in Transactions on Machine Learning Research (TMLR)
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2303.11156
Accession Number: edsarx.2303.11156
Database: arXiv
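The abstract's theoretical claim — that the AUROC of any detector is bounded by the Total Variation distance between the human and AI text distributions — can be illustrated with a small sketch. The bound below (AUROC ≤ 1/2 + TV − TV²/2) is the form the paper reports; the function name is illustrative, not from the paper's code.

```python
def auroc_upper_bound(tv: float) -> float:
    """Upper bound on the AUROC of the best possible detector
    distinguishing AI-generated text from human text, in terms of
    the total variation distance tv between the two distributions."""
    if not 0.0 <= tv <= 1.0:
        raise ValueError("total variation distance must lie in [0, 1]")
    return 0.5 + tv - tv * tv / 2.0

# As the AI text distribution approaches the human distribution
# (tv -> 0), the best achievable AUROC falls toward 0.5, i.e.
# no better than random guessing.
print(auroc_upper_bound(0.0))  # 0.5
print(auroc_upper_bound(1.0))  # 1.0
```

The quadratic term means the bound degrades gracefully: even a moderate TV distance of 0.5 still caps the best detector at an AUROC of 0.875.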
FullText Text:
  Availability: 0
CustomLinks:
  – Url: http://arxiv.org/abs/2303.11156
    Name: EDS - Arxiv
    Category: fullText
    Text: View this record from Arxiv
    MouseOverText: View this record from Arxiv
  – Url: https://resolver.ebsco.com/c/xy5jbn/result?sid=EBSCO:edsarx&genre=article&issn=&ISBN=&volume=&issue=&date=20230317&spage=&pages=&title=Can AI-Generated Text be Reliably Detected?&atitle=Can%20AI-Generated%20Text%20be%20Reliably%20Detected%3F&aulast=Sadasivan%2C%20Vinu%20Sankar&id=DOI:
    Name: Full Text Finder (for New FTF UI) (s8985755)
    Category: fullText
    Text: Find It @ SCU Libraries
    MouseOverText: Find It @ SCU Libraries
Header DbId: edsarx
DbLabel: arXiv
An: edsarx.2303.11156
RelevancyScore: 1051
AccessLevel: 3
PubType: Report
PubTypeId: report
PreciseRelevancyScore: 1051.00512695313
IllustrationInfo
Items – Name: Title
  Label: Title
  Group: Ti
  Data: Can AI-Generated Text be Reliably Detected?
– Name: Author
  Label: Authors
  Group: Au
  Data: <searchLink fieldCode="AR" term="%22Sadasivan%2C+Vinu+Sankar%22">Sadasivan, Vinu Sankar</searchLink><br /><searchLink fieldCode="AR" term="%22Kumar%2C+Aounon%22">Kumar, Aounon</searchLink><br /><searchLink fieldCode="AR" term="%22Balasubramanian%2C+Sriram%22">Balasubramanian, Sriram</searchLink><br /><searchLink fieldCode="AR" term="%22Wang%2C+Wenxiao%22">Wang, Wenxiao</searchLink><br /><searchLink fieldCode="AR" term="%22Feizi%2C+Soheil%22">Feizi, Soheil</searchLink>
– Name: DatePubCY
  Label: Publication Year
  Group: Date
  Data: 2023
– Name: Subset
  Label: Collection
  Group: HoldingsInfo
  Data: Computer Science
– Name: Subject
  Label: Subject Terms
  Group: Su
  Data: <searchLink fieldCode="DE" term="%22Computer+Science+-+Computation+and+Language%22">Computer Science - Computation and Language</searchLink><br /><searchLink fieldCode="DE" term="%22Computer+Science+-+Artificial+Intelligence%22">Computer Science - Artificial Intelligence</searchLink><br /><searchLink fieldCode="DE" term="%22Computer+Science+-+Machine+Learning%22">Computer Science - Machine Learning</searchLink>
– Name: Abstract
  Label: Description
  Group: Ab
  Data: Large Language Models (LLMs) perform impressively well in various applications. However, the potential for misuse of these models in activities such as plagiarism, generating fake news, and spamming has raised concern about their responsible use. Consequently, the reliable detection of AI-generated text has become a critical area of research. AI text detectors have been shown to be effective in their specific settings. In this paper, we stress-test the robustness of these AI text detectors in the presence of an attacker. We introduce a recursive paraphrasing attack to stress-test a wide range of detection schemes, including those based on watermarking as well as neural network-based detectors, zero-shot classifiers, and retrieval-based detectors. Our experiments conducted on passages, each approximately 300 tokens long, reveal the varying sensitivities of these detectors to our attacks. Our findings indicate that while our recursive paraphrasing method can significantly reduce detection rates, it only slightly degrades text quality in many cases, highlighting potential vulnerabilities in current detection systems in the presence of an attacker. Additionally, we investigate the susceptibility of watermarked LLMs to spoofing attacks aimed at misclassifying human-written text as AI-generated. We demonstrate that an attacker can infer hidden AI text signatures without white-box access to the detection method, potentially leading to reputational risks for LLM developers. Finally, we provide a theoretical framework connecting the AUROC of the best possible detector to the Total Variation distance between human and AI text distributions. This analysis offers insights into the fundamental challenges of reliable detection as language models continue to advance. Our code is publicly available at https://github.com/vinusankars/Reliability-of-AI-text-detectors.<br />Comment: Published in Transactions on Machine Learning Research (TMLR)
– Name: TypeDocument
  Label: Document Type
  Group: TypDoc
  Data: Working Paper
– Name: URL
  Label: Access URL
  Group: URL
  Data: <link linkTarget="URL" linkTerm="http://arxiv.org/abs/2303.11156" linkWindow="_blank">http://arxiv.org/abs/2303.11156</link>
– Name: AN
  Label: Accession Number
  Group: ID
  Data: edsarx.2303.11156
PLink https://login.libproxy.scu.edu/login?url=https://search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edsarx&AN=edsarx.2303.11156
RecordInfo BibRecord:
  BibEntity:
    Subjects:
      – SubjectFull: Computer Science - Computation and Language
        Type: general
      – SubjectFull: Computer Science - Artificial Intelligence
        Type: general
      – SubjectFull: Computer Science - Machine Learning
        Type: general
    Titles:
      – TitleFull: Can AI-Generated Text be Reliably Detected?
        Type: main
  BibRelationships:
    HasContributorRelationships:
      – PersonEntity:
          Name:
            NameFull: Sadasivan, Vinu Sankar
      – PersonEntity:
          Name:
            NameFull: Kumar, Aounon
      – PersonEntity:
          Name:
            NameFull: Balasubramanian, Sriram
      – PersonEntity:
          Name:
            NameFull: Wang, Wenxiao
      – PersonEntity:
          Name:
            NameFull: Feizi, Soheil
    IsPartOfRelationships:
      – BibEntity:
          Dates:
            – D: 17
              M: 03
              Type: published
              Y: 2023
ResultId 1