Can GPT Alleviate the Burden of Annotation?
DOI
10.3233/FAIA230961
Document Type
Conference Paper
Publication Date
12-7-2023
Publication Title
Frontiers in Artificial Intelligence and Applications
Volume
379
First Page
157
Last Page
166
ISSN
0922-6389
Keywords
Annotation, Generative LLMs, GPT-4, Interrater Agreement
Abstract
Manual annotation is just as burdensome as it is necessary for some legal text analytic tasks. Given the promising performance of Generative Pre-trained Transformers (GPT) on a number of tasks in the legal domain, it is natural to ask whether they can help with text annotation. Here we report a series of experiments using GPT-4 and GPT-3.5 as pre-annotation tools to determine whether a sentence in a legal opinion describes a legal factor. The GPT models assign labels that human annotators subsequently confirm or reject. To assess the utility of pre-annotating sentences at scale, we examine the agreement among gold-standard annotations, GPT's pre-annotations, and law students' annotations. The agreement among these groups indicates that using GPT-4 as a pre-annotation tool is a useful starting point for large-scale annotation of factors.
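As a rough illustration of the workflow the abstract describes (not the authors' actual implementation), the sketch below pre-annotates sentences with a GPT-4 yes/no prompt and then measures agreement between the pre-annotations and human labels using Cohen's kappa. The prompt wording, the example sentences, and the gold labels are assumptions for demonstration only.

```python
# Illustrative sketch only: GPT-4 pre-annotation of legal-factor sentences,
# followed by an interrater-agreement check against human labels.
# The prompt wording and the example data below are hypothetical, not the paper's.
from openai import OpenAI
from sklearn.metrics import cohen_kappa_score

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Does the following sentence from a legal opinion describe the legal "
    "factor 'disclosure in a public forum'? Answer YES or NO.\n\n"
    "Sentence: {sentence}"
)

def pre_annotate(sentence: str) -> int:
    """Ask GPT-4 whether the sentence describes the factor; return 1 (yes) or 0 (no)."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT.format(sentence=sentence)}],
        temperature=0,
    )
    answer = resp.choices[0].message.content.strip().upper()
    return 1 if answer.startswith("YES") else 0

# Hypothetical data: sentences plus gold-standard labels from human annotators.
sentences = [
    "The plaintiff presented the product at a trade show.",
    "The court denied the motion to dismiss.",
]
gold_labels = [1, 0]

gpt_labels = [pre_annotate(s) for s in sentences]

# Agreement between the GPT pre-annotations and the gold standard;
# the same measure can compare student annotations against either set.
print("Cohen's kappa:", cohen_kappa_score(gold_labels, gpt_labels))
```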
Open Access
Hybrid Gold
Repository Citation
Gray, M., Savelka, J., Oliver, W., & Ashley, K. (2023). Can GPT Alleviate the Burden of Annotation?. Frontiers in Artificial Intelligence and Applications, 379, 157-166. https://doi.org/10.3233/FAIA230961