Abstract
Sequence-to-sequence models have proven highly successful at learning morphological inflection from examples, as the series of SIGMORPHON/CoNLL shared tasks has shown. It is usually assumed, however, that a linguist working with inflectional examples could in principle develop a gold-standard morphological analyzer and generator that would surpass a trained neural network model in prediction accuracy, although this may require significant amounts of human labor. In this paper, we discuss an experiment in which a group of people with some linguistic training developed 25+ grammars as part of the shared task, and we weigh the cost/benefit ratio of developing grammars by hand. We also present tools that can help linguists triage difficult, complex morphophonological phenomena within a language and hypothesize inflectional class membership. We conclude that a significant development effort by trained linguists to analyze and model morphophonological patterns is required in order to surpass the accuracy of neural models.
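For context, the inflection task described above maps a lemma plus a morphosyntactic feature bundle to an inflected form. The sketch below is not from the paper; it assumes the tab-separated lemma/form/features layout used in UniMorph-style shared task data and shows how such examples can be read as source/target string pairs for a sequence-to-sequence learner. The file name `fin.trn` and the character-level formatting are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's code): read UniMorph-style
# inflection data with lines of the form "lemma<TAB>form<TAB>feat;feat;..."
# and turn each example into a source/target string pair.

from typing import List, Tuple


def load_inflection_pairs(path: str) -> List[Tuple[str, str]]:
    """Return (source, target) pairs: source = lemma characters + feature tags."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            lemma, form, feats = line.split("\t")
            # Source: lemma spelled out character by character, followed by
            # the morphosyntactic features as atomic symbols.
            source = " ".join(lemma) + " " + " ".join(feats.split(";"))
            # Target: the inflected form, character by character.
            target = " ".join(form)
            pairs.append((source, target))
    return pairs


if __name__ == "__main__":
    # e.g. a Finnish training file with lines like "talo\ttalossa\tN;IN+ESS;SG"
    for src, tgt in load_inflection_pairs("fin.trn")[:3]:
        print(src, "->", tgt)
```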
| Original language | English |
| --- | --- |
| Pages | 162–170 |
| DOIs | |
| Publication status | Published - 2020 |
| Event | 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, Seattle, USA, Online. Duration: 1 Jan 2020 → … |
Conference

| Conference | 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology |
| --- | --- |
| Period | 1/01/20 → … |