This online service is a demo of the XQuery library module "i-agldt", which computes inter-annotator agreement for two annotators, both as percentage agreement and as Cohen's κ, as described in Artstein and Poesio (2008).
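
For reference, Cohen's κ is defined as κ = (po - pe) / (1 - pe), where po is the observed agreement and pe the agreement expected by chance (Cohen 1960). The following minimal XQuery sketch computes both metrics for two sequences of labels, one item per annotated token; it is an illustration only, not the actual i-agldt API:

    declare function local:agreement($a as xs:string*, $b as xs:string*) as map(*) {
      let $n  := count($a)
      (: observed agreement: share of tokens both annotators label identically :)
      let $po := sum(for-each-pair($a, $b,
                   function($x, $y) { if ($x eq $y) then 1 else 0 })) div $n
      (: chance agreement: per label, the product of the two annotators'
         label proportions, summed over all labels :)
      let $pe := sum(
        for $label in distinct-values(($a, $b))
        return (count($a[. eq $label]) div $n) * (count($b[. eq $label]) div $n)
      )
      return map { 'percentage': $po, 'kappa': ($po - $pe) div (1 - $pe) }
    };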

A peculiarity of this library is that it not only computes inter-annotator agreement but also displays the differences between sentences, with links to the Arethusa environment, where such annotations are performed. This allows one to inspect the linguistic trees and, for the author of the annotation, to modify them easily, making the creation of high-quality annotations easier and faster.

The results of the algorithm have been checked against those returned by the Python module nltk.metrics.agreement and by the R function kappa2() from the irr package.

Currently, only small documents (50 sentences or fewer) can be compared online. To compare longer documents, please download the library module and perform the computation locally by invoking the function lp:i-agldt().
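
A local invocation might look like the sketch below; note that the namespace URI, the file name, and the arguments passed to lp:i-agldt() are assumptions made for illustration, as the actual signature is documented in the module itself:

    import module namespace lp = "http://example.org/i-agldt"  (: namespace URI assumed :)
      at "i-agldt.xqm";                                         (: file name assumed :)

    lp:i-agldt(doc("annotator1.xml"), doc("annotator2.xml"))    (: arguments assumed :)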

When you compare two annotations, note that:

The system compares annotations on three layers: morphology, syntax, and (where available) semantics. For syntax, three pairs of values are provided (Ragheb and Dickinson 2013; Kübler et al. 2009).

For each of these measures, two values are calculated: simple percentage agreement and Cohen's κ, which takes agreement by chance into account (Cohen 1960).
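
As a worked example with the local:agreement sketch above, suppose two annotators agree on four of five case labels:

    local:agreement(("NOM", "ACC", "NOM", "GEN", "NOM"),
                    ("NOM", "ACC", "NOM", "GEN", "ACC"))
    (: po = 4 div 5 = 0.8
       pe = 0.6 * 0.4 + 0.2 * 0.4 + 0.2 * 0.2 = 0.36
       kappa = (0.8 - 0.36) div (1 - 0.36) = 0.6875 :)

Percentage agreement here is 0.8, while κ is only about 0.69, because part of the observed agreement is expected by chance.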

References

Artstein, Ron, and Massimo Poesio. 2008. Inter-Coder Agreement for Computational Linguistics. Computational Linguistics 34(4):555–596.
Cohen, Jacob. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement 20(1):37–46.
Kübler, Sandra, Ryan McDonald, and Joakim Nivre. 2009. Dependency Parsing. Morgan & Claypool Publishers.
Ragheb, Marwa, and Markus Dickinson. 2013. Inter-annotator Agreement for Dependency Annotation of Learner Language. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, 169–179.