Inter-annotator Agreement (IAA)
Data scientists have long used inter-annotator agreement to measure how consistently multiple annotators make the same annotation decision for a given label category or class. Now, this information can be found directly in Datasaur's dashboards. This calculation helps you gauge the clarity of your labeling guidelines and the reproducibility of your results. Each IAA score falls into one of three categories:
- Discard will be presented in red.
- Tentative will be presented in orange.
- Good will be presented in green.
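Datasaur does not document the exact agreement formula here, but a common pairwise IAA metric is Cohen's kappa, which corrects raw agreement for chance. The sketch below is an illustration only: the label data is made up, and the `tentative`/`good` cutoffs are assumed thresholds, not Datasaur's actual ones.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

def category(kappa, tentative=0.4, good=0.75):
    # Illustrative thresholds only -- Datasaur's cutoffs may differ.
    if kappa >= good:
        return "Good"
    if kappa >= tentative:
        return "Tentative"
    return "Discard"

a = ["PER", "ORG", "LOC", "PER", "ORG"]
b = ["PER", "ORG", "LOC", "ORG", "ORG"]
k = cohens_kappa(a, b)
print(round(k, 3), category(k))  # -> 0.688 Tentative
```

Here the two annotators agree on 4 of 5 items (0.8 raw agreement), but after the chance correction the kappa drops to about 0.69, which the assumed thresholds classify as Tentative.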
✍ Inter-Annotator Agreement only applies to completed projects. Projects that are still in progress in your team dashboard are not included in the IAA calculation until they are marked as completed.
You can view the Inter-annotator Agreement page by navigating to Analytics in the left sidebar. Please note that only Team Admins can access this page.
You can view the IAA for specific projects by adding the project filter.
You can also set the label set filter for the IAA. When filtering by label set, the IAA calculation will include all label sets that contain the same label set items. See the example below for a detailed explanation.
Let's say there are five completed projects in your dashboard. Each project has different label sets.
Some projects have label sets with exactly the same label set items (e.g., NER for News and Top 5 Stories). This determines how many label sets are used for Inter-Annotator Agreement.
We will combine all projects with the same label set items into one label set. For this sample case, the label sets below will be shown in the Label Sets field.
- NER for News Label Set
- Sentiment Analysis Label Set
- English POS Label Set
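The grouping above can be sketched as follows. The project names and label set items here are hypothetical stand-ins for the five sample projects; the idea is that label sets containing exactly the same items are merged into one entry for the filter.

```python
# Hypothetical data: each completed project paired with its label set items.
projects = [
    ("NER for News", ("PER", "ORG", "LOC")),
    ("Top 5 Stories", ("PER", "ORG", "LOC")),
    ("Movie Sentiment", ("POS", "NEG", "NEU")),
    ("Review Sentiment", ("POS", "NEG", "NEU")),
    ("English POS", ("NOUN", "VERB", "ADJ")),
]

# Group projects whose label sets contain exactly the same items;
# each group is treated as a single label set in the filter.
groups = {}
for name, items in projects:
    groups.setdefault(frozenset(items), []).append(name)

for items, names in groups.items():
    print(sorted(items), "->", names)
```

Five projects collapse into three groups, matching the three label sets shown in the Label Sets field.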
Now, we can filter by label set.