Validating Untrained Human Annotations Using Extreme Learning Machines

Thomas Forss, Leonardo Espinosa-Leal*, Anton Akusok, Amaury Lendasse, Kaj-Mikael Björk

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


We present a two-step machine learning process for validating and improving annotations made by untrained humans. The initial validation algorithm is trained on a high-quality annotated subset of the data that the untrained humans are asked to annotate. We then use the trained algorithm to predict labels for further samples that are also annotated by the humans, and test several approaches for joining the algorithmic annotations with the human annotations, with the goal of outperforming either approach used individually. We show that combining human annotations with algorithmic predictions can improve annotation accuracy.
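The two-step process described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes a basic Extreme Learning Machine (random sigmoid hidden layer with a least-squares readout), simulates untrained-human labels as noisy ground truth, and joins the two sources by simply averaging the ELM score with the human label; the synthetic data, neuron count, and noise rate are all arbitrary choices for the sketch, and the paper evaluates several combination strategies rather than this one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task: two Gaussian clusters (stand-in for the real data).
n, d = 400, 5
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(1, 1, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))
idx = rng.permutation(n)
X, y = X[idx], y[idx]

# Step 1: train the validation model on a high-quality annotated subset.
X_train, y_train = X[:200], y[:200]
X_rest, y_rest = X[200:], y[200:]

# Simulated untrained-human annotations on the rest: correct ~75% of the time.
human = np.where(rng.random(len(y_rest)) < 0.75, y_rest, 1 - y_rest)

# Extreme Learning Machine: random hidden layer, least-squares output weights.
L = 50  # number of hidden neurons (arbitrary for this sketch)
W = rng.normal(size=(d, L))
b = rng.normal(size=L)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid activation

H = hidden(X_train)
beta, *_ = np.linalg.lstsq(H, y_train, rcond=None)  # readout weights

# Step 2: score the remaining (human-annotated) samples with the ELM.
score = np.clip(hidden(X_rest) @ beta, 0.0, 1.0)

# One simple joining rule: average ELM score and human label, threshold at 0.5.
combined = ((score + human) / 2 > 0.5).astype(int)

acc_human = (human == y_rest).mean()
acc_elm = ((score > 0.5).astype(int) == y_rest).mean()
acc_comb = (combined == y_rest).mean()
print(f"human {acc_human:.2f}  elm {acc_elm:.2f}  combined {acc_comb:.2f}")
```

The averaging rule means the combined label flips a human annotation only when the ELM disagrees confidently, which is one intuitive way the algorithmic predictions can correct human mistakes without overriding agreement.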
Original language: English
Title of host publication: Proceedings of ELM2019
Editors: Jiuwen Cao, Chi Man Vong, Yoan Miche, Amaury Lendasse
Number of pages: 10
Place of Publication: Cham
Publication date: 2021
ISBN (Print): 978-3-030-58988-2
ISBN (Electronic): 978-3-030-58989-9
Publication status: Published - 2021
MoE publication type: A4 Article in conference proceedings

Publication series

Name: Proceedings in Adaptation, Learning and Optimization
ISSN (Print): 2363-6084
ISSN (Electronic): 2363-6092


  • 113 Computer and information sciences


