Validating Untrained Human Annotations Using Extreme Learning Machines

Thomas Forss, Leonardo Espinosa-Leal*, Anton Akusok, Amaury Lendasse, Kaj-Mikael Björk

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review


We present a process for validating and improving annotations made by untrained humans, using a two-step machine learning algorithm. The initial validation algorithm is trained on a high-quality annotated subset of the data that the untrained humans are asked to annotate. We then use the machine learning algorithm to predict labels for other samples that the humans have also annotated, and we test several approaches for joining the algorithmic annotations with the human annotations, with the aim of outperforming either approach used individually. We show that combining human annotations with algorithmic predictions can improve the accuracy of the annotations.
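The two-step process above can be sketched in code. The following is a minimal illustration, not the authors' implementation: a basic Extreme Learning Machine (random fixed hidden layer, output weights solved by least squares) is trained on a trusted subset, and its predictions are then joined with untrained-human labels. The confidence-based joining rule and its threshold are illustrative assumptions; the chapter compares several such approaches.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Fit a basic ELM: random hidden weights, least-squares output weights."""
    rng = np.random.default_rng(seed)
    # Hidden-layer weights and biases are random and never trained --
    # the defining idea of the Extreme Learning Machine.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    # Only the output weights are learned, in closed form.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, model):
    """Continuous scores in roughly [0, 1] for binary annotation validation."""
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def combine_annotations(human_labels, elm_scores, threshold=0.75):
    """One illustrative joining rule (an assumption, not the paper's exact rule):
    keep the human label unless the validator is confidently against it."""
    machine_labels = (elm_scores > 0.5).astype(int)
    confident = np.abs(elm_scores - 0.5) > (threshold - 0.5)
    return np.where(confident, machine_labels, human_labels)
```

A typical use would be: train `elm_train` on the high-quality annotated subset, score the remaining samples with `elm_predict`, and pass the untrained-human labels plus the scores to `combine_annotations` to obtain the merged annotations.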
Original language: English
Title of host publication: Proceedings of ELM2019
Editors: Jiuwen Cao, Chi Man Vong, Yoan Miche, Amaury Lendasse
Publication date: 12.09.2020
ISBN (Print): 978-3-030-58988-2
ISBN (Electronic): 978-3-030-58989-9
Publication status: Published - 12.09.2020
MoE publication type: A3 Book chapter

Publication series

Name: Proceedings of ELM2019
ISSN (Print): 2363-6084
ISSN (Electronic): 2363-6092


Cite this

Forss, T., Espinosa-Leal, L., Akusok, A., Lendasse, A., & Björk, K-M. (2020). Validating Untrained Human Annotations Using Extreme Learning Machines. In J. Cao, C. M. Vong, Y. Miche, & A. Lendasse (Eds.), Proceedings of ELM2019 (pp. 89-98). (Proceedings of ELM2019; Vol. 14). Springer.