Evaluating Confidence Intervals for ELM Predictions

Anton Akusok, Yoan Miche, Kaj-Mikael Björk, Rui Nian, Paula Lauren, Amaury Lendasse

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

This paper proposes a way of providing more useful and interpretable results for ELM models by adding confidence intervals to predictions. Unlike the usual statistical approach based on Mean Squared Error (MSE), which evaluates the average performance of an ELM model over the whole dataset, the proposed method computes a particular confidence interval for each data sample. A confidence interval for each particular sample makes ELM predictions more intuitive to interpret and an ELM model more applicable in practice under task-specific requirements. The method shows good results on both a toy dataset and a real skin segmentation dataset. On the toy dataset, the predicted confidence intervals accurately represent variable-magnitude noise. On the real dataset, classification with a confidence interval improves precision at the cost of recall.
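The abstract does not spell out how the per-sample intervals are constructed. As an illustration only, the sketch below estimates per-sample prediction intervals for an ELM regressor from the spread of a bootstrap ensemble; the ensemble approach, function names, and parameters are assumptions for demonstration, not the authors' method.

```python
import numpy as np

def train_elm(X, y, n_hidden=50, rng=None):
    """Train a single ELM: random hidden layer + least-squares output weights."""
    rng = np.random.default_rng() if rng is None else rng
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights (least squares)
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def elm_ensemble_intervals(X_train, y_train, X_test, n_models=30, z=1.96):
    """Per-sample prediction intervals from the spread of a bootstrap ELM ensemble.

    Note: this is a generic illustration, not the interval method of the paper.
    """
    rng = np.random.default_rng(0)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), len(X_train))   # bootstrap resample
        model = train_elm(X_train[idx], y_train[idx], rng=rng)
        preds.append(predict_elm(model, X_test))
    preds = np.stack(preds)                 # shape: (n_models, n_test)
    mean = preds.mean(axis=0)
    half_width = z * preds.std(axis=0)      # per-sample interval half-width
    return mean, mean - half_width, mean + half_width

# Toy data with input-dependent (variable-magnitude) noise
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.05 + 0.2 * np.abs(X[:, 0]))
mean, lo, hi = elm_ensemble_intervals(X[:400], y[:400], X[400:])
```

In such a setup, wider intervals flag test samples where the model is less certain, which is the kind of per-sample information the abstract argues is more actionable than a single dataset-wide MSE figure.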
Original language: English
Title of host publication: Proceedings of ELM-2015
Number of pages: 10
Volume: 2
Place of publication: Cham
Publisher: Springer
Publication date: 03.01.2016
Pages: 413-422
ISBN (Print): 978-3-319-28372-2
ISBN (Electronic): 978-3-319-28373-9
DOIs
Publication status: Published - 03.01.2016
MoE publication type: A4 Article in conference proceedings

Publication series

Name: Proceedings in Adaptation, Learning and Optimization (PALO)
Volume: 7

Keywords

  • 512 Business and Management
  • Extreme learning machines
  • Confidence
  • Confidence interval
  • Regression
  • Image segmentation
  • Skin segmentation
  • Classification
  • Interpretability
  • Big data
