Evaluating Confidence Intervals for ELM Predictions

Anton Akusok, Yoan Miche, Kaj-Mikael Björk, Rui Nian, Paula Lauren, Amaury Lendasse

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


This paper proposes a way of providing more useful and interpretable results for ELM models by adding confidence intervals to predictions. Unlike the usual statistical approach with Mean Squared Error (MSE), which evaluates the average performance of an ELM model over the whole dataset, the proposed method computes an individual confidence interval for each data sample. A per-sample confidence interval makes ELM predictions more intuitive to interpret, and an ELM model more applicable in practice under task-specific requirements. The method shows good results on both a toy dataset and a real skin segmentation dataset. On the toy dataset, the predicted confidence intervals accurately represent noise of varying magnitude. On the real dataset, classification with a confidence interval improves precision at the cost of recall.
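As a rough illustration of the idea (not necessarily the paper's exact method), per-sample intervals for an ELM regressor can be sketched by applying the standard least-squares prediction-variance formula to the random hidden-layer features. All variable names and the choice of a tanh hidden layer below are assumptions for illustration:

```python
# Sketch: ELM regression with per-sample confidence intervals via the
# OLS prediction-variance formula applied to the hidden-layer features.
# This is an illustrative assumption, not the paper's published algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Toy data with input-dependent noise magnitude, as in the paper's toy setup.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05 + 0.1 * np.abs(X[:, 0]))

# ELM hidden layer: fixed random weights and biases, tanh activation.
n_hidden = 20
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                      # hidden features, shape (200, 20)

# Output weights solved by least squares (the standard ELM training step).
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ beta

# Per-sample prediction variance: var(y_hat_i) = s^2 * h_i^T (H^T H)^-1 h_i,
# with s^2 the residual variance estimate.
dof = len(y) - n_hidden
sigma2 = np.sum((y - y_hat) ** 2) / dof
HtH_inv = np.linalg.pinv(H.T @ H)
var = sigma2 * np.einsum("ij,jk,ik->i", H, HtH_inv, H)

half_width = 1.96 * np.sqrt(var)            # approx. 95% interval per sample
lower, upper = y_hat - half_width, y_hat + half_width
```

Each sample thus gets its own interval `[lower, upper]`, which is the interpretability gain the abstract describes over a single dataset-wide MSE figure.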
Original language: English
Title of host publication: Proceedings of ELM-2015
Number of pages: 10
Place of publication: Cham
Publication date: 03.01.2016
ISBN (Print): 978-3-319-28372-2
ISBN (Electronic): 978-3-319-28373-9
Publication status: Published - 03.01.2016
MoE publication type: A4 Article in conference proceedings

Publication series

Name: Proceedings in Adaptation, Learning and Optimization (PALO)


  • 512 Business and Management
  • Extreme learning machines
  • Confidence
  • Confidence interval
  • Regression
  • Image segmentation
  • Skin segmentation
  • Classification
  • Interpretability
  • Big data

