Incremental ELMVIS for Unsupervised Learning

Anton Akusok, Emil Eirola, Yoan Miche, Ian Oliver, Kaj-Mikael Björk, Andrey Gritsenko, Stephen Baek, Amaury Lendasse

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


An incremental version of the ELMVIS+ method is proposed in this paper. It iteratively selects the few best-fitting data samples from a large pool and adds them to the model. The method retains the high speed of ELMVIS+ while allowing much larger sample pools thanks to lower memory requirements. The extension is useful for reaching a better local optimum with the greedy optimization of ELMVIS, and the data structure can be specified in semi-supervised optimization. The major new application of incremental ELMVIS is not visualization but general dataset processing. The method is capable of learning dependencies from unorganized, unsupervised data: either reconstructing a shuffled dataset, or learning dependencies in a complex high-dimensional space. The results are interesting and promising, although there is room for improvement.
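The core idea of the abstract, greedily picking the pool sample that best fits the current ELM model and growing the model one sample at a time, can be sketched as follows. This is an illustrative reading, not the paper's implementation: the random tanh hidden layer, the least-squares readout, and the "closest to the model's prediction" selection rule are standard ELM components assumed here for concreteness.

```python
import numpy as np

def elm_hidden(V, W, b):
    # Fixed random tanh hidden layer: the standard ELM nonlinear transform.
    return np.tanh(V @ W + b)

def incremental_elmvis(V, pool, n_hidden=20, seed=0):
    """Greedy incremental assignment of pool samples to visualization
    points V (a sketch of the idea, not the paper's exact algorithm).

    At each step, fit a least-squares ELM readout on the samples placed
    so far, predict the data vector for the next visualization point,
    and take the closest remaining sample from the pool.
    """
    rng = np.random.default_rng(seed)
    d_vis = V.shape[1]
    W = rng.standard_normal((d_vis, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = elm_hidden(V, W, b)

    remaining = list(range(len(pool)))
    placed = [remaining.pop(0)]            # seed with an arbitrary sample
    for k in range(1, len(V)):
        # Least-squares readout mapping hidden features -> placed samples.
        B, *_ = np.linalg.lstsq(H[:k], pool[placed], rcond=None)
        pred = H[k] @ B                    # model's guess for the next point
        dists = np.linalg.norm(pool[remaining] - pred, axis=1)
        placed.append(remaining.pop(int(np.argmin(dists))))
    return placed

# Toy usage: try to reorder a shuffled 2-D curve along a 1-D visualization axis.
V = np.linspace(-1.0, 1.0, 30).reshape(-1, 1)
X = np.column_stack([np.linspace(0.0, 1.0, 30), np.linspace(1.0, 0.0, 30)])
perm = np.random.default_rng(1).permutation(30)
order = incremental_elmvis(V, X[perm])
```

Because each step only refits a small linear readout and scans the remaining pool, memory stays proportional to the placed samples plus the pool itself, which is consistent with the lower memory requirements the abstract claims for the incremental variant.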
Original language: English
Title of host publication: Proceedings of ELM-2016
Number of pages: 11
Place of Publication: Cham
Publication date: 26.05.2017
ISBN (Print): 978-3-319-57420-2
ISBN (Electronic): 978-3-319-57421-9
Publication status: Published - 26.05.2017
MoE publication type: A4 Article in conference proceedings

Publication series

Name: Proceedings in Adaptation, Learning and Optimization (PALO)


Keywords

  • 512 Business and Management
  • ELM
  • Visualization
  • Assignment problem
  • Unsupervised learning
  • Semi-supervised learning


