THE LOCAL STACK
(Local) AI  ·  Leuven

ESANN 2026

34th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning
22–24 April 2026  ·  UCLouvain & KU Leuven  ·  Bruges, Belgium  ·  Hybrid (in-person + online)

About the Conference

34th edition  ·  since 1993

ESANN is one of Europe’s longest-running academic conferences on machine learning and neural networks, held every year since 1993. What makes it distinct from the larger international venues (NeurIPS, ICML, ICLR) is its format: a single track over three full days, with every speaker in the same room. No parallel sessions, and no missing a talk because something else was scheduled at the same time. Everyone hears the same programme.

The conference draws researchers from across Europe and beyond working on the full spectrum of machine learning: from the mathematical foundations of neural networks and learning theory, through deep learning and generative models, to applications in healthcare, language, time series, and scientific computing. Papers are peer-reviewed and published through a process managed by UCLouvain, with KU Leuven as a long-standing institutional partner.

The 2026 edition is held at the Crowne Plaza Bruges, in the city centre, and runs in hybrid mode, so remote participation is possible. For researchers in the Leuven area, Bruges is about an hour and a half away by train. The conference dinner takes place at a Bruges brewery, which has become something of a tradition.

What Gets Presented

ESANN accepts around 100 papers across a recurring set of research themes. Invited sessions, organised by researchers who lead a subfield, sit alongside contributed papers and poster spotlights. Themes that recur across recent editions:

Neural Network Theory & Learning

Generalisation bounds, learning dynamics, approximation theory, prototype-based learning (LVQ), reservoir computing, spiking neural networks, continual and incremental learning.

Deep Learning & Representation

Self-supervised learning, contrastive methods, vision transformers, generative models (VAEs, diffusion), graph neural networks, foundation models, model compression.

Unsupervised Learning & Dimensionality Reduction

Clustering, spectral methods, manifold learning (UMAP, t-SNE, MDS), kernel methods, latent variable models, anomaly detection.

Explainable AI & Trustworthy ML

Post-hoc explanations, concept-based interpretability, SHAP, fairness-aware learning, privacy, adversarial robustness, uncertainty quantification.

Time Series, Signals & Dynamical Systems

Recurrent networks, state-space models, neural ODEs, forecasting, biomedical signals (EEG, ECG), physics-informed networks, reservoir computing.

Applications: NLP, Vision, Healthcare

Language models, semantic search, medical imaging, sports analytics, cognitive modelling, industrial inspection, scientific discovery with ML.

Format note: ESANN runs as invited sessions (2–3 hour thematic blocks organised by researchers in that subfield, mixing overview talks and contributed papers) plus individual contributed papers and poster spotlights. The single-track format means every attendee follows the same programme. 100+ speakers across 3 days.

Organiser

Michel Verleysen

Professor — ICTEAM Institute, UCLouvain (Université catholique de Louvain)

Michel Verleysen is the driving force behind ESANN: he has chaired the conference for most of its 34 editions and manages the editorial process that has kept its review standards steady. His research is in statistical machine learning, with particular expertise in dimensionality reduction: the problem of taking high-dimensional data (images, gene expression profiles, sensor arrays) and finding lower-dimensional representations that preserve meaningful structure.

His work on manifold learning and information-theoretic approaches to dimensionality reduction has shaped how the field thinks about methods like t-SNE and UMAP. That ESANN has a disproportionately strong showing of unsupervised learning and dimensionality reduction papers is no coincidence; it reflects the organiser’s research community. His group at UCLouvain (ICTEAM) has produced some of the most cited work on the theoretical analysis of these methods.

Dimensionality Reduction  ·  Manifold Learning  ·  Statistical ML  ·  Neural Networks  ·  Conference Chair

What These Topics Are About

Artificial Neural Networks — Where the Maths Lives

The “artificial neural networks” in ESANN’s title is not marketing language for deep learning; it means the actual mathematical study of how neural network architectures learn and generalise. This includes questions like: why do overparameterised networks generalise at all? What determines whether a network merely memorises a dataset or learns something general? How should gradients flow through deep architectures without vanishing or exploding?

This theoretical work might seem distant from practical applications, but it underlies most of the design decisions in modern deep learning. The fact that transformers use layer normalisation, that residual connections work the way they do, that certain activation functions are preferred over others — all of this traces back to research on the mathematical properties of learning in networks. ESANN publishes a lot of this foundational work.
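
To make the gradient-flow point concrete, here is a minimal NumPy sketch (ours, not taken from any ESANN paper): the backward signal through a plain stack of linear layers shrinks geometrically with depth, while the identity path of a residual connection keeps it alive. The depth, width, and layer scale of 0.5 are arbitrary choices made to show the effect clearly.

```python
# Minimal illustration of vanishing gradients vs. residual connections.
# The gradient flowing back through a stack is a product of layer Jacobians:
# for y = Wx the Jacobian is W; for a residual block y = x + Wx it is I + W.
import numpy as np

rng = np.random.default_rng(0)
depth, width = 50, 64

# Random layers scaled so each one contracts a typical vector by ~0.5.
Ws = [0.5 * rng.standard_normal((width, width)) / np.sqrt(width)
      for _ in range(depth)]

g_plain = np.ones(width)   # backward signal through plain layers
g_resid = np.ones(width)   # backward signal through residual blocks
for W in Ws:
    g_plain = W.T @ g_plain             # shrinks by ~0.5 per layer
    g_resid = g_resid + W.T @ g_resid   # identity path preserves the signal

print(f"plain stack:    |grad| = {np.linalg.norm(g_plain):.1e}")
print(f"residual stack: |grad| = {np.linalg.norm(g_resid):.1e}")
```

After 50 layers the plain gradient is numerically zero while the residual one is still healthy; in fact it grows modestly, which is part of why real residual networks pair the skip path with normalisation.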

Dimensionality Reduction — Seeing High-Dimensional Data

Most interesting data lives in very high-dimensional spaces — a genome, a document embedding, a medical scan represented as thousands of pixel values. Humans can only think in two or three dimensions. Dimensionality reduction is the set of techniques for projecting high-dimensional data into a lower-dimensional space while preserving the structure that matters: clusters, distances, topological relationships.

t-SNE (t-distributed stochastic neighbour embedding) and UMAP are the methods you see most in data science practice. ESANN has published influential work on both their strengths and their failure modes — particularly the ways they can distort global structure while preserving local neighbourhood relationships. Knowing what your visualisation is lying about is as important as knowing what it reveals.
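
Both sides of that trade-off can be measured. Below is a hedged sketch using scikit-learn on synthetic data; the cluster offsets and parameters are arbitrary choices for illustration, not taken from any published result.

```python
# t-SNE preserves local neighbourhoods (high trustworthiness) but tends to
# distort global geometry: here cluster 2 sits 10x further from cluster 0
# than cluster 1 does, a ratio the 2-D map rarely reproduces.
import numpy as np
from sklearn.manifold import TSNE, trustworthiness

rng = np.random.default_rng(0)
n, dim = 100, 50
e0 = np.eye(1, dim)[0]                 # unit vector along axis 0
offsets = [0.0, 10.0, 100.0]           # true cluster separations
X = np.vstack([rng.standard_normal((n, dim)) + off * e0 for off in offsets])

Z = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Local structure: do nearest neighbours in X stay neighbours in Z?
print("trustworthiness:", round(trustworthiness(X, Z, n_neighbors=10), 3))

# Global structure: compare embedded centroid distances to the true 10:1 ratio.
c = [Z[i * n:(i + 1) * n].mean(axis=0) for i in range(3)]
ratio = np.linalg.norm(c[2] - c[0]) / np.linalg.norm(c[1] - c[0])
print(f"embedded d(0,2)/d(0,1) = {ratio:.1f}   (ratio in the original data: 10.0)")
```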

Continual Learning — What Happens After Training

Standard machine learning assumes you train a model once on a fixed dataset, then deploy it. The world does not cooperate with this assumption. Distributions shift, new classes appear, old patterns change. A model trained on last year’s data without updating will drift from reality; a model naively updated on new data will often catastrophically forget what it previously learned. This is called catastrophic forgetting.

Continual learning (also called lifelong learning or incremental learning) is the research area that tries to solve this: building models that can learn new things without overwriting what they already know, adapting to new data streams without storing everything forever. ESANN has consistently been a strong venue for this work, particularly for methods that draw on theoretical insights about what is actually being overwritten and why.
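
The failure mode itself is easy to reproduce. A minimal sketch with scikit-learn’s digits dataset (our illustration, not a method from the conference): train a linear classifier on digits 0–4, then keep updating it only on digits 5–9, and watch performance on the first task collapse.

```python
# Catastrophic forgetting in miniature: naive sequential training on a new
# task overwrites what a model learned on the old one.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier

X, y = load_digits(return_X_y=True)
task_a, task_b = y < 5, y >= 5
clf = SGDClassifier(random_state=0)

# Task A (digits 0-4). partial_fit needs the full label set up front.
for _ in range(20):
    clf.partial_fit(X[task_a], y[task_a], classes=np.arange(10))
print("task-A accuracy after task A:", round(clf.score(X[task_a], y[task_a]), 3))

# Task B (digits 5-9), with no rehearsal of task-A examples.
for _ in range(20):
    clf.partial_fit(X[task_b], y[task_b])
print("task-A accuracy after task B:", round(clf.score(X[task_a], y[task_a]), 3))
```

Continual-learning methods (rehearsal buffers, regularisation of important weights, parameter isolation) are, in different ways, strategies for keeping that second number high.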

Explainable AI — When the Model Has to Justify Itself

A neural network that classifies a medical image as malignant gives you an answer, not a reason. In many application domains — medicine, credit, hiring, criminal justice — the reason matters as much as the answer. Regulators increasingly require explanations. Practitioners need to debug models. Researchers need to understand whether correct answers reflect genuine learned patterns or spurious correlations.

Explainable AI (XAI) is the field developing tools for this: methods that highlight which input features drove a decision (saliency maps, SHAP, LIME), methods that learn human-interpretable intermediate concepts, and methods that produce inherently interpretable models rather than post-hoc explanations of black-box ones. The debate between these approaches — is a post-hoc explanation of a black box actually an explanation, or just another approximation? — is one of the more interesting ongoing arguments in applied ML research.
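
For a feel of the post-hoc family, the simplest member is permutation importance: shuffle one feature at a time and measure how much held-out performance drops. A sketch with scikit-learn, standing in for the heavier attribution tools named above (SHAP, LIME); the dataset and model here are arbitrary choices.

```python
# Post-hoc attribution by permutation: features whose shuffling hurts
# held-out accuracy the most are the ones the model actually relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Explain behaviour on held-out data: we care about learned patterns,
# not memorised training examples.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]:<25} accuracy drop: {result.importances_mean[i]:.3f}")
```

Note that this explains the model, not the world: a feature can matter to the model for entirely spurious reasons, which is exactly the failure XAI methods are meant to surface.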

Attend

Dates Wednesday 22 – Friday 24 April 2026
Venue Crowne Plaza Bruges, Burg 10, 8000 Brugge
Format Hybrid — in-person & online
Scale 100+ speakers  ·  3 full days  ·  single track
From Leuven ±1 h 30 by train
Conference website ›

Organised by

UCLouvain — ICTEAM Institute

Université catholique de Louvain, through its ICTEAM (Institute of Information and Communication Technologies, Electronics and Applied Mathematics). Home institution of conference chair Michel Verleysen, and the editorial home of ESANN since its founding. Based in Louvain-la-Neuve, Belgium.

uclouvain.be

KU Leuven

KU Leuven is a founding sponsor and long-time institutional partner of ESANN, with multiple KU Leuven researchers on the steering committee, including Johan Suykens (ESAT-STADIUS) and Marc Van Hulle. The Machine Learning Group at KU Leuven is among the most active in the ESANN research community.

Leuven.AI