
Belgium AI, ML & Computer Vision Meetup

April 2026 — Async Agents, Edge AI, and Enterprise Scale
2 April 2026  ·  18:00 – 20:00 CEST  ·  Belgium AI ML & CV Meetup  ·  Online  ·  Free

About the Event

The Belgium AI, ML & Computer Vision Meetup is one of the most active applied AI communities in Belgium and beyond, part of an international network of 48 groups whose events draw over 450 attendees per edition. The April 2026 session brings together four speakers from industry and research for a sharp two-hour programme on the problems practitioners are actually hitting right now.

Four talks, four angles: building AI agents that run for hours without breaking, shipping visual AI to hardware at the edge, cleaning up the evaluation datasets everyone trusts but no one audits, and wiring together multi-agent systems that real organisations actually rely on. If you work somewhere between a Jupyter notebook and production, this one is worth the two hours.

Hosted by Jimmy Guerrero and organised by the Voxel51 team in collaboration with the Belgium chapter of the meetup network.

Programme

  • 18:00 Welcome & introductions — Jimmy Guerrero
  • 18:10 Async Agents in Production: Failure Modes and Fixes — Meryem Arik (Doubleword)
  • 18:35 Visual AI at the Edge: Beyond the Model — David Moser (Orella Vision)
  • 19:00 Sanitizing Evaluation Datasets: From Detection to Correction — Nick Lotz (Voxel51)
  • 19:25 Building Enterprise Agentic Systems That Scale — Aman Sardana (Cisco)
  • 19:50 Q&A & wrap-up

Speakers

▸  Async Agents in Production: Failure Modes and Fixes

Meryem Arik

Co-founder & CEO — Doubleword  ·  Oxford University (Theoretical Physics & Philosophy)

Meryem Arik is the co-founder and CEO of Doubleword, an async inference platform built for background AI workloads — the kind that run overnight, process millions of tokens, and cannot afford to block a user session. Doubleword offers up to 62% cost reduction on LLM inference by trading real-time latency for asynchronous delivery with SLA-backed guarantees.

Her talk focuses on a class of AI systems most teams hit only after their initial agent demo works: long-running, asynchronous agents — deep research bots, browser agents, multi-step workflow executors. These systems fail differently from short-lived agents. Early mistakes compound across dozens of tool calls. Token costs balloon unpredictably through extended reasoning chains. Patterns that work perfectly in a request-response loop become liabilities when the agent runs for 40 minutes.
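
To make that concrete, one defence against compounding errors is to validate every tool result before it enters the agent's context, so the run fails loudly at the broken step rather than poisoning the next five. A minimal sketch in Python; the call_tool stub and per-step validate hooks are illustrative, not Doubleword's API:

```python
import asyncio

async def call_tool(name: str, args: dict) -> dict:
    # Stub: a real agent would dispatch to search, a browser, code execution, etc.
    await asyncio.sleep(0)
    return {"tool": name, "data": args}

class StepValidationError(Exception):
    """Raised when a tool result fails its sanity check."""

async def run_plan(steps: list[dict], max_retries: int = 2) -> list[dict]:
    """Execute a multi-step plan, validating each result before it enters
    the agent's context, so a broken step 3 cannot silently corrupt
    steps 4, 5, and 6."""
    results = []
    for i, step in enumerate(steps):
        for attempt in range(max_retries + 1):
            result = await call_tool(step["tool"], step["args"])
            if step["validate"](result):  # per-step sanity check
                results.append(result)
                break
        else:
            # Fail loudly at the step that broke, not three steps later
            raise StepValidationError(
                f"step {i} ({step['tool']}) failed {max_retries + 1} times"
            )
    return results

# Usage: each step declares what a sane result looks like
steps = [
    {"tool": "search", "args": {"q": "edge AI"}, "validate": lambda r: bool(r["data"])},
    {"tool": "summarise", "args": {"n": 3}, "validate": lambda r: "data" in r},
]
print(asyncio.run(run_plan(steps)))
```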

Meryem studied theoretical physics and philosophy at the University of Oxford. She is a TEDx speaker and has spoken at QCon conferences four times with consistently high ratings. She was named to the Forbes 30 Under 30 list for her work in AI infrastructure.

LLM Inference · Async Agents · AI Infrastructure · Production AI · Cost Optimisation

▸  Visual AI at the Edge: Beyond the Model

David Moser

Co-Founder & Founding Engineer — Orella Vision

David Moser is the co-founder of Orella Vision, where he builds visual AI systems designed for autonomy at the edge — production-grade deployments that run on constrained hardware in real-world environments, not in a data centre. His track record includes safety-critical visual AI in fields where model failure is not an option.

His talk addresses a familiar frustration: a computer vision model that works beautifully in the lab stops working the moment it leaves the building. Edge deployment is not just model compression and quantisation — it is a complete shift in what the problem actually is. The gap between a successful demo and a reliable field system involves sensor variability, hardware constraints, software integration, operational monitoring, and failure modes that no amount of benchmark accuracy can predict.
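
A small example of what "the full system" means in practice: a cheap pre-inference gate that rejects frames outside the conditions the model was trained for, rather than trusting its output on them. A sketch using OpenCV; the thresholds are illustrative and would be calibrated per camera and site:

```python
import cv2
import numpy as np

# Illustrative thresholds; in practice these are calibrated per camera and site
MIN_BRIGHTNESS = 40.0   # mean grey level below this: likely under-exposed
MIN_SHARPNESS = 100.0   # variance of Laplacian below this: likely motion blur

def frame_is_usable(frame: np.ndarray) -> tuple[bool, dict]:
    """Cheap pre-inference gate: reject frames outside the conditions
    the model was trained for, instead of trusting its output on them."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    stats = {
        "brightness": float(gray.mean()),
        "sharpness": float(cv2.Laplacian(gray, cv2.CV_64F).var()),
    }
    ok = stats["brightness"] >= MIN_BRIGHTNESS and stats["sharpness"] >= MIN_SHARPNESS
    return ok, stats

# A dark, flat frame should be rejected
ok, stats = frame_is_usable(np.full((480, 640, 3), 10, dtype=np.uint8))
print(ok, stats)   # False, brightness 10.0, sharpness 0.0
```

Logging those per-frame statistics over time is also a cheap way to catch sensor drift before accuracy quietly degrades.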

David’s goal for attendees: a clear mental model for approaching edge vision projects that accounts for the full system, not just the model accuracy number.

Edge AI · Computer Vision · Visual AI · Autonomy · Embedded Systems

▸  Sanitizing Evaluation Datasets: From Detection to Correction

Nick Lotz

Community Engineer — Voxel51

Nick Lotz works on the community team at Voxel51, the company behind FiftyOne — the open-source toolkit for building high-quality datasets and evaluating computer vision models (10,500+ GitHub stars, Apache-2.0). His focus is open-source infrastructure and helping practitioners get more from their tooling.

His talk tackles a problem everyone acknowledges and almost nobody fixes: evaluation datasets contain label noise. Annotation errors creep into gold-standard test sets through the same human processes that produced the training data, but usually no one audits them because the engineering friction is too high. Models get benchmarked against corrupted ground truth. Leaderboards mislead. Improvements go undetected.

Nick will demonstrate a workflow that bridges the gap between detecting label errors algorithmically and actually fixing them — inspecting discordant labels and correcting them in situ, moving toward a fully trusted end-to-end evaluation pipeline without the manual overhead that normally makes this impractical.
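
For a flavour of the detection half, FiftyOne's brain module can score how likely each ground-truth annotation is to be wrong, using model predictions as a second opinion. A minimal sketch against the bundled quickstart dataset; the correction step happens interactively in the App, which is what the talk goes into:

```python
import fiftyone as fo
import fiftyone.brain as fob
import fiftyone.zoo as foz

# Small demo dataset that ships with both predictions and ground truth
dataset = foz.load_zoo_dataset("quickstart")

# Score how likely each ground-truth annotation is to be wrong,
# using the model's predictions as a second opinion
fob.compute_mistakenness(dataset, "predictions", label_field="ground_truth")

# Surface the most suspicious samples first and open them for review
view = dataset.sort_by("mistakenness", reverse=True)
session = fo.launch_app(view)
```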

Data Quality · Dataset Curation · Model Evaluation · Label Noise · FiftyOne · Open Source

▸  Building Enterprise Agentic Systems That Scale

Aman Sardana

Senior Engineering Architect — Cisco

Aman Sardana leads the design and deployment of enterprise AI systems at Cisco that blend large language models, data infrastructure, and customer experience for high-stakes, real-world problems. He is also an open-source contributor and mentor, focused on moving teams from AI experimentation to reliable agentic applications in production.

His talk is grounded in a system he built and runs: a multi-agent AI platform used daily by over 2,000 Cisco sellers. The path from a convincing demo to something 2,000 people depend on to do their jobs involves a different set of engineering challenges — multi-agent orchestration for genuinely complex workflows, personalisation features that drive actual adoption, and the enterprise foundations (security, auditability, reliability) needed to earn user trust at that scale.
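
The "enterprise foundations" point is concrete: every agent action has to be attributable after the fact. A hypothetical sketch of what that can look like at its simplest (this is illustrative, not Cisco's implementation):

```python
import functools
import json
import time
import uuid

def audited(tool_fn, log_path="agent_audit.jsonl"):
    """Wrap a tool so every invocation lands in an append-only JSONL log,
    attributable to a user, before and after it runs."""
    @functools.wraps(tool_fn)
    def wrapper(user_id: str, **kwargs):
        record = {"id": str(uuid.uuid4()), "user": user_id,
                  "tool": tool_fn.__name__, "args": kwargs}
        with open(log_path, "a") as f:
            f.write(json.dumps({**record, "ts": time.time(), "phase": "start"}) + "\n")
        try:
            result = tool_fn(**kwargs)
            ok = True
            return result
        except Exception:
            ok = False
            raise
        finally:
            with open(log_path, "a") as f:
                f.write(json.dumps({**record, "ts": time.time(),
                                    "phase": "end", "ok": ok}) + "\n")
    return wrapper

@audited
def lookup_account(account_id: str) -> dict:
    return {"account": account_id, "status": "active"}

print(lookup_account("u-42", account_id="ACME"))
```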

Attendees will leave with an architecture and set of patterns that have been stress-tested at real enterprise scale — not a proof of concept.

Multi-Agent Systems · Enterprise AI · LLM Orchestration · Production AI · AI Architecture

Topics

Async Agents ↗

Most AI agents are request-response: the user asks something, the model answers, done. Async agents are different — they are autonomous systems that run for minutes or hours, executing multi-step workflows without a human in the loop at each step. A deep-research agent that pulls and synthesises dozens of sources. A browser agent that navigates a complex process end-to-end. A nightly ETL pipeline driven by language model calls. These are genuinely useful, but they fail in new ways: an error at step 3 compounds through steps 4, 5, and 6 before anyone notices. Token costs spiral as the model reasons through increasingly confused context. Retry logic that works for a 2-second call becomes a runaway expense over 40 minutes. Building these systems well requires thinking about failure modes, cost bounds, and recovery strategies that short-lived agents never need.
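
One common mitigation for the cost spiral, sketched below: a hard ceiling on cumulative token spend, checked on every model call, so a confused run stops at the budget line rather than on the invoice. Names and numbers are illustrative:

```python
class BudgetExceeded(Exception):
    pass

class CostGuard:
    """Hard ceiling on cumulative token spend for a single agent run,
    checked on every model call, so a confused 40-minute run stops at
    the budget line instead of showing up on the invoice."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.spent = 0

    def charge(self, prompt_tokens: int, completion_tokens: int) -> None:
        self.spent += prompt_tokens + completion_tokens
        if self.spent > self.max_tokens:
            raise BudgetExceeded(
                f"spent {self.spent:,} tokens against a budget of {self.max_tokens:,}"
            )

# Usage inside the agent loop, with token counts from the API response:
guard = CostGuard(max_tokens=500_000)
guard.charge(prompt_tokens=12_000, completion_tokens=800)  # fine
# guard.charge(600_000, 0)                                 # raises BudgetExceeded
```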

Edge AI ↗

Edge AI means running inference on or near the hardware where data is generated — an industrial camera, an autonomous vehicle, a drone, a door security system — rather than sending data to a cloud server. The appeal is real: lower latency (decisions in milliseconds, not seconds), reduced bandwidth costs, better privacy (raw video never leaves the device), and the ability to operate offline. The reality is harder. Edge hardware has strict power, memory, and compute constraints. Models trained in a data centre on clean, well-lit data will encounter motion blur, weather, sensor drift, and lighting conditions in the field that were never in the training distribution. Deployment means integration with embedded systems, firmware update cycles, and operational monitoring in environments where rebooting is not straightforward. The model is often the smallest part of the problem.
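
For a taste of the workflow, a common first step is exporting to a portable runtime and measuring latency the way it matters on-device: median over repeated runs, after a warm-up. A sketch with PyTorch and ONNX Runtime; the model choice and run counts are illustrative:

```python
import statistics
import time

import numpy as np
import onnxruntime as ort
import torch
import torchvision

# Export a small classifier to ONNX, a common first step toward edge
# runtimes (ONNX Runtime, TensorRT, vendor toolchains)
model = torchvision.models.mobilenet_v3_small(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "mobilenet.onnx", input_names=["input"])

sess = ort.InferenceSession("mobilenet.onnx")
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

for _ in range(10):                      # warm-up runs, not timed
    sess.run(None, {"input": x})

times = []
for _ in range(100):
    t0 = time.perf_counter()
    sess.run(None, {"input": x})
    times.append(time.perf_counter() - t0)

print(f"median latency: {1000 * statistics.median(times):.1f} ms")
```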

Label Noise and Dataset Quality ↗

Every machine learning benchmark relies on a test set assumed to be correctly labelled. In practice, all large annotated datasets contain errors — objects missed, bounding boxes wrong, classes confused by tired annotators. When the evaluation set has label noise, model comparisons become unreliable. A model that genuinely improves might score lower than a model that overfits to annotation artifacts. Researchers have known this for years; fixing it is a different matter. Systematic auditing requires tooling to surface likely errors, a workflow to inspect and correct them, and enough confidence in the process to actually change labels in a set that teams have built on top of. The problem is not theoretical: studies have found error rates of 3–10% in widely used benchmark datasets.
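
The systematic effect is easy to quantify in the symmetric binary case: with a label-error rate ε, a model's true accuracy a is observed as a(1 − ε) + (1 − a)ε, so the measured gap between two models compresses by a factor of (1 − 2ε). A short script makes the compression visible; the accuracies here are illustrative:

```python
def observed(acc_true: float, noise: float) -> float:
    """Expected measured accuracy on a binary test set with symmetric
    label noise: correct predictions lose credit on flipped labels,
    wrong ones occasionally gain it."""
    return acc_true * (1 - noise) + (1 - acc_true) * noise

for noise in (0.00, 0.03, 0.05, 0.10):
    a, b = observed(0.85, noise), observed(0.84, noise)
    print(f"noise={noise:.2f}  A reads {a:.3f}  B reads {b:.3f}  gap {a - b:.4f}")
```

On a finite test set that compressed gap then has to beat sampling variance as well; at 2,000 items the standard error of the gap is roughly the same order as the gap itself.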

Multi-Agent Systems ↗

A multi-agent system is an architecture where multiple AI agents — each with a distinct role, set of tools, or knowledge domain — collaborate to complete tasks too complex for a single model to handle well. One agent plans, another searches, another writes, another reviews. This division of labour can dramatically improve output quality and reliability on complex workflows. The engineering trade-offs are significant: inter-agent communication overhead, orchestration logic that must handle agent failure gracefully, the risk of one agent's errors propagating into another's context, and the difficulty of debugging a system where the locus of any given decision is spread across multiple model calls and memory stores. For enterprise deployments, add: auditing, access control, cost attribution, and the need for individual users to actually trust and adopt the system.
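
One common shape for such a system, reduced to a skeleton: a planner decomposes the task, role-specific agents execute, and a reviewer gate stops errors at the hand-off instead of letting them propagate. The llm function below is a stub for any chat-completion client; the roles and prompts are illustrative:

```python
def llm(role: str, prompt: str) -> str:
    # Stand-in for any chat-completion call, routed per role
    raise NotImplementedError("plug in your model client here")

def run(task: str, max_revisions: int = 2) -> str:
    plan = llm("planner", f"Break this task into steps:\n{task}")
    draft = ""
    for step in plan.splitlines():
        facts = llm("researcher", f"Gather facts for: {step}")
        draft = llm("writer", f"Extend the draft.\nDraft: {draft}\nFacts: {facts}")
    for _ in range(max_revisions):
        verdict = llm("reviewer", f"Reply APPROVE or list problems:\n{draft}")
        if verdict.strip().startswith("APPROVE"):
            return draft
        draft = llm("writer", f"Fix these problems:\n{verdict}\nDraft:\n{draft}")
    raise RuntimeError("reviewer never approved; escalate to a human")
```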

Event Details

Date: Thursday, 2 April 2026

Time: 18:00 – 20:00 CEST (09:00 – 11:00 Pacific)

Location: Online — Zoom (link visible after registration)

Format: Network event — 48 groups, 450+ attendees

Organiser: Belgium AI ML & CV Meetup / Voxel51

Event type: Meetup

Target audience: Practitioners in AI, ML, computer vision, data engineering, and software development working with or building toward production AI systems.

Attend

Price: Free
Format: Online — Zoom
Registration: Required — Zoom link sent on signup
↗  Register on Meetup

Organised by

Belgium AI ML & CV Meetup

One of Belgium’s most active applied AI communities and part of a global network of 48 AI, ML, and computer vision meetup groups. The Belgium chapter covers machine learning, computer vision, and practical AI — talks that go beyond slides into the engineering reality. Hosted by Jimmy Guerrero.

↗ meetup.com/belgium-ai-machine-learning

Voxel51

The company behind FiftyOne, the open-source toolkit for building high-quality datasets and evaluating computer vision and AI models (10,500+ GitHub stars). Voxel51 runs this global meetup network as part of its developer community programme. Their tooling sits at the intersection of data quality, model evaluation, and visual AI.

↗ voxel51.com
