People & practice

The team behind explainable reasoning systems for institutions.

AdminLab.ai brings together public-administration practice, legal reasoning and AI engineering to deliver explainable decision flows for HR, policy and compliance in institutional settings.

Explainability-first · Public administration · Structured reasoning · Institutional trust

Founding story

AdminLab.ai was founded to support traceable and fair administrative decision-making in practice.

AdminLab.ai was initiated and funded by Mike, who combined years of experience in public-sector operations and transformation projects with a simple observation: complex administrative rules are often applied in opaque, inconsistent ways.

The goal was not to build another black-box model, but to develop a structured reasoning engine that institutions can inspect, audit and improve. The system is built at the intersection of administrative science, legal reasoning and structured AI engineering.

Founding team

People shaping the core framework.

The founding team blends administrative practice, institutional analysis and systems engineering. Together, they design reasoning models that can live comfortably in real institutions: transparent, repeatable and open to scrutiny.

Mike
Founder & Funder

Leads the strategic direction and funding of AdminLab.ai. Brings experience from initiatives at the intersection of public-sector transformation, compliance and institutional technology adoption.

Focused on aligning product decisions with institutional realities: budgets, governance, and the operational constraints under which public administration teams actually work.

Yvonne
Administrative Science & Policy

Works at the interface of administrative science, procedural fairness and policy design. Contributed work nominated for the European Ombudsman Award for Good Administration 2025/2026.

Responsible for framing models in line with administrative principles, ensuring each decision branch can be justified from a public-law and good-administration perspective.

AdminLab.ai Guru
AI Engineering & Reasoning Models

Designs the structured reasoning architectures and implements logic flows built for audit, reproducibility and institutional oversight. Focuses on making models deterministic, inspectable and runnable at the edge.

Responsible for translating administrative requirements into robust, testable decision trees and workflows, as well as for logging, validation and reproducibility of outputs.

Analysis & collaboration

Extended contributors and collaborators.

Beyond the core team, AdminLab.ai works with legal analysts, policy designers and technical reviewers who challenge assumptions, test resilience and strengthen deployment readiness and overall system robustness.

Legal & Compliance Contributors
Case & policy analysis

Support the mapping of legal provisions and internal rules into structured, machine-readable decision criteria, paying particular attention to edge cases and exception handling.

Research Partners
Methodology & evaluation

Academic and practice-based collaborators who stress-test reasoning chains, contribute to evaluation frameworks and help align models with emerging standards in explainable AI.

Operational & Product Support
Delivery & iteration

Coordinates implementation with institutional partners, manages feedback loops and ensures that improvements move from prototype to operational use in a controlled manner.

Methodology

Explainability-first reasoning models.

AdminLab.ai follows a structured approach that treats explainability, reproducibility and fairness as primary design constraints, not afterthoughts. Models are built so that HR, policy and compliance teams can audit, challenge and improve them.

01 · Structured decision logic

From narrative rules to explicit steps

We translate policies, legal texts and internal guidelines into explicit decision steps with clear conditions, thresholds and references. Each branch is labelled and documented.
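As a minimal sketch, a narrative rule might be captured as a labelled, documented step along the following lines. The schema, field names and example rule are illustrative only, not AdminLab.ai's actual data model.

```python
# Illustrative sketch only: one way a narrative rule could become an explicit,
# labelled decision step. Field names and the example rule are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionStep:
    step_id: str      # stable label for the branch
    description: str  # human-readable restatement of the rule
    field: str        # which input attribute the condition reads
    minimum: float    # threshold the input is compared against
    policy_ref: str   # reference back to the originating policy or legal text

# "Staff with at least three full years of service qualify" becomes:
service_rule = DecisionStep(
    step_id="allowance.service_years",
    description="At least 3 full years of service required",
    field="service_years",
    minimum=3,
    policy_ref="Internal policy, section 2.1 (illustrative reference)",
)
```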

02 · Deterministic behaviour

Same inputs, same outputs

Reasoning flows are designed to be deterministic. The same inputs produce the same output, enabling consistent application of rules and reliable re-analysis of past decisions.
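A hedged sketch of what deterministic evaluation can look like in practice: the steps are plain data, the evaluation is a pure function of the inputs, and re-running a past case reproduces the original outcome. All names and thresholds below are illustrative.

```python
# Illustrative sketch only: deterministic evaluation of explicit decision steps.
# No randomness and no hidden state, so the same inputs always yield the same output.
STEPS = [
    {"id": "allowance.service_years", "field": "service_years", "minimum": 3},
    {"id": "allowance.distance_km", "field": "distance_km", "minimum": 50},
]

def decide(case: dict) -> bool:
    """A case qualifies only if every step's condition is met."""
    return all(case[step["field"]] >= step["minimum"] for step in STEPS)

case = {"service_years": 4, "distance_km": 80}
assert decide(case) == decide(case)  # same inputs, same output, every time
```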

03 · Full logging

Decisions come with a trace

For each decision, we log which steps were evaluated, which conditions were met, and which policy references were applied. This creates an audit trail for oversight bodies.
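For illustration, the same kind of evaluation can emit one trace entry per step, recording what was checked, against which threshold, and under which policy reference. The structure below is a hypothetical example of such an audit trail, not a production log format.

```python
# Illustrative sketch only: per-step trace entries forming an audit trail.
import json

def decide_with_trace(case: dict, steps: list[dict]) -> tuple[bool, list[dict]]:
    trace, outcome = [], True
    for step in steps:
        met = case[step["field"]] >= step["minimum"]
        trace.append({
            "step": step["id"],
            "input": case[step["field"]],
            "threshold": step["minimum"],
            "condition_met": met,
            "policy_ref": step["policy_ref"],
        })
        outcome = outcome and met
    return outcome, trace

steps = [{"id": "allowance.service_years", "field": "service_years",
          "minimum": 3, "policy_ref": "Internal policy, section 2.1 (illustrative)"}]
outcome, trace = decide_with_trace({"service_years": 4}, steps)
print(json.dumps({"outcome": outcome, "trace": trace}, indent=2))
```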

04 · Institutional alignment

Co-designed with practitioners

Models are iterated with HR, compliance and policy teams to ensure they reflect real workflows, local practices and the institutional understanding of fairness and proportionality.

Background & milestones

How the work has evolved.

AdminLab.ai builds on prior work in public administration and institutional innovation, gradually moving from early concepts to operational reasoning systems.

Pre-2024
Administrative practice & research

Work in and around public administration, studying how rules, procedures and human judgement interact in complex institutional environments. Foundation in administrative science and legal reasoning.

2024
Founding of AdminLab.ai

Mike initiates and funds AdminLab.ai to explore structured, explainable reasoning models that can support HR, policy and compliance functions. Initial framework development begins.

2025
Recognition & early models

Related public-administration work is nominated for the European Ombudsman Award for Good Administration 2025/2026, while early reasoning models are tested with practitioners.

2026 →
From prototypes to institutional use

Focus shifts towards operational deployments: governance frameworks, suitability assessments and integration with existing institutional processes and infrastructure.

Governance & oversight

Independent challenge is a feature, not an obstacle.

AdminLab.ai is designed to operate under internal and external institutional scrutiny. We expect our reasoning chains to be reviewed by internal audit, legal services, staff representatives and, where appropriate, external oversight bodies.

  • Models are built so that every assumption can be surfaced and questioned.
  • Logs make it possible to reconstruct how a decision was reached.
  • Documentation ties rules back to their originating policies and legal texts.
  • Institutional partners remain in control of final decision-making.
  • Regular external reviews and stress-tests are built into our development cycle.

Interested in deploying this within your institution?

We welcome conversations with HR, policy, legal and compliance teams exploring explainable models for sensitive administrative decisions. Let's discuss your institutional context and requirements.

Start an institutional conversation →