AffectLog’s research programme sits at the intersection of data privacy, compliance, policy and transparency. Each domain hub acts as a home for ongoing projects, datasets (where publicly shareable), methods and recent publications. This document outlines each hub in detail, with the shared aim of fostering collaboration and attracting funding.
Privacy‑Preserving Intelligence
Short primer
Modern machine‑learning models thrive on data, yet organisations that work with minors, patients or citizens cannot legally centralise or re‑use sensitive records. The privacy‑preserving intelligence hub investigates approaches that let algorithms learn from protected information without exposing personally identifiable data. This work draws on differential privacy (DP), homomorphic encryption (HE), secure multiparty computation (SMPC) and trusted execution environments (TEEs). By embedding these techniques into the full analytics pipeline, we enable insights from healthcare, education and public‑sector datasets while maintaining confidentiality.
Current projects
1. Federated learning for children’s data. Inspired by UNICEF’s work on federated learning in education, we develop algorithms that run on school servers and return only aggregated model updates. UNICEF notes that federated approaches send the algorithm to each site rather than shipping data, and only aggregated updates are shared back to the central coordinator[1]. Governance‑by‑design mechanisms ensure that local rules and permissions are embedded from the outset[2].
2. Privacy‑enhancing toolkits for low‑resource environments. We build lightweight libraries for TEEs and SMPC that can run on commodity hardware. Edgeless Systems explains that fully homomorphic encryption (FHE) allows computation directly on encrypted data but remains computationally heavy; in contrast, confidential computing with TEEs encrypts data in memory and uses remote attestation, making it more practical for real deployments[3]. Our toolkits prioritise TEEs for everyday use while leaving hooks for FHE when security is paramount.
3. Differentially private mobility analytics. NIST explains that differential privacy adds random noise to query answers so that the output is almost indistinguishable regardless of whether an individual’s record is included[4]. We apply DP to mobility and location datasets, balancing utility and privacy.
4. Secure multiparty environmental modelling. GeeksforGeeks defines SMPC as a cryptographic technique where multiple parties compute a function without revealing their private inputs; data is encrypted and distributed across parties without a trusted third party[5][6]. We adapt these protocols to climate data, allowing regional agencies to jointly model pollution or weather patterns.
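The pattern these projects share, dispatching training to each site and aggregating only model updates, can be sketched in a few lines. The toy linear model and per-school datasets below are illustrative placeholders, not part of our released toolkits:

```python
# Minimal federated averaging sketch: each site trains on its own data
# and returns only a model update; the coordinator averages the updates.

def local_update(weights, site_data, lr=0.1):
    """One least-squares gradient step on a site's private (x, y) pairs."""
    grad = [0.0] * len(weights)
    for x, y in site_data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(site_data)
    return [w - lr * g for w, g in zip(weights, grad)]

def federated_round(weights, sites):
    """One round: every site trains locally; only updates leave the site."""
    updates = [local_update(weights, data) for data in sites]
    return [sum(ws) / len(updates) for ws in zip(*updates)]

# Two schools, each holding private samples of the relationship y = 2x.
sites = [
    [([1.0], 2.0), ([2.0], 4.0)],
    [([3.0], 6.0)],
]
weights = [0.0]
for _ in range(50):
    weights = federated_round(weights, sites)
# weights[0] converges to 2.0 without any raw sample leaving its site
```

In a real deployment the aggregation step would additionally be protected by secure aggregation or differential privacy, since even model updates can leak information about local data.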
Datasets
Most projects rely on sensitive data that cannot be shared directly. However, we release synthetic datasets that mirror statistical properties of the originals and pre‑trained models for benchmarking. For example, our mobility project publishes DP‑sanitised location traces and evaluation scripts. We also provide code for generating synthetic education records with built‑in privacy controls.
Methods
· Differential Privacy (DP): Adds calibrated noise to outputs so results are virtually unchanged whether an individual is in the dataset or not[7].
· Homomorphic Encryption (HE): Enables computations on encrypted data; practical when combined with efficient schemes and batching[3].
· Secure Multiparty Computation (SMPC): Allows multiple parties to jointly compute a function without revealing inputs[5][6].
· Trusted Execution Environments (TEEs): Hardware‑based enclaves that keep data encrypted in memory and provide remote attestation; more practical than pure FHE for many use cases[8].
· Zero‑Knowledge Proofs (ZKPs): Permit one party to prove a statement (e.g., a model complies with a policy) without revealing underlying data[9]. These proofs can verify privacy budgets or compliance credentials.
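As a concrete illustration of the first method, the Laplace mechanism adds noise scaled to sensitivity/ε. The sketch below applies it to a counting query, which has sensitivity 1; the age records are hypothetical:

```python
import math
import random

def dp_count(records, predicate, epsilon, rng=random):
    """Differentially private count via the Laplace mechanism: a counting
    query has sensitivity 1, so noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    # Inverse-CDF sample from Laplace(0, 1/epsilon).
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical protected records: ages of individuals in a school dataset.
ages = [12, 15, 34, 41, 29, 17]
noisy_minors = dp_count(ages, lambda a: a < 18, epsilon=1.0)
# noisy_minors is close to the true count (3), but any one individual's
# presence shifts the output distribution by at most a factor of e^epsilon
```

Smaller ε gives stronger privacy and noisier answers; repeated queries consume the privacy budget under composition, which is why ZKPs for budget verification (above) matter.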
Abstracts of recent publications/posts
1. “Federated learning for children’s data” – UNICEF Innocenti. This article describes how algorithms are dispatched to school servers and learn from local data while sharing only aggregated updates[1]. It emphasises governance‑by‑design: embedding clear policies, transparency and accountability into the federated system[2]. The paper also highlights privacy‑enhancing technologies such as DP and SMPC as essential for protecting children’s rights[10].
2. “Introduction to Privacy‑Preserving Computing” – Edgeless Systems. This blog outlines privacy technologies including FHE, confidential computing, SMPC and DP. It notes that FHE allows computation on encrypted data but is computationally heavy, whereas TEEs keep data encrypted in memory and enable remote attestation[3]. SMPC permits joint computation without revealing inputs, and DP adds noise to protect individuals[8].
3. “Secure Multiparty Computation – Concept and Applications” – GeeksforGeeks. The article explains that SMPC distributes encrypted data across parties, enabling them to compute a function without disclosing individual inputs and without relying on a trusted third party[5][6].
4. “Differential Privacy for Data Analysis” – NIST. This blog post discusses differential privacy as a method that adds random noise to query results; the resulting output remains approximately the same whether an individual’s data is included or not[4]. The article emphasises compositionality and the trade‑off between accuracy and privacy.
5. “Homomorphic Encryption Enabling Privacy‑Preserving Data Insight” – CloudThat. The post introduces homomorphic encryption as a technique that allows computation on encrypted data, enabling analysis without decryption. It argues that HE is poised to transform fields from finance to healthcare by keeping data encrypted throughout the analytic pipeline.
Call to action
AffectLog invites researchers and practitioners to collaborate on privacy‑preserving intelligence. We welcome partnerships on grant proposals, contributions to our open‑source toolkits and participation in our secure sandbox for testing algorithms on synthetic or de‑identified datasets. Contact us to explore joint publications or to apply for our community grants.
Federated, Compliance‑Aware AI
Short primer
AI models that perform credit scoring, healthcare triage or public decision‑making are increasingly regulated. The federated, compliance‑aware AI hub explores techniques for training and validating models under strict legal regimes such as the EU AI Act, GDPR and ISO/IEC 42001. We combine federated learning (keeping data on‑premises) with automated compliance checks and auditing so that high‑risk AI systems remain accurate and lawful.
Current projects
1. Cross‑border financial model validation. Banks across multiple EU countries jointly validate credit scoring and fraud models using federated learning. Only encrypted model updates are shared; no raw transaction data leaves a bank’s perimeter. This design supports GDPR’s data‑minimisation principle, keeps records within each institution’s jurisdiction, and aligns with the EU AI Act’s high‑risk obligations for credit scoring.
2. Healthcare federated analytics. Hospitals in France and Germany train a diagnostic model using local patient data. The EU AI Act classifies medical AI as high‑risk and requires robust risk management, data governance, technical documentation and human oversight[11]. Federated learning avoids centralising sensitive records, while an automated compliance layer tracks evidence for auditors.
3. Education risk scoring. Schools experiment with predictive models for early drop‑out detection. ISO/IEC 42001, the international AI management system standard, specifies requirements for establishing, implementing and maintaining an AI management system; it addresses ethical considerations, transparency and risk management[12]. Our tools integrate these controls into the training pipeline.
Datasets
The hub uses partially synthetic financial and healthcare datasets derived from consenting partners. We also publish anonymised evaluation sets and challenge tasks for researchers to test federated algorithms. Each dataset includes metadata outlining applicable legal constraints.
Methods
· Federated learning orchestration: A central server distributes a global model to each site; local nodes train on their own data and send encrypted updates back for aggregation. Milvus explains that secure aggregation protocols (e.g., homomorphic encryption or SMPC) prevent tracing contributions and ensure that no raw data leaves client devices[13].
· Compliance mapping: Our RegLogic engine codifies hundreds of clauses from the EU AI Act, GDPR and ISO/IEC 42001. It automatically checks that models have risk management systems, data governance strategies and technical documentation[11][12].
· Automated evidence packs: Pre‑built templates generate Annex IV technical documentation for high‑risk systems and map model metrics to legal requirements.
· Cross‑domain privacy controls: The same platform supports DP, HE, SMPC and TEEs to align with local data‑protection laws.
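The secure-aggregation idea referenced above can be illustrated with pairwise additive masking: each pair of clients agrees on a random mask that one adds and the other subtracts, so all masks cancel in the server-side sum. This is a simplified sketch; production protocols, such as those cited by Milvus, derive masks cryptographically and tolerate client dropouts:

```python
import random

def masked_updates(updates, rng=random):
    """Pairwise additive masking: for each pair of clients (i, j), client i
    adds a shared random mask that client j subtracts, so every mask
    cancels in the aggregate while individual masked updates look random."""
    masked = list(updates)
    n = len(masked)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.uniform(-1000.0, 1000.0)
            masked[i] += m
            masked[j] -= m
    return masked

updates = [0.2, -0.5, 0.9]              # hypothetical per-client model deltas
server_sum = sum(masked_updates(updates))
# server_sum equals sum(updates), yet no single masked value reveals
# the client's true update
```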
Abstracts of recent publications/posts
1. “Artificial Intelligence Act – High‑Risk Obligations” – ArtificialIntelligenceAct.eu. The summary explains that providers of high‑risk AI systems must implement a risk‑management system, ensure data governance and quality, create detailed technical documentation, maintain records, enable human oversight and ensure accuracy, robustness and cybersecurity[11].
2. “ISO/IEC 42001 AI Management Systems – Requirements” – ISO. ISO describes the first international standard for AI management systems, specifying requirements for establishing, implementing, maintaining and continually improving an AI management system. It covers ethical considerations, transparency and risk management[12].
3. “Understanding ISO 42001: The World’s First AI Management System Standard” – A‑LIGN. The article notes that ISO 42001 emphasises transparency, accountability, bias identification and mitigation, safety and privacy; it sets themes around leadership, planning, operation, performance evaluation and continual improvement[14].
4. “Federated Learning and Privacy Preserving AI” – Milvus. This piece explains that federated learning sends a central model to client devices, where local training occurs, and only encrypted updates are returned. Secure aggregation via SMPC or homomorphic encryption prevents tracing contributions, and differential privacy adds noise to further reduce privacy risk[13].
5. “AI and Risk Management Frameworks – Practical Guide” – CognitiveView. The guide summarises NIST’s AI RMF, describing how organisations should map, measure and manage AI risks by using risk heatmaps, fairness and explainability metrics, bias audits and human oversight[15].
Call to action
We invite regulators, compliance teams and AI researchers to test our federated compliance sandbox. Organisations can run their models on private data while our platform automatically generates evidence packs and alerts to ensure readiness for EU AI Act, GDPR and ISO/IEC 42001 audits. Contact us to collaborate on grants or contribute to the RegLogic clause library.
Policy‑Linked Governance
Short primer
Even the most advanced AI models will fail to inspire trust without enforceable policies. The policy‑linked governance hub explores policy‑as‑code and self‑sovereign governance: embedding laws, ethical principles and contractual obligations directly into AI workflows. By representing policies as code and anchoring decisions to immutable audit logs, we ensure that AI systems can prove compliance without revealing sensitive details.
Current projects
1. OPA‑powered governance engine. We integrate Open Policy Agent (OPA) into MLOps pipelines. A Principled Evolution blog argues that policy‑as‑code allows organisations to define, manage and automatically enforce governance rules using code; OPA decouples policy logic from application code and enables centralised management, automated enforcement and enhanced transparency[16].
2. Self‑sovereign audit ledgers. We build append‑only ledgers using confidential computing. Microsoft Azure’s confidential ledger provides immutability and tamper‑proof data; it runs on secure enclaves and uses a blockchain‑based consensus to ensure data integrity[17][18]. Our ledger records policies, consent decisions and model actions and supports zero‑knowledge proofs so external auditors can verify compliance without accessing raw data.
3. Dynamic consent management. We develop policy smart contracts that enforce consent conditions automatically. For example, the ledger denies a new use of educational data if parental consent does not permit it. Changes in policies automatically trigger revocations.
4. Interoperability across jurisdictions. Our RegLogic DSL includes clauses from the EU AI Act, GDPR, ISO/IEC 42001 and national laws. This enables cross‑border projects to adopt a single policy engine; users can layer national rules on top of EU‑wide standards.
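To make the consent-enforcement idea concrete, here is a minimal Python stand-in for the kind of rule our engine would express in Rego and evaluate with OPA. The policy fields, purposes and identifiers are hypothetical:

```python
# Hypothetical consent policy; a production deployment would express this
# as a Rego rule evaluated by OPA rather than inline Python.
CONSENT_POLICY = {
    "allowed_purposes": {"attendance_analytics", "dropout_prediction"},
    "parental_consent_required": True,
}

def authorise(request, consent_record, policy=CONSENT_POLICY):
    """Deny any use of educational data outside consented purposes."""
    if request["purpose"] not in policy["allowed_purposes"]:
        return False
    if policy["parental_consent_required"] and not consent_record.get("parental_consent"):
        return False
    return request["purpose"] in consent_record.get("purposes", set())

consent = {"parental_consent": True, "purposes": {"dropout_prediction"}}
print(authorise({"purpose": "dropout_prediction"}, consent))  # True
print(authorise({"purpose": "marketing"}, consent))           # False
```

Because the policy is data, revoking a purpose from the consent record (or tightening the policy) immediately changes future authorisation decisions, which is the mechanism behind the automatic revocations described above.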
Datasets
The hub stores minimal data; most information exists as hashes or encrypted pointers. Publicly shareable resources include example policy files written in Rego and sample audit logs demonstrating zero‑knowledge proof verifications.
Methods
· Policy‑as‑code: Policies are represented in a machine‑readable language (e.g., Rego) and enforced automatically within CI/CD pipelines. OPA allows centralised management, automated enforcement and transparency[16].
· Immutable ledgers: Append‑only ledgers, often built on blockchain with secure enclaves, provide tamper‑proof, auditable records[17][18].
· Zero‑knowledge proofs: Provers can demonstrate compliance or data possession without revealing secrets[9].
· Dynamic policy layering: RegLogic compiles multiple regulatory clauses into executable policies. This supports hierarchical overrides and jurisdiction‑specific rules.
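The immutable-ledger method can be illustrated with a plain hash chain, in which each entry commits to its predecessor so any retroactive edit is detectable. This sketch shows the core idea only; services such as Azure confidential ledger add secure enclaves and consensus on top:

```python
import hashlib
import json

class AuditLedger:
    """Append-only ledger sketch: each entry commits to its predecessor
    through a SHA-256 hash chain, so a retroactive edit breaks verification."""

    def __init__(self):
        self.entries = []

    def append(self, record):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "hash": digest})

    def verify(self):
        prev_hash = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

ledger = AuditLedger()
ledger.append({"event": "consent_granted", "subject": "s-001"})
ledger.append({"event": "model_action", "policy": "gdpr-art-6"})
print(ledger.verify())  # True; editing any earlier record makes this False
```

Auditors only need the chain of hashes to detect tampering, which is why the hub can store encrypted pointers rather than raw data.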
Abstracts of recent publications/posts
1. “Policy‑as‑Code: Leveraging OPA for Trustworthy AI” – Principled Evolution. The article notes that AI governance requires robust, automated enforcement of ethical principles and regulations. Policy‑as‑code, implemented via Open Policy Agent (OPA), decouples policy logic from application code and enables centralised management, automation and transparency[16]. The authors introduce AICertify and Gopal libraries to translate regulations into executable rules and illustrate fairness and content‑safety scenarios[19].
2. “Azure Confidential Ledger: Immutability and Tamper‑Proof Records” – Microsoft Learn. The article explains that Azure’s confidential ledger provides immutable, tamper‑proof storage by keeping data in an append‑only ledger. It runs on secure enclaves and uses a blockchain‑based consensus for integrity[17][18]. This architecture is well‑suited for recording consent, policy enforcement and AI audit events.
3. “Ethos AI – Designing Risk Controls” – Ethos AI. A whitepaper notes that effective AI risk management requires a combination of design‑time and run‑time controls. Detective controls include drift detection, performance monitoring and anomaly detection; response controls include model rollbacks and human override procedures[20]. These controls can be encoded as policies and enforced automatically.
4. “AI Governance Frameworks: Opening the Black Box” – Precisely. The blog discusses the need for continuous transparency and explainability; organisations must adopt explainability tools that surface decision pathways and performance metrics in real time[21]. Continuous monitoring of training and validation datasets is essential for detecting “bias drift,” and governance frameworks must define processes for profiling datasets, applying fairness metrics and triggering remediation[22]. Automated risk scoring and compliance workflows accelerate governance while standardised templates and cross‑functional collaboration foster consistency[23].
5. “Zero‑Knowledge Proofs Keep Transactions Private” – Debut Infotech. The post introduces zero‑knowledge proofs, where a prover convinces a verifier that a statement is true without revealing any additional information[9]. ZKPs can verify regulatory compliance (e.g., confirming that a model meets fairness thresholds) without exposing underlying data[24].
Call to action
We encourage policy makers, regulators and open‑source developers to join us in co‑creating policy‑as‑code libraries and self‑sovereign governance tools. Organisations can deploy our ledger in their infrastructure and contribute back policy modules. Grants are available for research into new zero‑knowledge proof schemes and multi‑jurisdiction policy compilation.
Explainable, Adaptive Governance
Short primer
The behaviour of AI models evolves as data drifts, algorithms update and new contexts emerge. The explainable, adaptive governance hub focuses on continuous oversight and transparent decision‑making. We research explainability methods (SHAP, counterfactuals, causal graphs), fairness auditing, drift detection and dynamic response actions such as retraining or human review. The goal is to turn governance from a one‑time checklist into an ongoing, adaptive process.
Current projects
1. Real‑time drift and bias monitoring. We build dashboards that track model performance, data quality and fairness metrics. Precisely emphasises that continuous monitoring of training and validation datasets is essential for detecting “bias drift,” and governance frameworks must define processes for profiling datasets, applying fairness metrics and triggering remediation workflows[21].
2. Explainability toolchain. Our pipelines compute SHAP values, local surrogate explanations and counterfactual examples at both training and inference time. CognitiveView’s guide to NIST AI RMF recommends using explainability tools like SHAP, LIME and counterfactual explanations and automating compliance tracking through AI risk dashboards[25].
3. Adaptive retraining and rollback. Based on the Ethos AI paper, we implement detective controls (drift detection, performance monitoring) and response controls (model rollbacks, human override procedures) to mitigate risk when models deviate from acceptable behaviour[20].
4. Cross‑stakeholder engagement. We host workshops to educate data scientists, compliance officers and ethicists on how to interpret explanations. The Precisely blog underscores the importance of cross‑functional collaboration and clear workflows to avoid accountability gaps[26].
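To illustrate the counterfactual component of the explainability toolchain, the sketch below greedily searches for the single-feature nudges that flip a model's decision. The logistic credit-scoring model, features and step sizes are hypothetical:

```python
import math

def find_counterfactual(predict, x, deltas, max_steps=50):
    """Greedy counterfactual search: repeatedly apply the single-feature
    nudge that most improves the score until the decision flips."""
    x = list(x)
    denied = predict(x) < 0.5
    for _ in range(max_steps):
        if (predict(x) < 0.5) != denied:
            return x  # decision flipped: x is a counterfactual example
        candidates = []
        for i in range(len(x)):
            for d in (deltas[i], -deltas[i]):
                trial = x[:i] + [x[i] + d] + x[i + 1:]
                gain = predict(trial) - predict(x)
                candidates.append((gain if denied else -gain, trial))
        _, x = max(candidates)
    return None

# Hypothetical logistic credit model over (income, debt), both in k EUR.
score = lambda x: 1 / (1 + math.exp(-(x[0] - 0.5 * x[1] - 40) / 5))
cf = find_counterfactual(score, [35.0, 10.0], deltas=[5.0, 5.0])
print(cf)  # [45.0, 10.0]: raising income by 10k would flip the decision
```

The returned point is exactly the kind of explanation stakeholders can act on: it states the minimal change that would alter the outcome, without exposing model internals.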
Datasets
While many experiments rely on internal datasets, we publish benchmark datasets with known drift patterns and bias scenarios. Each comes with baseline models and evaluation scripts to test explainability and adaptive governance tools.
Methods
· Explainability: SHAP, LIME, Grad‑CAM and other techniques provide feature attributions and visualisations. Counterfactual explanations reveal minimal changes needed to alter a prediction, and causal graphs help distinguish correlation from causation.
· Fairness metrics: We evaluate demographic parity, equalised odds and equal opportunity, as well as error rate differences across groups. NIST AI RMF recommends measuring fairness, accuracy, explainability and robustness[27].
· Drift detection: Statistical tests monitor changes in data distributions and model outputs. Precisely notes that continuous monitoring is essential to detect bias drift and changing performance[21].
· Automated risk dashboards: Risk heatmaps and dashboards visualise high‑risk models and surface bias and transparency issues; these tools automate compliance tracking[28][25].
· Human‑in‑the‑loop: For high‑impact decisions, human reviewers can override or approve model outputs. NIST AI RMF emphasises establishing human oversight mechanisms and incident response plans[29].
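As a minimal example of the drift-detection method, the sketch below compares binned feature distributions using total variation distance, a lightweight stand-in for the statistical tests mentioned above; the synthetic feature values are illustrative:

```python
import random

def tv_distance(sample_a, sample_b, bins=10, lo=0.0, hi=1.0):
    """Total variation distance between two binned empirical distributions;
    values near 0 mean no drift, values near 1 mean disjoint support."""
    def hist(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / (hi - lo) * bins), bins - 1)
            counts[idx] += 1
        return [c / len(sample) for c in counts]
    pa, pb = hist(sample_a), hist(sample_b)
    return 0.5 * sum(abs(a - b) for a, b in zip(pa, pb))

rng = random.Random(42)
baseline = [rng.random() for _ in range(1000)]                  # training-time feature
drifted = [min(rng.random() + 0.3, 1.0) for _ in range(1000)]   # shifted at serving time
print(tv_distance(baseline, drifted) > 0.2)  # True: raise a drift alert
```

In the dashboards, a threshold like the 0.2 used here would be tuned per feature, and an alert would trigger the retraining or human-review workflows described in the adaptive retraining project.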
Abstracts of recent publications/posts
1. “AI Governance Frameworks: Cutting Through the Chaos” – Precisely. The article highlights that AI governance extends beyond traditional data governance, covering design, training, validation, deployment and continuous monitoring[30]. Continuous transparency and explainability are critical; organisations must adopt explainability tools that surface contributions and decision pathways in real time[21]. Continuous monitoring of training and validation datasets is essential for detecting bias drift and triggering remediation workflows[22].
2. “NIST AI Risk Management Framework – Practical Guide” – CognitiveView. The guide breaks down the NIST AI RMF’s four functions: Govern, Map, Measure and Manage. It recommends using AI risk heatmaps, fairness, accuracy and explainability metrics, bias audits, adversarial testing and human‑in‑the‑loop oversight[29][31]. Automated dashboards should track drift and compliance[25].
3. “Designing Risk Controls for Adaptive AI” – Ethos AI. The paper discusses the need for design‑time preventive controls and run‑time detective and response controls; examples include drift detection, performance monitoring and model rollbacks or human override procedures[20].
4. “Policy‑as‑Code in Action: AI Governance Scenarios” – Principled Evolution. This section of the article illustrates how fairness thresholds and content‑safety policies can be enforced automatically via OPA. For example, a Gopal policy might require a disparate impact ratio above a threshold, and OPA flags non‑compliance[32].
5. “AI Governance and Model Management Practices” – forthcoming practitioner post. This upcoming post will share lessons learned from deploying our drift and explainability dashboards in production, including examples of retrieving SHAP summaries, applying counterfactual explanations and retraining a model when fairness metrics drop below thresholds.
Call to action
We welcome researchers, developers and regulators to join our explainable governance sandbox. Contributors can test new explainability methods on our benchmark datasets, propose fairness metrics or experiment with drift detectors. We also offer grants for projects that integrate human‑in‑the‑loop oversight into adaptive AI systems. Contact us to collaborate or schedule a demo.
[1] [2] [10] Federated learning for children’s data | Innocenti Global Office of Research and Foresight
[3] [8] Confidential computing and multi-party computation (MPC)
[4] Differential Privacy for Privacy-Preserving Data Analysis: An Introduction to our Blog Series | NIST
[5] [6] What is Secure Multiparty Computation? – GeeksforGeeks
[7] Differential Privacy | Privacy-Enhancing Technologies PETs
[9] [24] How Zero-Knowledge Proofs Secure Blockchain Privacy
[11] High-level summary of the AI Act | EU Artificial Intelligence Act
[12] ISO/IEC 42001:2023 – AI management systems
[13] How does federated learning ensure data remains on the client device?
[14] Understanding ISO 42001
[15] [25] [27] [28] [29] [31] Understanding NIST’s AI Risk Management Framework: A Practical Guide
[16] [19] [32] Principled AI Governance with Policy-as-Code: Leveraging OPA for Trustworthy AI
[17] [18] Microsoft Azure confidential ledger overview | Microsoft Learn
[20] Choosing the right controls for AI risks
[21] [22] [23] [26] [30] AI Governance Frameworks: Cutting Through the Chaos