Model Fusion for Federated Learning across Heterogeneous Systems

Research Funding:

My academic collaborator Patrick Jaillet (MIT) and I were awarded (as co-PIs) an exploratory research grant (USD 150K) by the MIT-IBM Watson AI Lab (2020-2021) to investigate new challenges in On-Device Personalization with Meta Learning.

This research is also supported under the AI Singapore Research Programme: Toward Trustable Model-centric Sharing for Collaborative Machine Learning (S$8,401,002.40, Apr 2021 - Mar 2025), on which I am an affiliated collaborator working on the model fusion package.

Media Article

Idea and Motivation:

I am interested in the problem of meta or personalized federated learning in practical domains where production systems hosting analytic services often need to generate warm-start solution models for emerging tasks with limited data. One potential approach to this warm-start challenge is to adopt meta learning to produce a base model that can be adapted to unseen tasks with minimal fine-tuning. This, however, requires the training processes of the previous solution models of existing tasks to be synchronized, which is not possible if those models were pre-trained separately on private data owned by different parties and cannot be synchronously re-trained.

To accommodate such scenarios, my research aims to develop a new personalized learning framework that synthesizes customized models for unseen tasks via fusion of independently pre-trained models of related tasks. One potential direction is to train local models separately and treat them as observations drawn from a stochastic process that defines the behavior of a latent global model. This results in formal meta-Bayesian learning frameworks that can infer (or synthesize) a global model given observations of those local models. I coined the term model fusion for this approach.
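As a toy illustration of this idea (not the algorithm of any paper below), the sketch here treats each party's locally fitted parameter estimate as a noisy Gaussian observation of a latent global parameter, and fuses them by precision-weighted averaging; all data sizes, noise levels, and helper names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent global model parameter (unknown to every party).
theta_global = np.array([2.0, -1.0])

def fit_local_model(n_samples):
    """Each party fits a linear model on its own private data.

    The local estimate is treated as a noisy observation of the
    latent global parameter; its covariance quantifies the noise.
    """
    X = rng.normal(size=(n_samples, 2))
    y = X @ theta_global + rng.normal(scale=0.5, size=n_samples)
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    cov = 0.25 * np.linalg.inv(X.T @ X)  # OLS covariance, noise var 0.5**2
    return theta_hat, cov

# Three parties with differently sized private datasets.
local_models = [fit_local_model(n) for n in (30, 50, 80)]

# Gaussian fusion: under a flat prior, the posterior mean of the global
# parameter is the precision-weighted combination of local estimates --
# no raw data ever leaves a party, only (theta_hat, cov) summaries.
precision = sum(np.linalg.inv(cov) for _, cov in local_models)
fused_mean = np.linalg.solve(
    precision,
    sum(np.linalg.inv(cov) @ t for t, cov in local_models),
)

print(fused_mean)  # recovers theta_global up to estimation noise
```

The fused estimate pools statistical strength across parties while each local model remains trained asynchronously on its own data, which is the essence of the model-fusion viewpoint described above.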

Most recent work:

Effective Knowledge Representation and Utilization for Sustainable Collaborative Learning across Heterogeneous Systems (AI Magazine, 2024) Paper

Probabilistic Federated Prompt-Tuning (NeurIPS-24) Paper

Few-Shot Learning via Repurposing Ensemble of Black-Box Models (AAAI-24) Paper

Federated Learning of Models Pre-Trained on Different Features with Consensus Graphs (UAI-23) Paper

Personalized Federated Domain Adaptation for Item-to-Item Recommendation (UAI-23) Paper

Bayesian Federated Estimation of Causal Effects from Observational Data (UAI-22) Paper

Adaptive Multi-Source Causal Inference from Observational Data (CIKM-22) Paper

Model Fusion for Personalized Learning (ICML-21) Paper

Learning Task-Agnostic Embedding of Multiple Black-Box Experts for Multi-Task Model Fusion (ICML-20) Paper

Preliminary work in this emerging area was published at AAAI-19 and ICML-19:

Collective Online Learning via Decentralized Gaussian Processes in Massive Multi-Agent Systems (AAAI-19) Paper

Collective Model Fusion for Multiple Black-Box Experts (ICML-19) Paper

Related work in the context of neural networks (in collaboration with my MIT-IBM colleagues) appeared at ICML-19 and NeurIPS-19:

Bayesian Nonparametric Federated Learning of Neural Networks (ICML-19) Paper

Statistical Model Aggregation via Parameter Matching (NeurIPS-19) Paper