The team is researching the mathematical vulnerabilities exposed when localized model updates are aggregated, and is exploring cryptographic and algorithmic safeguards that protect the global model's internal architecture from exploitation during distributed training.
- Secure Gradient Aggregation: Researching cryptographic protocols that obfuscate local model updates and mathematically prevent gradient leakage (see the pairwise-masking sketch after this list).
- Defeating Inference Tactics: Investigating methods that prevent model inversion attacks from reconstructing proprietary local training distributions (see the gradient-sanitization sketch after this list).
- Byzantine-Robust Architectures: Designing resilient aggregation algorithms that filter malicious parameter updates while preserving global accuracy (see the trimmed-mean sketch after this list).
- Distributed Fine-Tuning Protection: Exploring specialized defense frameworks that protect the complex internal structures of foundation models during decentralized training.
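
To illustrate the secure-aggregation direction, here is a minimal sketch of pairwise masking in the style of Bonawitz et al. (2017): each pair of clients derives a shared random mask that one adds and the other subtracts, so the server only ever sees masked updates whose sum equals the true sum. The `Client` class, `aggregate` function, and hash-based seeds are hypothetical stand-ins; a real protocol would derive seeds via key agreement, work in a finite field, and handle client dropout.

```python
import numpy as np

DIM = 4  # toy update dimensionality

class Client:
    def __init__(self, cid: int, update: np.ndarray):
        self.cid = cid
        self.update = update

    def masked_update(self, all_ids: list[int], pair_seeds: dict) -> np.ndarray:
        """Add one random mask per peer; masks cancel pairwise in the sum."""
        masked = self.update.copy()
        for other in all_ids:
            if other == self.cid:
                continue
            seed = pair_seeds[frozenset((self.cid, other))]
            mask = np.random.default_rng(seed).standard_normal(DIM)
            # The client with the smaller id adds the mask, the other
            # subtracts it, so every mask appears once with each sign.
            masked += mask if self.cid < other else -mask
        return masked

def aggregate(clients: list["Client"]) -> np.ndarray:
    ids = [c.cid for c in clients]
    # Shared per-pair seeds; in practice derived via key agreement.
    seeds = {frozenset((i, j)): hash((i, j)) % (2**32)
             for i in ids for j in ids if i < j}
    # The server sees only masked updates, but their sum equals the
    # sum of the true updates because the pairwise masks cancel.
    return sum(c.masked_update(ids, seeds) for c in clients)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clients = [Client(i, rng.standard_normal(DIM)) for i in range(3)]
    true_sum = sum(c.update for c in clients)
    print(np.allclose(aggregate(clients), true_sum))  # True
```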
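
For the inference-defense direction, one widely used mitigation is gradient sanitization in the DP-SGD style of Abadi et al. (2016): clip each per-example gradient and add calibrated Gaussian noise, so a released update reveals little about any single training example. `CLIP_NORM` and `NOISE_MULTIPLIER` below are illustrative values, not a tuned privacy budget.

```python
import numpy as np

CLIP_NORM = 1.0         # maximum L2 norm allowed per example gradient
NOISE_MULTIPLIER = 1.1  # noise stddev relative to the clip norm

def sanitize(per_example_grads: np.ndarray,
             rng: np.random.Generator) -> np.ndarray:
    """Clip each row to CLIP_NORM, average, then add Gaussian noise."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, CLIP_NORM / np.maximum(norms, 1e-12))
    mean = clipped.mean(axis=0)
    # Noise on the mean with stddev sigma * C / n matches adding
    # N(0, (sigma * C)^2) noise to the clipped sum before averaging.
    noise = rng.normal(0.0, NOISE_MULTIPLIER * CLIP_NORM / len(per_example_grads),
                       size=mean.shape)
    return mean + noise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grads = rng.standard_normal((32, 8))  # 32 examples, 8 parameters
    print(sanitize(grads, rng).shape)     # (8,)
```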
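
For the Byzantine-robust direction, a simple and well-studied aggregator is the coordinate-wise trimmed mean (Yin et al., 2018): per coordinate, the most extreme values are discarded before averaging, which bounds the influence of any minority of malicious clients. The function name and `TRIM_FRACTION` are illustrative; a production aggregator would also authenticate clients and bound update norms.

```python
import numpy as np

TRIM_FRACTION = 0.2  # fraction of extreme values dropped at each end

def trimmed_mean(updates: np.ndarray) -> np.ndarray:
    """Per coordinate, drop the k largest and k smallest values, then average.

    updates: (num_clients, num_params) array of local model updates.
    """
    n = updates.shape[0]
    k = int(n * TRIM_FRACTION)
    sorted_updates = np.sort(updates, axis=0)  # sort each coordinate
    kept = sorted_updates[k:n - k]             # discard k extremes per end
    return kept.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = rng.normal(0.0, 0.1, size=(8, 5))  # 8 honest clients
    poisoned = np.full((2, 5), 100.0)           # 2 malicious clients
    updates = np.vstack([honest, poisoned])
    print(trimmed_mean(updates))  # close to the honest mean, not 100
```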
