Our team is studying the theoretical limits of model robustness by analyzing the high-dimensional geometry of deep learning architectures. The goal is to design provable defenses that harden neural networks against evasion attacks and data poisoning.
- Evasion Attack Mitigation: Investigating robust classification mechanisms that defend against imperceptible input perturbations (see the attack sketch after this list).
- Data Poisoning Resilience: Exploring sanitization algorithms that prevent malicious samples from corrupting learned weights during training (see the filtering sketch after this list).
- Provable Model Robustness: Researching mathematical bounds and certification methods that guarantee neural network stability under adversarial manipulation (see the certification sketch after this list).
- Adversarial Representation Learning: Analyzing feature extraction architectures that map inputs into latent spaces that resist adversarial perturbation while preserving generalization.
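To make the evasion threat model concrete, here is a minimal sketch of the fast gradient sign method (FGSM), which crafts a worst-case perturbation within a small L-infinity budget. The PyTorch interface, the `epsilon` value, and the assumption of inputs scaled to [0, 1] are illustrative choices, not part of our results.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Craft an FGSM evasion example: perturb x by epsilon (L-infinity norm)
    along the sign of the loss gradient to maximally increase the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the loss-increasing direction, then clamp back to the valid
    # input range so the perturbation stays small and the image stays legal.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```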
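As one example of training-set sanitization, the sketch below flags suspected poisons by their distance to the per-class centroid in a feature space (e.g., penultimate-layer activations). The quantile threshold and the centroid heuristic are illustrative assumptions, not a complete poisoning defense.

```python
import numpy as np

def sanitize_by_centroid_distance(features, labels, quantile=0.95):
    """Flag suspected poisons: per class, drop the points whose feature-space
    distance to the class centroid exceeds the given quantile."""
    keep = np.ones(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        cutoff = np.quantile(dists, quantile)
        keep[idx[dists > cutoff]] = False
    return keep  # boolean mask over the training set; True = retain
```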
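For certification, a standard starting point is randomized smoothing, which converts a majority vote over Gaussian-noised copies of an input into a certified L2 radius. The sketch below is a simplified illustration: `sigma`, `n_samples`, and the use of the raw empirical class frequency (rather than a proper confidence lower bound, as in Cohen et al., 2019) are assumptions made for brevity.

```python
import torch
from scipy.stats import norm

@torch.no_grad()
def smoothed_certify(model, x, sigma=0.25, n_samples=1000):
    """Simplified randomized-smoothing certificate: classify Gaussian-noised
    copies of x, take the majority class, and turn its empirical probability
    p into a certified L2 radius sigma * Phi^{-1}(p)."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
    votes = model(noisy).argmax(dim=1)
    counts = torch.bincount(votes)
    top_class = int(counts.argmax())
    # Clamp p away from 1.0 so the inverse Gaussian CDF stays finite.
    p = min(counts[top_class].item() / n_samples, 1.0 - 1e-6)
    radius = sigma * norm.ppf(p) if p > 0.5 else 0.0
    return top_class, radius
```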
