Machine learning systems must assume that their data, their models, or even the training process itself can be exposed to adversarial attacks. MADLab researchers are tackling the following crucial questions: How can an adversary attack ML systems? Can we reliably detect such attacks? Can we defend against attacks with formal guarantees?
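As a concrete illustration of the first question, the sketch below shows a fast gradient sign method (FGSM) evasion attack, one of the simplest ways an adversary can perturb inputs to a trained classifier. This is a minimal example assuming a generic PyTorch model and an illustrative perturbation budget `epsilon`; it does not describe MADLab's specific threat models or methods.

```python
# Minimal FGSM sketch: perturb an input x so that a trained classifier
# is more likely to misclassify it. Model and epsilon are assumptions.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # then clip back to the valid input range [0, 1].
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Detecting such perturbations and certifying models against them (for example, proving that no perturbation within the epsilon ball can change a prediction) correspond to the second and third questions above.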