Adversarial machine learning is a field of research at the intersection of machine learning and security that studies vulnerabilities of machine learning models which make them susceptible to attacks. Attacks are mounted by carefully crafting a perturbed input that appears benign but fools the model into behaving in unexpected ways.
To date, most work on adversarial attacks and defenses has targeted classification models. However, generative models are susceptible to attacks as well and thus warrant attention. We study several attacks on generative models such as autoencoders and variational autoencoders, discuss the relative effectiveness of these attack methods, and explore some simple defenses against them.
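To make the perturbation idea concrete, the following is a minimal FGSM-style sketch against a toy linear autoencoder. The weights, dimensions, and step size here are all hypothetical, chosen only to illustrate the mechanism: the adversary follows the sign of the input gradient of the reconstruction loss to push the model toward a worse reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear autoencoder (hypothetical weights, for illustration only):
# encoder W1 maps 8 -> 3, decoder W2 maps 3 -> 8.
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 3))

def reconstruct(x):
    """Encode then decode the input."""
    return W2 @ (W1 @ x)

def recon_error(x_in, x_ref):
    """Squared error between the reconstruction of x_in and the clean input x_ref."""
    return np.sum((reconstruct(x_in) - x_ref) ** 2)

x = rng.normal(size=8)

# Gradient of ||W2 W1 x_adv - x||^2 w.r.t. x_adv, evaluated at x_adv = x.
# It is analytic here because the toy model is linear.
grad = 2 * (W2 @ W1).T @ (reconstruct(x) - x)

# FGSM-style step: a small sign perturbation that increases reconstruction error
# while keeping the input visually close to the original.
eps = 0.1
x_adv = x + eps * np.sign(grad)

print(recon_error(x, x), recon_error(x_adv, x))
```

In a real attack the gradient would come from automatic differentiation through a deep encoder and decoder, but the principle is the same: a small, benign-looking perturbation can noticeably degrade or redirect the model's output.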