
About
I am a Ph.D. student in computer science at the University of Massachusetts Amherst, advised by Amir Houmansadr.
My research centers on the privacy and security of AI models and agentic systems. I am particularly interested in understanding and mitigating vulnerabilities in multimodal systems, with recent work examining the reliability of models that process audio, text, and vision inputs. Alongside these efforts, I remain engaged with broader topics in fairness, interpretability, and responsible AI, aiming to develop methods that make AI systems not only powerful but also aligned with societal values.
Prior to my graduate studies, I earned my bachelor’s degree in computer engineering from the Hong Kong University of Science and Technology (HKUST) in 2023, where I completed my Final Year Thesis (FYT), “Adversarial Attacks in Federated Learning,” under the supervision of Jun Zhang. I have also worked with Minhao Cheng on the robustness of language models, specifically on defenses against backdoor attacks.
I am currently a Summer Research Intern at Brave Software, working on the privacy and security of AI agents with Ali Shahin Shamsabadi.
💬 Office Hours
I’m happy to chat and advise on research (or projects), PhD applications, or anything else on your mind. Lately, I’ve been working on the trustworthiness of AI agents and audio-modality safety, but I’m always open to exploring new areas and directions you might bring.
Feel free to send me an email to schedule an office hour!
📣 News
July 11 '25: I gave a talk at the NFM Reading Group led by the Speech Technologies Group at Google DeepMind on our Multilingual and Multi-Accent Jailbreaking of Audio LLMs paper. View [slides]
🎉 July 7 '25: Our Multilingual and Multi-Accent Jailbreaking of Audio LLMs paper has been accepted to COLM (Conference on Language Modeling) 2025!
Jun 16 '25: I will be working as a Summer Research Intern at Brave Software under the supervision of Dr. Ali Shahin Shamsabadi on the privacy and security of AI agents.
🎉 Jun 11 '25: Our Backdooring Bias into Text-to-Image Models paper has been accepted to USENIX Security '25!
🎉 Sep 27 '24: Our OSLO paper has been accepted to NeurIPS '24!
🎉 Dec 22 '23: Our Memory Triggers paper has been accepted to the AAAI '24 PPAI Workshop!
🖨️ Preprint / Publications
Preprint
2025
Multilingual and Multi-Accent Jailbreaking of Audio LLMs. COLM 2025.
Backdooring Bias into Text-to-Image Models. USENIX Security 2025.

2024
OSLO. NeurIPS 2024.

2023
Memory Triggers. PPAI Workshop at AAAI 2024.