I'm a first-year Ph.D. student at Harvard SEAS. My research focuses on developing robust, trustworthy tools for LLM alignment.

Education

2024–Present

Harvard University

Ph.D. in Engineering Sciences

Advisor: Prof. Flavio Calmon

2020–2024

American University of Beirut

B.S. in Statistics and B.E. in Computer Engineering

Advisor: Prof. Ibrahim Abou Faycal

Experience

Summer 2023

Research Intern, Department of Economics, Harvard University

Advisor: Prof. Elie Tamer

Developed tools for counterfactual estimation in binary games.

Questions I'm Thinking About

Pairwise Preferences and Human Values

Can pairwise preference methods effectively capture subtle and diverse human values?

Reward Models as Auditing Tools

How can reward models serve as transparent and robust AI auditing mechanisms?

Scalable Oversight with Targeted Feedback

How can targeted human feedback guide LLMs toward better alignment?

Alignment in Multi-Agent Systems

How can we maintain aligned behavior among multiple AI agents with conflicting goals?