Daiwei Chen
My research focuses on LLM alignment, including RLHF, pluralistic alignment, and LLM factuality. I am motivated by the growing gap between the impressive capabilities of modern LLMs and their alignment with human values, cultural pluralism, and factual correctness. Current models often produce biased, hallucinated, or culturally narrow responses, which limits their reliability in real-world decision-making and raises safety concerns at scale.



