Rethinking the Development of Large Language Models from the Causal Perspective: A Legal Text Prediction Case Study
Abstract
While large language models (LLMs) exhibit impressive performance on a wide range of NLP tasks, most of them fail to learn causality from correlation. We propose a causality-aware self-attention mechanism (CASAM) and eight kinds of legal-domain-specific attacks for evaluation. Experimental results demonstrate that CASAM achieves state-of-the-art performance and the strongest robustness on three legal text prediction benchmarks.
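The paper itself defines how CASAM biases attention toward causal signals; as a rough intuition only, the sketch below shows one way a self-attention layer could be made "causality-aware" by adding a learned per-token relevance gate to the attention scores. This is not the paper's implementation: the class name `CausalAwareSelfAttention`, the `causal_gate` module, and the additive log-sigmoid bias are all assumptions made for illustration.

```python
# Illustrative sketch only (assumed design, not the paper's CASAM):
# a self-attention layer whose scores are biased by a learned token-level
# "causal relevance" gate, down-weighting spuriously correlated tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalAwareSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Hypothetical per-token gate scoring how causally relevant a token is.
        self.causal_gate = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        scores = (q @ k.transpose(-2, -1)) / (self.d_head ** 0.5)
        # Additive bias over key positions: tokens the gate deems irrelevant
        # receive a large negative bias and thus less attention mass.
        gate = F.logsigmoid(self.causal_gate(x)).squeeze(-1)  # (b, t)
        scores = scores + gate[:, None, None, :]
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.out(out)

# Usage example
layer = CausalAwareSelfAttention(d_model=768, n_heads=12)
h = torch.randn(2, 16, 768)
print(layer(h).shape)  # torch.Size([2, 16, 768])
```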
Type
Publication
In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI 2024)

Authors
Haotian Chen
(he/him)
Assistant Researcher
Haotian Chen is an Assistant Researcher at the School of Artificial Intelligence, Shanghai Jiao Tong University, working with Prof. Junchi Yan at RethinkLab. His research goal is to understand and develop AI that automates tasks requiring extensive time, effort, and creative thinking. He works on automating data-driven scientific research, aiming both to reduce the burden on human researchers and to transform human productivity. His research focuses on Autonomous Agents, Large Language Models, and AI4Research. He received his PhD in Data Science from Fudan University and completed postdoctoral research at Tsinghua University (THUNLP), where he worked with Prof. Zhiyuan Liu and Prof. Maosong Sun. He was also a research intern at Microsoft Research Asia, where the RD-Agent project he co-developed was featured in the Microsoft Build 2025 Global Keynote.