Did the Models Understand Documents? Benchmarking Models for Language Understanding in Document-Level Relation Extraction
Abstract
We take the first step toward understanding model decision rules in document-level relation extraction (DocRE). Through annotations and RE-specific attacks, we reveal that SOTA models rely on decision rules that differ from those of humans, which severely damages their robustness. We introduce MAP to evaluate models' understanding and reasoning capabilities.
Type
Publication
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023)

Authors
Haotian Chen
(he/him)
Assistant Researcher
Haotian Chen is an Assistant Researcher at the School of Artificial Intelligence, Shanghai Jiao Tong University, working with Prof. Junchi Yan at RethinkLab. His research goal is to understand and develop AI that automates tasks requiring extensive time, effort, and creative thinking. He works on automating data-driven scientific research, both to ease the burden on humans and to transform human productivity. His research focuses on Autonomous Agents, Large Language Models, and AI4Research. He received his PhD in Data Science from Fudan University and completed postdoctoral research at Tsinghua University (THUNLP), where he worked with Prof. Zhiyuan Liu and Prof. Maosong Sun. He was also a research intern at Microsoft Research Asia, where the RD-Agent project he co-developed was featured in the Microsoft Build 2025 Global Keynote.