Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Publications
FCDS: Fusing Constituency and Dependency Syntax into Document-Level Relation Extraction
Published in LREC-COLING 2024
We introduce FCDS, a document-level relation extraction model that fuses constituency and dependency syntax. By combining sentence-level aggregation from constituency trees with dependency-based graph reasoning, FCDS better captures cross-sentence relations between entities. Experiments across multiple domains show significant performance gains, highlighting the effectiveness of integrating both syntactic views.
Recommended citation: Xudong Zhu, Zhao Kang, and Bei Hui. 2024. FCDS: Fusing Constituency and Dependency Syntax into Document-Level Relation Extraction. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7141–7152, Torino, Italia. ELRA and ICCL.
Download Paper
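To make the two-view idea concrete, here is a minimal PyTorch sketch (not the paper's code): token embeddings are mean-pooled within each sentence as a stand-in for constituency-tree aggregation, one normalized message-passing step runs over a dependency adjacency matrix, and the two views are fused. Every module name, tensor shape, and the toy inputs below are illustrative assumptions.

```python
# Illustrative sketch, assuming a simplified two-view fusion; not the FCDS model.
import torch
import torch.nn as nn

class TwoViewFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gcn = nn.Linear(dim, dim)       # dependency-view propagation
        self.fuse = nn.Linear(2 * dim, dim)  # merge the two syntactic views

    def forward(self, tokens, sent_ids, dep_adj):
        # tokens: (n, dim) token embeddings; sent_ids: (n,) sentence index per
        # token (a stand-in for constituency-tree sentence grouping);
        # dep_adj: (n, n) dependency adjacency matrix with self-loops.
        n_sent = int(sent_ids.max()) + 1
        # "Constituency" view: mean-pool tokens within each sentence, then
        # broadcast each sentence summary back to its tokens.
        sent_sum = torch.zeros(n_sent, tokens.size(1)).index_add_(0, sent_ids, tokens)
        counts = torch.bincount(sent_ids, minlength=n_sent).clamp(min=1).unsqueeze(1)
        sent_view = (sent_sum / counts)[sent_ids]
        # Dependency view: one degree-normalized message-passing step.
        deg = dep_adj.sum(dim=1, keepdim=True).clamp(min=1)
        dep_view = torch.relu(self.gcn((dep_adj / deg) @ tokens))
        return self.fuse(torch.cat([sent_view, dep_view], dim=-1))

# Toy usage: 5 tokens split across 2 sentences, one dependency edge.
m = TwoViewFusion(dim=8)
toks = torch.randn(5, 8)
sids = torch.tensor([0, 0, 0, 1, 1])
adj = torch.eye(5)
adj[0, 1] = adj[1, 0] = 1.0
print(m(toks, sids, adj).shape)  # torch.Size([5, 8])
```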
From Emergence to Control: Probing and Modulating Self-Reflection in Language Models
Published as an arXiv preprint, 2025
We study the emergence and control of self-reflection in large language models. Our probing method reveals that pretrained models already contain a latent capacity for reflection, which can be amplified without additional training. By identifying and manipulating a “self-reflection vector” in activation space, we achieve bidirectional control over reflective behavior, improving reasoning accuracy or reducing computation as needed. This work deepens understanding of self-reflection and demonstrates how model internals can enable precise behavioral modulation.
Recommended citation: Xudong Zhu, J. Jiang, M. M. Khalili, et al. 2025. From Emergence to Control: Probing and Modulating Self-Reflection in Language Models. arXiv preprint arXiv:2506.12217.
Download Paper
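For intuition, here is a minimal sketch of the general activation-steering recipe the abstract suggests (not the paper's implementation): derive a direction as the difference of mean hidden states between two behavior classes, then add a scaled copy of it to a layer's output at inference time. The toy "model," the random stand-in activations, and the scale are all assumptions.

```python
# Illustrative sketch, assuming difference-of-means steering; not the paper's code.
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(16, 16)  # stand-in for one transformer block's output

# 1) Direction from contrastive activations (random tensors stand in for
#    hidden states of reflective vs. non-reflective runs).
acts_reflect = torch.randn(32, 16) + 0.5
acts_plain = torch.randn(32, 16)
direction = acts_reflect.mean(0) - acts_plain.mean(0)
direction = direction / direction.norm()

# 2) Bidirectional control: positive alpha amplifies the behavior,
#    negative alpha suppresses it.
def steer(alpha: float):
    def hook(module, inputs, output):
        return output + alpha * direction
    return hook

x = torch.randn(4, 16)
handle = layer.register_forward_hook(steer(+2.0))  # amplify reflection
y_up = layer(x)
handle.remove()
handle = layer.register_forward_hook(steer(-2.0))  # suppress reflection
y_down = layer(x)
handle.remove()
print((y_up - y_down).norm())  # rows differ by exactly 4.0 * direction
```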
Alleviating subgraph-induced oversmoothing in link prediction via coarse graining
Published in Neurocomputing, 2025
We address the oversmoothing problem in link prediction caused by repetitive high-degree nodes across subgraphs. Our method introduces a coarse-graining strategy that merges strongly correlated nodes, yielding more diverse receptive fields and smaller subgraphs. This not only mitigates oversmoothing but also improves the scalability and efficiency of GNN-based link prediction.
Recommended citation: Xudong Zhu, D. Hao, Z. Gao, et al. 2025. Alleviating subgraph-induced oversmoothing in link prediction via coarse graining. Neurocomputing: 130666.
Download Paper
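As a rough illustration of coarse graining (not the paper's algorithm), one simple way to merge strongly correlated nodes is to greedily collapse pairs whose neighborhoods overlap above a Jaccard threshold, shrinking the graph a GNN must process. The threshold and the greedy pairing order are assumptions.

```python
# Illustrative sketch, assuming Jaccard-similarity merging; not the paper's method.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def coarse_grain(adj: dict[int, set[int]], thr: float = 0.8) -> dict[int, int]:
    """Greedily map each node to a super-node id."""
    group = {v: v for v in adj}
    for u, v in combinations(sorted(adj), 2):
        # Only merge nodes that are still their own super-node, comparing
        # neighborhoods with the pair itself excluded.
        if group[u] == u and group[v] == v and jaccard(adj[u] - {v}, adj[v] - {u}) >= thr:
            group[v] = u  # merge v into u's super-node
    return group

# Toy graph: nodes 1 and 2 share the same neighbors, the kind of structural
# redundancy that recurs across high-degree subgraphs.
adj = {0: {1, 2, 3, 4}, 1: {0, 4}, 2: {0, 4}, 3: {0}, 4: {0, 1, 2}}
print(coarse_grain(adj))  # {0: 0, 1: 1, 2: 1, 3: 3, 4: 4} -> 1 and 2 collapse
```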