Published: Feb 16, 2024
We are proud to share that our group will be at ICLR 2024 with 1 paper in the Tiny Track! Get in touch with us to learn more about interpretable long-form question answering.
Revelio: Interpretable Long-Form Question Answering
by G. Moro, L. Ragazzi, L. Valgimigli, F. Vincenzi, and D. Freddi
The black-box architecture of pretrained language models (PLMs) hinders the interpretability of lengthy responses in long-form question answering (LFQA). Prior studies use knowledge graphs (KGs) to enhance output transparency, but they mostly focus on non-generative or short-form QA. We present Revelio, a new layer that maps the PLM's inner workings onto a KG walk. Tests on two LFQA datasets show that Revelio supports PLM-generated answers with reasoning paths presented as rationales, while retaining performance and inference time comparable to those of the vanilla counterparts.
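To give a flavor of what a KG-walk rationale looks like, here is a minimal toy sketch: a breadth-first search over a tiny hand-written knowledge graph that links a question entity to an answer entity, with the traversed triples surfaced as a human-readable reasoning path. The graph, entities, and relations are illustrative placeholders and do not come from the paper, which derives the walk from the PLM's inner workings rather than plain search.

```python
from collections import deque

# Toy knowledge graph as adjacency lists of (relation, target) edges.
# Entities and relations are hypothetical, chosen only for illustration.
KG = {
    "aspirin": [("inhibits", "COX-1"), ("treats", "headache")],
    "COX-1": [("produces", "prostaglandins")],
    "prostaglandins": [("mediate", "inflammation")],
}

def kg_walk(graph, start, goal):
    """Breadth-first search for a relation path linking two entities.

    Returns the walk as a list of (head, relation, tail) triples,
    or None if the goal entity is unreachable.
    """
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for relation, nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [(node, relation, nxt)]))
    return None

# Surface the walk as a textual rationale accompanying an answer.
rationale = kg_walk(KG, "aspirin", "inflammation")
print(" -> ".join(f"{h} [{r}] {t}" for h, r, t in rationale))
# prints: aspirin [inhibits] COX-1 -> COX-1 [produces] prostaglandins -> prostaglandins [mediate] inflammation
```

The point of such a rationale is that each hop is a verifiable KG fact, so a reader can audit the chain supporting a generated answer instead of trusting an opaque model output.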