Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark

HIGHLIGHTS

  • who: Nouha Dziri and collaborators from (UNIVERSITY) have published the research "Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark" in the journal (JOURNAL).
  • what: The authors develop a benchmark for assessing attribution in knowledge-grounded dialogue systems. Following Rashkin et al. (2021a), they define an attributable response as one connected to textual evidence that supports the entirety of the response (see the illustrative sketch after this list). The aim of the benchmark is to determine to what extent current evaluation metrics fulfill this purpose, and the authors report the performance of automatic metrics on the BEGIN test set …
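Because attribution is framed as a relation between a response and the evidence that should support it, it is closely related to natural language inference (premise = evidence, hypothesis = response). The indented sketch below is a hedged illustration of that idea only, assuming an off-the-shelf Hugging Face NLI model (roberta-large-mnli) and an arbitrary mapping from NLI labels to attribution categories; it is not the benchmark's official evaluation protocol.

    # Minimal sketch (assumption, not the paper's method): score whether evidence
    # supports a dialogue response by treating the pair as an NLI problem.
    # The model name and the label-to-category mapping are illustrative choices.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
    model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
    model.eval()

    def attribution_label(evidence: str, response: str) -> str:
        """Map an NLI prediction over (evidence, response) to a coarse attribution label."""
        inputs = tokenizer(evidence, response, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        pred = logits.argmax(dim=-1).item()
        # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
        return {0: "not attributable", 1: "not fully attributable", 2: "attributable"}[pred]

    # Example: the response is fully supported by the evidence, so we expect "attributable".
    print(attribution_label(
        "The Eiffel Tower is 330 metres tall and located in Paris.",
        "It is about 330 metres tall.",
    ))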
