Adapting vs. pre-training language models for historical languages

HIGHLIGHTS

  • who: Enrique Manjavacas, from Leiden University, The Netherlands, has published the research: Adapting vs. Pre-Training Language Models for Historical Languages, in the Journal: (JOURNAL)
  • what: The authors show this on the basis of a variety of downstream tasks, ranging from established tasks such as Part-of-Speech Tagging, Named Entity Recognition, and Word Sense Disambiguation to ad-hoc tasks like Sentence Periodization, which are specifically designed to test historically relevant text processing. The authors investigate which of the two strategies is bound to produce higher-quality vector representations: (i) pre-training from . . . (a sketch contrasting the two strategies follows this list).
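
The comparison at the heart of the study can be made concrete with a short sketch. The following is a minimal illustration (not the authors' code), assuming the Hugging Face Transformers and Datasets libraries: strategy (i) pre-trains a BERT-style masked language model from scratch on a historical corpus, while strategy (ii) adapts an existing present-day checkpoint through continued masked-language-model training on the same corpus. The corpus file, checkpoint name, and hyperparameters below are illustrative placeholders.

    # Minimal sketch (not the authors' code) of the two pre-training strategies,
    # using Hugging Face Transformers; paths, checkpoint and settings are assumptions.
    from datasets import load_dataset
    from transformers import (
        AutoModelForMaskedLM, AutoTokenizer, BertConfig, BertForMaskedLM,
        DataCollatorForLanguageModeling, Trainer, TrainingArguments,
    )

    # Hypothetical corpus of historical text, one passage per line.
    corpus = load_dataset("text", data_files={"train": "historical_corpus.txt"})
    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

    # Strategy (i): pre-train from scratch, starting from randomly initialised weights.
    scratch_model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size))

    # Strategy (ii): adapt a present-day pre-trained model via continued MLM training.
    adapted_model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

    for name, model in [("scratch", scratch_model), ("adapted", adapted_model)]:
        Trainer(
            model=model,
            args=TrainingArguments(output_dir=f"out-{name}", num_train_epochs=1,
                                   per_device_train_batch_size=8),
            train_dataset=train_set,
            data_collator=collator,
        ).train()  # the resulting encoders are then evaluated on the downstream tasks

Either run yields a masked-language-model encoder whose contextual representations can then be compared on the downstream tasks listed above.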

     
