HIGHLIGHTS
- who: Kush Attal from the (UNIVERSITY) has published the article: A dataset for plain language adaptation of biomedical abstracts, in the Journal: (JOURNAL)
- what: The authors report BLEU-4, ROUGE-1, ROUGE-2, ROUGE-L (which measures the longest common subsequence shared between a candidate and a reference), BERTScore-F1, and SARI. Rather than being trained on a single task, T5 is pre-trained on a vast amount of data across many unsupervised and supervised objectives, including token and span masking, classification, reading comprehension, translation, and summarization. While training occurs with the training dataset . . .
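As a rough illustration of the ROUGE-L idea mentioned above, the sketch below computes a token-level longest-common-subsequence F1 between a candidate and a reference. This is a minimal, assumed implementation for intuition only, not the evaluation code used by the authors (published toolkits add stemming, multi-reference handling, and other details).

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists (dynamic programming)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def rouge_l_f1(candidate, reference):
    """Simplified ROUGE-L F1 over whitespace tokens: harmonic mean of LCS precision and recall."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

# Example: five of six tokens form a common subsequence, so F1 = 5/6.
score = rouge_l_f1("the cat sat on the mat", "the cat lay on the mat")
```

Because ROUGE-L uses a subsequence rather than contiguous n-grams, it rewards candidates that preserve the reference's word order even when intervening words differ.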