Multimodal detection of hateful memes by applying a vision-language pre-training model

HIGHLIGHTS

  • Who: Yuyang Chen and Feng Pan, of the Medical College, Huazhong University of Science and Technology, Wuhan, China, published the article "Multimodal detection of hateful memes by applying a vision-language pre-training model" in PLOS ONE on 11/04/2022.
  • What: The study demonstrates that vision-language pre-training models (VL-PTMs) augmented with anchor points can improve deep learning-based detection of hateful memes by enforcing a stronger alignment between the text caption and the visual information. After training and testing the model, the authors compare its performance against . . .
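The anchor-point idea described above can be illustrated with a minimal, hypothetical sketch: caption features, image features, and an extra anchor-point feature vector are fused into one joint representation before a binary hateful/not-hateful classifier head. All function names, feature values, and weights here are illustrative assumptions, not the authors' implementation (which builds on a full VL-PTM).

```python
import math

def fuse_features(text_vec, image_vec, anchor_vec):
    """Concatenate the three modality vectors into one joint representation.

    In the paper's setting, the anchor vector stands in for extra grounding
    signals (e.g., embeddings of detected entities) that tie the caption to
    the image; here it is just a toy list of floats.
    """
    return list(text_vec) + list(image_vec) + list(anchor_vec)

def linear_score(features, weights, bias=0.0):
    """Toy classifier head: dot product plus bias."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def sigmoid(x):
    """Map a raw score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical pre-extracted features; in practice these would come from a
# VL-PTM encoder, not hand-written numbers.
text_vec   = [0.2, 0.7]   # caption embedding
image_vec  = [0.5, 0.1]   # visual embedding
anchor_vec = [0.9]        # anchor-point embedding (extra alignment signal)

joint = fuse_features(text_vec, image_vec, anchor_vec)
prob_hateful = sigmoid(linear_score(joint, [0.4, -0.3, 0.8, 0.1, 0.6]))
print(round(prob_hateful, 3))
```

The sketch only shows the fusion-then-classify shape of the pipeline; the real contribution lies in how the VL-PTM learns the aligned representations that these toy vectors stand in for.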

     
