Swin-MFA: A Multi-Modal Fusion Attention Network Based on Swin-Transformer for Low-Light Image Human Segmentation

HIGHLIGHTS

  • who: Xunpeng Yi et al., Electronic Information School, Wuhan University, Wuhan, China, published the article "Swin-MFA: A Multi-Modal Fusion Attention Network Based on Swin-Transformer for Low-Light Image Human Segmentation" in Sensors 2022, 22.
  • what: In Section 4.1, the authors compare Swin-MFA with various feature-fusion methods, and the experiments show that their feature-fusion attention block outperforms the traditional fusion approaches. In Section 4.3, they compare their method with classic image segmentation methods, such as . . .
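The highlights above refer to a feature-fusion attention block that combines features from multiple modalities. The paper's exact block design is not reproduced here; the following is a minimal, hypothetical sketch of one common approach — channel-wise attention weighting across two modality feature maps — written in plain NumPy. The function name `fuse_features` and the pooling/softmax scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_features(rgb_feat, aux_feat):
    """Attention-weighted fusion of two modality feature maps.

    Hypothetical sketch, NOT the paper's Swin-MFA block: each input is a
    (C, H, W) feature map; a per-channel descriptor is pooled from each
    modality, turned into per-channel modality weights via softmax, and
    used to blend the two maps.
    """
    stacked = np.stack([rgb_feat, aux_feat])        # (2, C, H, W)
    # Global average pooling: one descriptor per modality and channel.
    desc = stacked.mean(axis=(2, 3))                # (2, C)
    # Softmax over the modality axis -> weights summing to 1 per channel.
    weights = softmax(desc, axis=0)                 # (2, C)
    # Weighted sum of the two modality feature maps.
    fused = (weights[..., None, None] * stacked).sum(axis=0)
    return fused                                    # (C, H, W)
```

In a real network the pooled descriptors would typically pass through small learned layers before the softmax; this sketch keeps only the attention-weighting structure the highlights describe.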

     
