Sentence-level complexity in Russian: an evaluation of BERT and graph neural networks

HIGHLIGHTS

SUMMARY

    A common issue with deep neural networks is the interpretability of their results. This work evaluates BERT and graph neural networks for sentence-level complexity prediction in Russian. The evaluation covers Russian BERT, a Transformer, an SVM with features derived from sentence embeddings, and a graph neural network. Findings: pre-trained language models outperform graph neural networks that incorporate the syntactic dependency tree of a sentence; the graph neural networks in turn perform better than the Transformer and SVM classifiers that rely on sentence embeddings; and predictions of the proposed graph neural network architecture can be easily explained.

    Keywords: sentence-level complexity, BERT, graph neural networks, sentence embeddings, text complexity, Russian language

    One of the state-of . . .
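To illustrate the general idea behind a graph neural network that incorporates a sentence's dependency tree, here is a minimal, dependency-free sketch: tokens carry feature vectors, dependency arcs form the edges, each round of message passing averages a token's vector with those of its tree neighbors, and the final sentence representation is a mean-pool over tokens. This is an illustrative toy, not the architecture evaluated in the paper; all function names and numbers are hypothetical.

```python
# Toy sketch of message passing over a dependency tree (illustrative only,
# not the paper's architecture). Features are plain Python lists of floats.

def message_pass(features, edges):
    """One round: average each token's vector with its dependency neighbors."""
    n = len(features)
    dim = len(features[0])
    neighbors = {i: [] for i in range(n)}
    for head, dep in edges:  # treat dependency arcs as undirected
        neighbors[head].append(dep)
        neighbors[dep].append(head)
    out = []
    for i in range(n):
        group = [features[i]] + [features[j] for j in neighbors[i]]
        out.append([sum(v[k] for v in group) / len(group) for k in range(dim)])
    return out

def sentence_vector(features, edges, rounds=2):
    """Run a few message-passing rounds, then mean-pool into one vector."""
    for _ in range(rounds):
        features = message_pass(features, edges)
    n, dim = len(features), len(features[0])
    return [sum(f[k] for f in features) / n for k in range(dim)]

# Hypothetical 3-token sentence with 2-d token features and two dependency arcs.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
arcs = [(0, 1), (1, 2)]  # e.g. token 1 governs tokens 0 and 2
vec = sentence_vector(tokens, arcs)
```

The resulting `vec` could then be fed to a linear classifier for the complexity label; because each token's contribution flows only along dependency arcs, the per-token aggregates can be inspected directly, which is one reason such predictions are easier to explain than those of a large pre-trained model.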

     
