Explaining deep convolutional models by measuring the influence of interpretable features in image classification

HIGHLIGHTS

  • who: Researchers from the Department of Control and Computer Engineering, Politecnico di Torino, Turin, Italy have published the research work: Explaining deep convolutional models by measuring the influence of interpretable features in image classification, in the Journal: (JOURNAL)
  • what: This work proposes EBAnO, an innovative explanation framework able to analyze the decision-making process of DCNNs in image classification by providing prediction-local and class-based model-wise explanations through the unsupervised mining of the knowledge contained in multiple layers. In the prediction process, the model outcomes are considered as a collaborative contribution . . . A hedged sketch of this perturbation-based idea is given below.
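The general idea behind this kind of perturbation-based explanation can be illustrated with a short sketch: activations of an inner convolutional layer are clustered without supervision into pixel regions (the interpretable features), each region is perturbed, and the resulting drop in the predicted class probability is read as that feature's influence. This is not the authors' code: the Keras workflow, the example layer name, the blurring perturbation, and the simple probability-difference score are illustrative assumptions standing in for the paper's own normalized influence indices.

```python
# Hedged sketch of perturbation-based feature influence for a Keras CNN.
# Assumes `image` is a single preprocessed float array of shape (H, W, 3)
# and `model` outputs class probabilities.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans
from scipy.ndimage import gaussian_filter

def feature_influences(model, image, layer_name="block5_conv3", k=4):
    """Return one influence score per unsupervised interpretable feature."""
    probs = model.predict(image[None, ...], verbose=0)[0]
    class_idx = int(np.argmax(probs))
    p_orig = float(probs[class_idx])

    # 1. Extract the activations of an inner convolutional layer.
    extractor = tf.keras.Model(model.input, model.get_layer(layer_name).output)
    fmap = extractor.predict(image[None, ...], verbose=0)[0]        # (h, w, c)

    # 2. Cluster spatial positions into k regions (the interpretable features).
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(
        fmap.reshape(-1, fmap.shape[-1]))
    label_map = labels.reshape(fmap.shape[:2])

    influences = []
    for f in range(k):
        # Upsample this feature's mask to the input resolution.
        mask = tf.image.resize(
            (label_map == f).astype("float32")[..., None],
            image.shape[:2], method="nearest").numpy()               # (H, W, 1)

        # 3. Perturb (blur) only the pixels belonging to this feature.
        blurred = gaussian_filter(image, sigma=(5, 5, 0))
        perturbed = image * (1 - mask) + blurred * mask
        p_pert = float(model.predict(perturbed[None, ...],
                                     verbose=0)[0][class_idx])

        # 4. Influence: drop in class probability when the feature is removed
        #    (a simplified stand-in for the paper's normalized indices).
        influences.append(p_orig - p_pert)
    return influences
```

A large positive score marks a region whose removal hurts the prediction (an influential feature), while a score near zero marks a region the model largely ignores for that class.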

     
