Description
The field of eXplainable Artificial Intelligence (XAI) aims to dissect robust “black-box” algorithms such as Convolutional Neural Networks (CNNs), which are known for achieving human-level prediction performance. However, the ability to explain and interpret these prediction algorithms still requires innovation in the understanding of influential and, more importantly, explainable features that directly or indirectly impact predictive performance. A number of existing methods in the literature focus on visualization techniques, but the concepts of explainability and interpretability still lack rigorous definition. Text classification using Recurrent Neural Networks (RNNs) is a fundamental task in Natural Language Processing, and similar problems appear in such sequential models for text documents: they are capable of making good predictions, yet there is little connection between language semantics and the prediction results. In view of these needs, this book investigates the hidden representations learned by deep neural networks in image signal processing and computer vision. In short, it proposes an interaction-based methodology, the Influence score (I-score), to screen out noisy and non-informative variables in images, thereby fostering an environment of explainable and interpretable features that are directly associated with predictivity.
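As a rough illustration of the kind of statistic involved, a minimal sketch is given below, assuming the standard formulation of the I-score from the interaction-based feature-selection literature: a candidate subset of discretized features partitions the samples into cells, and the score aggregates the squared deviations of cell-wise response means from the grand mean, weighted by squared cell counts. The function name `influence_score`, its arguments, and the particular normalization chosen here are illustrative assumptions, not the book's actual code (the normalization varies across papers in this literature).

```python
import numpy as np
from collections import defaultdict

def influence_score(X, y):
    """I-score for a candidate subset of discretized features.

    X : (n, k) integer ndarray; each row's joint pattern of feature
        values defines one partition cell.
    y : (n,) ndarray of responses.

    Returns one common normalization of the influence score,
        I = sum_j n_j^2 * (ybar_j - ybar)^2 / (n * var(y)),
    where n_j and ybar_j are the count and mean response of cell j.
    Larger values suggest a more predictive feature subset.
    """
    n = len(y)
    ybar = y.mean()
    # Group the responses by the joint pattern of the feature values.
    cells = defaultdict(list)
    for row, yi in zip(map(tuple, X), y):
        cells[row].append(yi)
    # Squared-count-weighted squared deviations of cell means.
    score = sum(len(v) ** 2 * (np.mean(v) - ybar) ** 2
                for v in cells.values())
    return score / (n * y.var())

# Toy usage: two binary "pixels", only the first is informative,
# so the informative feature should score noticeably higher.
rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, 200)
x2 = rng.integers(0, 2, 200)
y = x1.astype(float)
print(influence_score(x1.reshape(-1, 1), y))   # high score
print(influence_score(x2.reshape(-1, 1), y))   # near zero
```

Note that the statistic requires no model fitting: it is computed directly from a partition of the samples, which is what makes it usable as a screening device for noisy, non-informative variables.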