Explanation of Siamese Neural Networks for Weakly Supervised Learning

Authors

  • Lev Utkin, Peter the Great Saint Petersburg Polytechnic University (SPbPU), Saint Petersburg, Russia
  • Maxim Kovalev, Peter the Great Saint Petersburg Polytechnic University (SPbPU), Saint Petersburg, Russia
  • Ernest Kasimov, Peter the Great Saint Petersburg Polytechnic University (SPbPU), Saint Petersburg, Russia

DOI:

https://doi.org/10.31577/cai_2020_6_1172

Keywords:

Interpretable model, explainable AI, Siamese neural network, embedding, autoencoder, perturbation technique

Abstract

A new method is proposed for explaining a Siamese neural network (SNN) viewed as a black-box model for weakly supervised learning, under the condition that the output vector of every subnetwork of the SNN is accessible. The main difficulty of the explanation is that the perturbation technique cannot be applied directly to the input instances, because only their semantic similarity or dissimilarity is known. Moreover, there is no "inverse" map between an SNN output vector and the corresponding input instance. Therefore, a special autoencoder is proposed which takes into account the proximity between its hidden representation and the SNN outputs. Its pre-trained decoder, together with the encoder, is used to reconstruct original instances from perturbed SNN output vectors. The important features of an explained instance are determined by averaging the corresponding changes of the reconstructed instances. Numerical experiments with synthetic data and with the well-known MNIST dataset illustrate the proposed method.
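The abstract outlines a three-step pipeline: train an autoencoder whose code is pulled towards the SNN embeddings, perturb the SNN output vector of the explained instance, and decode the perturbed vectors to score feature importance by averaged reconstruction changes. The following PyTorch sketch illustrates that pipeline under stated assumptions; the module names, dimensions, loss weight `lam`, and the callable `snn_embed` (the black-box SNN subnetwork) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        # Encoder maps an instance to a code of the same size as the
        # SNN subnetwork's output embedding (assumed dimensions).
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def train_step(ae, snn_embed, x, optimizer, lam=1.0):
    """One training step: reconstruction loss plus a proximity term
    pulling the autoencoder code towards the SNN output vector."""
    z, x_hat = ae(x)
    with torch.no_grad():
        e = snn_embed(x)  # black-box SNN subnetwork output (assumption)
    loss = (nn.functional.mse_loss(x_hat, x)
            + lam * nn.functional.mse_loss(z, e))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def explain(ae, snn_embed, x, n_samples=100, sigma=0.1):
    """Perturb the SNN output vector of x, decode each perturbed vector,
    and average the changes of the reconstructions per input feature."""
    with torch.no_grad():
        e = snn_embed(x)
        baseline = ae.decoder(e)
        noise = sigma * torch.randn(n_samples, e.shape[-1])
        recon = ae.decoder(e + noise)  # reconstructed perturbed instances
        importance = (recon - baseline).abs().mean(dim=0)
    return importance  # per-feature importance scores

Gaussian perturbation of the embedding and mean absolute reconstruction change are one plausible reading of "averaging the corresponding changes"; the paper itself should be consulted for the exact perturbation scheme and importance estimator.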

Published

2021-05-20

How to Cite

Utkin, L., Kovalev, M., & Kasimov, E. (2021). Explanation of Siamese Neural Networks for Weakly Supervised Learning. COMPUTING AND INFORMATICS, 39(6), 1172–1202. https://doi.org/10.31577/cai_2020_6_1172