Google AI presents “StylEx”: a new approach for visual explanation of classifiers

Neural networks can perform a wide range of tasks, but how they arrive at their decisions often remains a mystery. Explaining the decision-making process of a neural model could have significant social impact in areas where human oversight is crucial, such as medical image analysis and autonomous driving. These insights could help guide health practitioners and might even lead to new scientific discoveries.

For the visual explanation of classifiers, there are approaches such as attention maps, which show which parts of an image influence the classification. However, they cannot explain how the attributes within those regions affect the classification outcome. Other methods smoothly transform the image from one class to another, but they fail to isolate the individual influencing attributes.
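
To make the attention-map idea concrete, here is a minimal sketch of a vanilla gradient saliency map, one common way to highlight the image regions that influence a classification. The PyTorch model and function names are illustrative assumptions, not part of StylEx:

```python
# Minimal sketch: a vanilla gradient saliency map, one common way to see
# which pixels influence a classifier's decision. Assumes PyTorch and a
# pretrained torchvision classifier; not part of StylEx itself.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def saliency_map(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a per-pixel saliency map for `target_class`.

    image: normalized tensor of shape (1, 3, H, W).
    """
    image = image.clone().requires_grad_(True)
    logits = model(image)
    # Backpropagate the target-class logit to the input pixels.
    logits[0, target_class].backward()
    # Aggregate gradient magnitude over color channels -> (H, W) heatmap.
    return image.grad.abs().max(dim=1).values.squeeze(0)
```

Overlaying such a heatmap on the image highlights the regions that most affect the classification, but not which attribute within those regions (color, shape, texture) drives the decision, which is exactly the gap StylEx targets.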

Google AI researchers have unveiled a new technique for visually explaining classifiers, called StylEx. StylEx automatically finds and visualizes disentangled attributes that affect a classifier. This makes it possible to study the impact of individual attributes by modifying them separately: changing one attribute does not affect the others, which is a major advantage of this approach. StylEx can be applied to a wide range of domains, including animals, plants, faces, and retinal images. According to the research, StylEx finds attributes that align well with semantic properties and generates meaningful image-specific explanations.

How does StylEx work, given a classifier and an input image?

To generate high-quality images, the StyleGAN2 architecture is used. The method proceeds in two phases:

Phase 1: StylEx training

“StyleSpace” is the disentangled latent space of StyleGAN2, which stores the individual semantically meaningful attributes of the images in the training dataset. However, standard StyleGAN training does not depend on the classifier, so the StyleSpace may not capture the attributes that matter for the classifier’s decisions. To address this, the researchers train a StyleGAN-like generator that also has to satisfy the classifier, encouraging the StyleSpace to accommodate classifier-specific attributes. This is accomplished by adding two additional components to the StyleGAN generator:

The first is an encoder, trained together with the GAN using a reconstruction loss, which forces the generated output to be visually similar to the input image. This also allows the generator to be applied to any given image as input. However, visual similarity alone is not sufficient, since the reconstruction may miss the subtle visual details that are crucial for a particular classifier. The second is therefore a classification-loss term added to the StyleGAN training, which forces the classifier probability of the generated image to match the classifier probability of the input image. As a result, the generated image preserves the subtle visual details that matter to the classifier. A sketch of these two loss terms follows below.
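
Here is a hedged sketch of the Phase 1 training objective described above: an encoder/generator pair trained with a reconstruction loss plus a classification-loss term. The names (`encoder`, `generator`, `classifier`), the L1 form of the reconstruction loss, and the KL form of the classification loss are illustrative assumptions, not the exact StylEx code; the adversarial GAN loss is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def stylex_losses(encoder, generator, classifier, x, lambda_cls=0.1):
    """Compute reconstruction + classification loss for a batch x."""
    w = encoder(x)            # embed the image into the latent StyleSpace
    x_rec = generator(w)      # reconstruct the image from the latent code
    # Reconstruction loss: the output should look like the input.
    rec_loss = F.l1_loss(x_rec, x)
    # Classification loss: the classifier should assign the same
    # probabilities to the reconstruction as to the original, so that
    # classifier-relevant details survive the round trip.
    with torch.no_grad():
        target = F.softmax(classifier(x), dim=-1)
    log_probs = F.log_softmax(classifier(x_rec), dim=-1)
    cls_loss = F.kl_div(log_probs, target, reduction="batchmean")
    return rec_loss + lambda_cls * cls_loss
```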

Phase 2: Extracting the disentangled attributes

After training, the StyleSpace is searched for attributes that have a substantial effect on the classifier. To do this, each StyleSpace coordinate is perturbed and its effect on the classification probability is measured. For a given image, the top attributes are those that maximize the change in classification probability. Repeating this procedure over a large collection of images yields the top-K class-specific attributes, as in the sketch below.
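
The following is a hedged sketch of this Phase 2 attribute search: perturb each StyleSpace coordinate of an image’s style code and keep the top-K coordinates whose change moves the classifier probability the most. The function and tensor names are illustrative assumptions, not the paper’s implementation.

```python
import torch

def top_k_attributes(generator, classifier, style_code, target_class,
                     k=10, delta=3.0):
    """Rank StyleSpace coordinates by their effect on the classifier."""
    base_prob = torch.softmax(
        classifier(generator(style_code)), dim=-1)[0, target_class]
    effects = []
    for i in range(style_code.shape[-1]):
        perturbed = style_code.clone()
        perturbed[..., i] += delta          # nudge a single coordinate
        prob = torch.softmax(
            classifier(generator(perturbed)), dim=-1)[0, target_class]
        effects.append((abs(prob - base_prob).item(), i))
    # Coordinates with the largest probability change are the candidate
    # classifier-specific attributes; aggregating over many images yields
    # the class-level attributes.
    return sorted(effects, reverse=True)[:k]
```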

The method described above works for both binary and multi-class classifiers. As confirmed by human raters, the top attributes found by the Google AI algorithm correspond to coherent semantic concepts across all tested domains.

Looking ahead:

In summary, the researchers’ approach yields meaningful explanations for a given classifier on a specific image or class. In line with Google’s AI Principles, the researchers believe their technique is a promising step towards detecting and mitigating previously undiscovered biases in classifiers and datasets. Moreover, multi-attribute explanations are essential for providing new insights into previously opaque classification processes and for aiding scientific discovery.

Article: https://arxiv.org/pdf/2104.13369.pdf

Project: https://explaining-in-style.github.io/

Github: https://github.com/google/explaining-in-style

Reference: https://ai.googleblog.com/2022/01/introducing-stylex-new-approach-for.html
