From GANs in Action by Jakub Langr and Vladimir Bok
In this article, we’ll:
- Explore innovative practical applications of GANs that use multiple techniques.
- Look at applications in medicine, where GANs can be used to augment a small dataset to improve classification accuracy.
As captivating as generating handwritten digits and turning apples into oranges may be, GANs can be used for a lot more. In this article, we’ll explore some practical uses of GANs. One of the main goals of this article is to give you the knowledge and tools necessary not only to understand what has been accomplished with GANs to date, but also to empower you to find new applications of your own choosing. There’s no better place to start that journey than with a look at several successful examples.
Progressively Growing GANs can be used to create not only photo-realistic renditions of human faces, but also samples of arguably greater practical importance: medical mammograms. The CycleGAN can be used to create realistic simulated virtual environments by translating clips from a video game into movie-like scenes, which can then be used to train self-driving cars.
In this two-part article, we’ll review GAN applications in greater detail. We’ll walk through what motivated these applications, what makes them uniquely suited to benefit from the advances made possible by GANs, and how their creators went about implementing them. Specifically, part 1 looks at GAN applications in medicine, and part 2 looks at GANs in fashion. We chose these two fields based on the following criteria:
- They showcase not only academic but also, and primarily, business value of GANs. They represent how the academic advances achieved by GAN researchers can be applied to solve real-world problems.
- They use GAN models which are understandable with the tools and techniques discussed in this article. Instead of introducing new concepts, we’ll look at how the models we implemented can be applied to uses other than MNIST.
- They’re understandable without the need for specialized domain expertise. For example, GAN applications in chemistry and physics tend to be hard to comprehend for anyone without a strong background in the given field.
In addition, the chosen fields and the examples we selected serve to illustrate the versatility of GANs. In medicine, we’ll see how GANs can be used in situations with limited data. In fashion, we’ll look at the other extreme and see how GANs can be useful in scenarios where extensive datasets are available. Even if you have no interest in medicine or fashion, the tools and approaches we’ll learn about in this article are applicable to countless other use-cases.
Sadly, as is all too often the case, the practical applications we’ll review are impossible to reproduce in a coding tutorial due to the proprietary or otherwise hard-to-obtain nature of the training data. Instead of a full coding tutorial, we can only provide a detailed explanation of the GAN models and the implementation choices behind them. Even so, by the end of this article, you should be fully equipped to implement any of the applications covered here by making only small adjustments to the GAN models and feeding them a dataset for the given use-case or one similar to it. Without further ado, let’s dive in.
GANs in Medicine
In this section, we’ll look at applications of GANs in medicine. Namely, how to use GAN-produced synthetic data to enlarge a training dataset in order to improve diagnostic accuracy.
Using GANs to Improve Diagnostic Accuracy
Machine learning applications in medicine face a range of challenges that make the field well suited to benefit from GANs. Perhaps most importantly, it’s often challenging to obtain training datasets large enough for supervised machine learning algorithms because of the difficulties involved in collecting medical data. It’s often prohibitively expensive and impractical to obtain samples of medical conditions. Unlike datasets of handwritten letters for OCR or footage of roads for self-driving cars, which anyone can obtain, examples of medical conditions are harder to come by, and they often require specialized equipment to collect, not to mention important considerations of patient privacy that limit how medical data can be collected and used. In addition to the difficulties in collecting medical datasets, it’s also challenging to properly label this data, a process that often requires people with expert knowledge of the given condition to provide annotations. As a result, many medical applications have been unable to benefit from the advances in deep learning and AI.
Many techniques have been developed to help address the issue of small labeled datasets. For instance, GANs can be used to enhance the performance of classification algorithms in a semi-supervised setting. However, this addresses only half of the problem medical researchers face. Semi-supervised learning helps in situations in which we have a large dataset but only a small portion of it is labeled. Indeed, in many medical applications, the fact that only a small portion of the data is labeled is only part of the problem: this small portion is often the only data we have! We don’t have the luxury of thousands of other samples from the same domain just waiting to be labeled or used in a semi-supervised setting.
Medical researchers strive to overcome the challenge of insufficient datasets by using data augmentation techniques. For images, these include small tweaks and transformations such as scaling (zooming in and out), translations (moving left/right and up/down), and rotations. These strategies allow a single example to be used to create many others, which expands the dataset size. Figure 1 shows examples of data augmentations commonly used in computer vision.
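To make these transformations concrete, here is a minimal NumPy sketch of classic data augmentation using flips, 90-degree rotations, and small translations. It illustrates the general technique only; it is not the specific pipeline used by any of the researchers discussed here, and real pipelines typically add scaling and arbitrary-angle rotations via libraries such as SciPy or Keras.

```python
import numpy as np

def augment(image, max_shift=2):
    """Generate simple augmented variants of one grayscale image.

    Returns the original plus flipped, rotated, and shifted copies,
    so a single labeled example becomes several training examples.
    """
    variants = [image]
    variants.append(np.fliplr(image))           # horizontal flip
    variants.append(np.flipud(image))           # vertical flip
    for k in (1, 2, 3):                         # 90/180/270 degree rotations
        variants.append(np.rot90(image, k))
    for dx in (-max_shift, max_shift):          # left/right translations
        variants.append(np.roll(image, dx, axis=1))
    for dy in (-max_shift, max_shift):          # up/down translations
        variants.append(np.roll(image, dy, axis=0))
    return variants

# One 8x8 example becomes 10 images (the original plus 9 variants).
sample = np.arange(64, dtype=float).reshape(8, 8)
print(len(augment(sample)))  # 10
```

Note that `np.roll` wraps pixels around the image edge; production pipelines usually pad with zeros or reflect the border instead.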
As you may imagine, standard data augmentation has many limitations. For one, small modifications yield examples that don’t diverge far from the original image. As a result, the additional examples don’t add much variety to help the algorithm learn to generalize. In the case of handwritten digits, for example, we’d want to see the number “6” rendered in different writing styles, not only permutations of the same underlying image; in the case of medical diagnostics, we’d want different examples of the same underlying pathology. Adding synthetic examples, such as those produced by a GAN, has the potential to enrich the dataset beyond what traditional augmentation techniques can achieve. This is precisely what the Israeli researchers Maayan Frid-Adar, Eyal Klang, Michal Amitai, Jacob Goldberger, and Hayit Greenspan set out to investigate.
Encouraged by GANs’ ability to synthesize high-quality images in virtually any domain, Frid-Adar and his colleagues decided to explore the use of GANs for medical data augmentation. They chose to focus on improving the classification of liver lesions. One of their primary motivations for focusing on the liver is that this organ is one of the three most common sites for metastatic cancer, with over 745,000 deaths caused by liver cancer in 2012 alone. Accordingly, tools and machine learning models that help doctors diagnose at-risk patients have the potential to save lives and improve outcomes for countless patients.
Frid-Adar and his team found themselves in a catch-22: their goal was to train a GAN to augment a small dataset, yet GANs need a lot of data to train. They wanted to use a GAN to create a large dataset, but they needed a large dataset to train the GAN in the first place. Their solution was ingenious. First, they used standard data augmentation techniques to create a larger dataset. Second, they used this dataset to train a GAN to create synthetic examples. Third, they used the augmented dataset from step 1 along with the GAN-produced synthetic examples from step 2 to train a liver lesion classifier.
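The three steps above can be sketched as the following skeleton. Every function body is a placeholder of our own devising: in the actual work, step 2 would train a GAN on the augmented data and sample from it, and step 3 would train a convolutional lesion classifier. The lesion labels below are purely illustrative.

```python
# Schematic of the three-step pipeline: classic augmentation, then
# GAN-based synthesis, then classifier training on the combined data.

def classic_augmentation(dataset):
    # Placeholder: each real example yields several transformed copies
    # (flips, rotations, translations), here simply three per example.
    return [(x, label) for x, label in dataset for _ in range(3)]

def train_gan_and_sample(dataset, n_synthetic):
    # Placeholder: a GAN trained on `dataset` would generate brand-new
    # samples here; we fake it by tagging existing labels as synthetic.
    return [("synthetic", label) for _, label in dataset[:n_synthetic]]

def train_classifier(dataset):
    # Placeholder: train the liver lesion classifier on the combined set.
    return {"n_training_examples": len(dataset)}

real_data = [("scan", "lesion_type_a"),
             ("scan", "lesion_type_b"),
             ("scan", "lesion_type_c")]

augmented = classic_augmentation(real_data)           # step 1
synthetic = train_gan_and_sample(augmented, 5)        # step 2
classifier = train_classifier(augmented + synthetic)  # step 3
print(classifier["n_training_examples"])  # 14
```

The key design point is the ordering: the GAN never sees the raw three-example dataset directly; it trains on the classically augmented data, which is what makes the bootstrapping work.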
The GAN model the researchers used was a variation on the Deep Convolutional GAN (DCGAN). Attesting to the applicability of GANs across a wide array of datasets and scenarios, Frid-Adar et al. had to make only minor tweaks and customizations to make the DCGAN work for their use-case. As Figure 2 shows, the only parts of the model that needed adjustment were the dimensions of the hidden layers and the dimensions of the output from the Generator / input to the Discriminator network. Instead of 28×28×1 images like those in the MNIST dataset, this GAN deals with 64×64×1 images. As noted in their paper, Frid-Adar et al. also used 5×5 convolutional kernels, but this, too, is only a small change to the network hyperparameters. With the exception of the image size, which is dictated by the training data, all these adjustments were in all likelihood determined by trial and error: the researchers kept tweaking the parameters until the model produced satisfactory images.
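To see how a DCGAN-style Generator arrives at a 64×64×1 output, here is a small dimension-arithmetic sketch. The layer layout (a 4×4 projection followed by four stride-2 transposed convolutions with halving channel counts) is our assumption for illustration, not the paper's exact architecture.

```python
def transposed_conv_size(size, stride=2):
    # With 'same' padding, a transposed convolution multiplies the
    # spatial size by its stride, regardless of kernel size (so the
    # 5x5 kernels affect receptive fields, not output dimensions).
    return size * stride

# Assumed layout: project the noise vector to a 4x4x512 feature map,
# then apply four stride-2 transposed convolutions, ending in 1 channel.
shape = (4, 4, 512)
for channels in (256, 128, 64, 1):
    shape = (transposed_conv_size(shape[0]),
             transposed_conv_size(shape[1]),
             channels)
print(shape)  # (64, 64, 1)
```

Reaching MNIST's 28×28×1 instead only requires changing the projection size and the number of upsampling layers, which is why the same architecture transfers between use-cases so easily.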
Before we review how well the approach devised by Frid-Adar and his team worked, let’s pause for a moment and appreciate how far we’ve come.
Compared to the baseline (using standard data augmentation only), Frid-Adar and his team achieved an over 7% improvement in classification accuracy. Their results are summarized in Figure 3, which shows the classification accuracy (y-axis) as the number of training examples increases (x-axis). The red line depicts classification performance for classic data augmentation. The performance improves as new augmented training examples are added, but the improvement plateaus around an accuracy of 80%, beyond which additional examples fail to yield further gains.
The blue line shows the additional increase in accuracy achieved by augmenting the dataset using GAN-produced synthetic examples. Starting from the point beyond which additional classically augmented examples stopped improving accuracy, Frid-Adar et al. added synthetic data produced by the GAN. The classification performance improved from around 80% to over 85%, demonstrating the usefulness of GANs.
Improved classification of liver lesions is only one of many data-constrained applications in medicine that can benefit from data augmentation through GAN-produced synthetic examples. For example, a team of British researchers led by Christopher Bowles from Imperial College London harnessed GANs (in particular, Progressively Growing GANs) to boost performance on brain segmentation tasks. Crucially, improved performance can unlock a model’s usability in practice, particularly in fields like medicine where accuracy may mean the difference between life and death.
- In this article, we saw firsthand the versatility of GANs and demonstrated (a) that GANs can be harnessed for a wide array of non-academic applications, and (b) how easily GAN variants can be repurposed for use-cases beyond MNIST.
- In medicine, we saw how GAN-produced synthetic examples improved classification accuracy beyond what is possible with standard dataset augmentation strategies.
- In part 2, we’ll see how GANs can be used in fashion to create new items and alter existing items to better match someone’s personal style, by generating images that maximize a preference score provided by a recommendation algorithm.
Let’s now move on to applications of GANs in a field with much lower stakes and a whole different set of considerations and challenges: fashion, coming up in part 2.
This article was originally posted here: https://freecontent.manning.com/practical-applications-of-gans-part-1/
J. Ferlay et al., “Cancer Incidence and Mortality Worldwide: Sources, Methods and Major Patterns in GLOBOCAN 2012,” International Journal of Cancer, vol. 136, no. 5, 2015. Quoted in Frid-Adar et al. (arXiv:1801.02385).
Christopher Bowles, Liang Chen, Ricardo Guerrero, Paul Bentley, Roger Gunn, Alexander Hammers, David Alexander Dickie, Maria Valdés Hernández, Joanna Wardlaw, “GAN Augmentation: Augmenting Training Data Using Generative Adversarial Networks,” 2018; arXiv:1810.10863.