What it is: A Generative Adversarial Network (GAN) is a neural network that, after being trained to understand the structure of a given data set, can generate new, realistic examples of it. For example, a GAN trained on photographs of human faces can generate realistic photographic portraits of people who don’t exist. These networks are called ‘adversarial’ because their training takes the form of a competition between two neural networks.
What it does: Building a GAN involves training two neural networks, a generator and a discriminator, against each other. In the example of a portrait-making GAN, the generator is trained to produce portraits of imaginary people, while the discriminator is trained to tell the difference between the images its ‘opponent’ generates and real photographs. The two networks then face off repeatedly: the generator tries to fool the discriminator and uses each failure to improve itself. Trained in this fashion, the generator will, in theory, eventually create realistic fakes on demand.
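The adversarial loop described above can be sketched for a toy one-dimensional problem. Everything here is a hypothetical illustration, not a production GAN: the "generator" is just an affine function of noise, the "discriminator" is a logistic classifier, and the "real" data are samples from a Gaussian; actual GANs use deep networks and high-dimensional data such as images.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Real" data: samples from a Gaussian centered at 4
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = a*z + b, fed with noise z ~ N(0, 1)
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c), outputs P(x is real)
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    # Discriminator update: ascend log D(real) + log(1 - D(fake))
    real = sample_real(batch)
    fake = a * rng.normal(0.0, 1.0, batch) + b
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator update: ascend log D(fake), i.e. try to fool D
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
# If the adversarial game behaved, the generator's output mean
# should have drifted from 0 toward the real data's mean of 4
print(round(float(np.mean(samples)), 2))
```

Because the discriminator here can only separate the two distributions by a threshold, the generator mainly learns to match the real data's mean; a real GAN's deep discriminator forces the generator to match far richer structure.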
Why it matters: Though developed only recently, GANs can already do a number of impressive things: they can reconstruct high-resolution photos of faces from blurry ones, generate street schematics from aerial photos of urban environments, and turn pencil sketches into plausible paintings. Positive applications include enhancing the utility of security footage, detecting malware more effectively, and creating powerful artistic tools. Nefarious applications include automatic malware generation, as well as the production of fake images, video, and audio of real people, so-called Deepfakes.
What to do about it: For most people, GANs are a scientific curiosity at this point. However, enterprises whose applications perform functions that GANs might accomplish (image processing, for example) may need to prepare for significant disruption alongside opportunity. Potential applications include:
- Drafting tools allowing architects and designers to generate 3D models quickly from 2D sketches
- Artistic tools based on the random creation of new plausible images
- Enhanced security, using CCTV footage to generate 3-dimensional police sketches of potential perpetrators
- Better and faster image-to-text and speech-to-text applications
- More advanced malware detection
Given that GANs can already generate plausible fake images and video of real people, there are warranted concerns that they could be used to spread new kinds of misinformation and propaganda. For example, opponents of a political figure could use a GAN to generate fake audio of that figure saying outrageous things. There are, however, plans to create counterfeit-spotting GANs; in the near future, it may be necessary to use neural networks to establish the legitimacy of images or video. Several U.S. states are proceeding with legislation designed to stop the spread of such forgeries, and some laws are already in place.
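The counterfeit-spotting idea amounts to repurposing a discriminator-style classifier as a forgery detector. The sketch below is purely hypothetical: real and fake media are stood in for by one-dimensional "feature scores" (illustrative numbers, not real image features), and a simple logistic classifier learns to separate them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in features: "real" media cluster near 0,
# "fake" media cluster near 2 (made-up numbers for the sketch)
real_feats = rng.normal(0.0, 0.5, 500)
fake_feats = rng.normal(2.0, 0.5, 500)

x = np.concatenate([real_feats, fake_feats])
y = np.concatenate([np.zeros(500), np.ones(500)])  # label 1 = fake

# Train a logistic classifier by gradient descent
w, c = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + c)))
    w -= 0.1 * np.mean((p - y) * x)
    c -= 0.1 * np.mean(p - y)

def flag_as_fake(feature):
    """Return True when the detector judges a sample to be a forgery."""
    return 1.0 / (1.0 + np.exp(-(w * feature + c))) > 0.5

print(flag_as_fake(2.1), flag_as_fake(-0.2))
```

In practice, such a detector would be a deep network operating on pixels or audio, and it would face the same arms-race dynamic as the GANs it polices: each improvement in detection invites a counter-improvement in generation.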
Case Study: Deepfake Pornography
The first broadly consequential application of GANs was, unfortunately, Deepfake pornography: fabricated photographs or videos that depict real people in sexual acts. A spate of such videos was met with media and legal backlash. Facebook has since banned malicious Deepfakes, and Google blocks them from appearing in search results. In practice, however, malicious Deepfakes may still circulate for short periods in mainstream channels, or indefinitely elsewhere.
Case Study: Security
As with other machine-learning technologies, GANs may trigger an arms race in security, specifically in malware creation and detection. Security researchers have successfully trained GANs to generate malware that evades a given piece of security software; others have proposed using GANs to build discriminator networks that supplement existing malware detection methods.