Researchers at Nvidia have reported a new AI training method that sharply reduces the amount of data needed to train a GAN. A generative adversarial network (GAN) is a machine learning framework that learns to capture and reproduce the variations within a dataset. In general, the more data provided, the better the resulting model; with too little data, training tends to overfit. The new approach, however, can cut the data a typical GAN requires by roughly a factor of 20, which makes it useful in situations where data and time are in short supply.
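To make the adversarial setup concrete, here is a minimal sketch of the standard GAN objective: the discriminator tries to score real samples high and generated (fake) samples low, while the generator is trained to make the discriminator score its fakes high. The function names and raw-probability formulation are illustrative assumptions, not Nvidia's exact code.

```python
import math

def d_loss(d_real, d_fake):
    """Discriminator loss on raw probabilities in (0, 1):
    low when real samples score near 1 and fakes score near 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    """Non-saturating generator loss: pushes the discriminator's
    score on a generated sample toward 1."""
    return -math.log(d_fake)
```

For example, a discriminator that confidently separates real from fake (`d_loss(0.9, 0.1)`) incurs a much smaller loss than one that is merely guessing (`d_loss(0.5, 0.5)`); training alternates gradient steps on these two losses.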
The new method lets a model learn effectively even when it is given far less training data
Training a traditional high-quality GAN usually requires on the order of 100,000 images; with less data, training often runs into overfitting. The new research enables a GAN to reach the same level of accuracy with a significantly smaller dataset, using a technique called Adaptive Discriminator Augmentation (ADA). ADA can be especially valuable in applications where labeled training material is scarce. For example, training an algorithm to detect a rare neurological brain disorder is difficult precisely because examples are hard to come by, and an ADA-trained GAN can help tackle that shortage.
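The core idea behind ADA is to apply image augmentations to the discriminator's inputs with a probability p that is tuned automatically: when the discriminator starts overfitting (scoring real images too confidently), p is nudged up; otherwise it is nudged down. The sketch below shows one plausible form of that feedback controller; the function name, target value, and step size are illustrative assumptions, not Nvidia's exact implementation.

```python
def update_augment_p(p, d_real_logits, target=0.6, step=0.01):
    """Adjust the augmentation probability p from an overfitting signal.

    r_t = mean(sign(D(real))) drifts toward 1 as the discriminator grows
    overconfident on real images; p is raised when r_t exceeds the target
    and lowered otherwise, then clamped to [0, 1].
    """
    r_t = sum(1.0 if v > 0 else -1.0 for v in d_real_logits) / len(d_real_logits)
    p += step if r_t > target else -step
    return min(max(p, 0.0), 1.0)
```

Because the controller reacts to the discriminator's own behavior rather than to a fixed schedule, the same training code adapts itself whether the dataset has a few thousand images or a hundred thousand.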