One of the major problems in medical imaging is the shortage of pathology data. In most cases, acquiring labeled data is expensive and usually requires manual annotation by a skilled medical expert. As a result, most medical imaging tasks suffer from severe class imbalance with a bias toward non-pathological classes, which reduces classifier performance. The recent growth in the use of generative adversarial networks and their ability to generate synthetic data shows great promise for alleviating the class imbalance problem. In this work we introduce GC-CycleGAN, a general method for CycleGAN factorization that uses Grad-CAMs as auxiliary data in the CycleGAN model to generate synthetic images. Our novel approach exploits Grad-CAM's ability to describe class activation, using it to improve network classification rather than merely as a visualization tool. The spread of the COVID-19 pandemic is affecting the lives of millions worldwide, and, if proven effective, automated COVID-19 detection from chest X-ray images can be a supportive step in the fight against the disease. However, the task of COVID-19 classification suffers greatly from class imbalance. Using the GC-CycleGAN method, we demonstrate the ability to balance a heavily imbalanced dataset for the task of COVID-19 vs. non-COVID-19 pneumonia X-ray classification, showing improved results over two baselines and the COVID-Net model.
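For readers unfamiliar with Grad-CAM, the map it produces is a gradient-weighted sum of convolutional feature maps. The sketch below illustrates only that standard computation (channel weights obtained by global-average-pooling the class-score gradients, followed by a ReLU); it is a minimal NumPy illustration, not the GC-CycleGAN implementation, and the function name and array shapes are assumptions for the example.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM sketch (illustrative, not the paper's code).

    feature_maps: array of shape (K, H, W) -- activations A^k of a conv layer
    gradients:    array of shape (K, H, W) -- dY_c / dA^k for class score Y_c
    returns:      array of shape (H, W)    -- ReLU(sum_k alpha_k * A^k)
    """
    # alpha_k: global-average-pool each gradient map over spatial dimensions
    alphas = gradients.mean(axis=(1, 2))               # shape (K,)
    # weighted combination of feature maps over the channel axis
    cam = np.tensordot(alphas, feature_maps, axes=1)   # shape (H, W)
    # ReLU keeps only features with a positive influence on the class score
    return np.maximum(cam, 0.0)

# Tiny worked example with K=2 channels on a 2x2 spatial grid
A = np.array([[[1., -1.], [2., 0.]],
              [[0.,  1.], [1., 1.]]])
G = np.array([[[ 1.,  1.], [ 1.,  1.]],
              [[-1., -1.], [-1., -1.]]])
heatmap = grad_cam(A, G)   # alphas = [1, -1], so cam = A[0] - A[1], then ReLU
```

In GC-CycleGAN such maps serve as auxiliary data for the generator rather than as a visualization, but the underlying map computation is this standard one.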