ODIN is a popular Out-of-Distribution (OOD) detection algorithm. It is based on the observation that applying temperature scaling and adding small perturbations to the input separates the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection. Rather than merely exploiting this observation at test time, we derive a new loss, termed the Gradient Quotient (GQ) loss, that explicitly encourages this behaviour in the network. GQ can be used either to train a classification network from scratch or to fine-tune an existing one. We show theoretically why GQ encourages the separation observed by ODIN, and evaluate GQ on a range of network architectures and datasets. Experiments show that we achieve state-of-the-art results on a large number of standard benchmarks.
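
For context, the test-time mechanism the abstract refers to is the standard ODIN scoring procedure (temperature scaling plus a small input perturbation). The following is a minimal PyTorch sketch of that procedure, not of the GQ loss itself; `model`, and the values of the temperature `T` and perturbation magnitude `eps`, are illustrative assumptions (in practice they are tuned per setting).

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, eps=0.0014):
    """ODIN-style OOD score: temperature-scaled max softmax, computed
    after a small input perturbation that raises top-class confidence.
    Higher scores indicate the input is more likely in-distribution."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # Temperature-scaled log-probability of the predicted class.
    log_probs = F.log_softmax(logits / T, dim=1)
    top_log_prob = log_probs.max(dim=1).values
    grad = torch.autograd.grad(top_log_prob.sum(), x)[0]
    # Perturb the input in the direction that increases that probability;
    # in-distribution inputs tend to gain more confidence than OOD ones.
    x_pert = x.detach() + eps * grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / T, dim=1)
    return probs.max(dim=1).values
```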