In this paper we consider the problem of on-line learning under the logarithmic loss, where the learner provides a probability assignment for the next label given the past and current data samples and the past labels. We study the problem in both the individual and the stochastic settings. Our first result is a class of new universal on-line probability assignment schemes based on the mixture approach. In classical learning it is well known that there are model classes that can be learned in batch but cannot be learned sequentially over all data-sample sequences. We show that for such model classes the proposed mixture schemes achieve vanishing regret in the individual setting when the adversary is somewhat constrained. In the stochastic setting we show that any on-line solution for the log-loss can be used to obtain a solution for a wide variety of loss functions.
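To fix intuition, the mixture approach in its simplest form can be sketched as follows. This is an illustrative example, not the schemes proposed in the paper: the model class here is a hypothetical pair of constant binary predictors, and the label sequence is made up. The mixture assigns the next-label probability as a weighted average of the class members' predictions, with each member's weight multiplied by the likelihood it gave the observed label; the classical telescoping argument then bounds the mixture's regret against the best member by the log of the class size.

```python
import math

def mixture_predict(expert_probs, weights):
    # Mixture probability for the next label: weighted average of the experts'
    # predicted probabilities, normalized by the total weight.
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, expert_probs)) / total

def update_weights(weights, expert_probs, label):
    # Multiply each expert's weight by the likelihood it assigned to the
    # label that was actually observed (binary labels here).
    return [w * (p if label == 1 else 1.0 - p)
            for w, p in zip(weights, expert_probs)]

# Hypothetical model class: two constant experts predicting P(label=1) = 0.25, 0.75.
experts = [0.25, 0.75]
weights = [1.0, 1.0]
labels = [1, 1, 0, 1, 1]   # made-up binary label sequence

mixture_loss = 0.0
for y in labels:
    preds = experts  # constant experts; in general predictions depend on the past
    p = mixture_predict(preds, weights)
    mixture_loss += -math.log(p if y == 1 else 1.0 - p)
    weights = update_weights(weights, preds, y)

# Cumulative log-loss of the best single expert in hindsight.
best_loss = min(
    sum(-math.log(e if y == 1 else 1.0 - e) for y in labels)
    for e in experts
)

# The mixture's regret is at most log(number of experts).
assert mixture_loss <= best_loss + math.log(len(experts))
```

The bound follows because the product of the mixture's per-step probabilities telescopes to the average of the experts' sequence probabilities, which is at least the best expert's probability divided by the class size.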