We present an off-line variant of the mistake-bound model of learning. Just like in the well-studied on-line model, a learner in the off-line model has to learn an unknown concept from a sequence of elements of the instance space on which he makes "guess and test" trials. In both models, the aim of the learner is to make as few mistakes as possible. The difference between the models is that, while in the on-line model only the set of possible elements is known, in the off-line model the sequence of elements (i.e., the identity of the elements as well as the order in which they are to be presented) is known to the learner in advance. We give a combinatorial characterization of the number of mistakes in the off-line model. We apply this characterization to solve several natural questions that arise for the new model. First, we compare the mistake bounds of an off-line learner to those of a learner learning the same concept classes in the on-line scenario. We show that the number of mistakes in on-line learning is at most a log n factor more than in off-line learning, where n is the length of the sequence. In addition, we show that if there is an off-line algorithm that does not make more than a constant number of mistakes for each sequence, then there is an on-line algorithm that also does not make more than a constant number of mistakes. The second issue we address is the effect of the ordering of the elements on the number of mistakes of an off-line learner. It turns out that there are sequences on which an off-line learner can guarantee at most one mistake, yet a permutation of the same sequence forces him to err on many elements. We prove, however, that the gap between the off-line mistake bounds on permutations of the same sequence of n elements cannot be larger than a multiplicative factor of log n, and we present examples that obtain such a gap.
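To make the "guess and test" protocol concrete, here is a minimal sketch (not from the paper) of a single on-line run using the classic halving algorithm: the learner predicts the majority vote of all concepts still consistent with the feedback seen so far, which guarantees at most log2(|C|) mistakes on any sequence. The concept class, domain size, and function names are illustrative assumptions.

```python
def online_trial(concepts, target, sequence):
    """Run the on-line "guess and test" protocol with the halving algorithm.

    concepts: list of dicts mapping each element to its 0/1 label.
    target:   the unknown concept (a dict from the same class).
    sequence: the order in which elements are presented.
    Returns the number of mistakes the learner makes.
    """
    version_space = list(concepts)  # concepts consistent with feedback so far
    mistakes = 0
    for x in sequence:
        votes = sum(c[x] for c in version_space)
        guess = 1 if 2 * votes >= len(version_space) else 0  # majority vote
        truth = target[x]
        if guess != truth:
            mistakes += 1
        # Keep only the concepts that agree with the revealed label.
        version_space = [c for c in version_space if c[x] == truth]
    return mistakes

# Illustrative example: the class of all singletons over an 8-element domain.
domain = range(8)
concepts = [{x: int(x == i) for x in domain} for i in domain]
target = concepts[3]
print(online_trial(concepts, target, list(domain)))  # prints 1
```

An off-line learner in the paper's model sees `sequence` before the trials begin and may tailor its predictions to that order, which is exactly the extra power the abstract's log n comparisons quantify.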