Discriminative linear models are a popular tool in machine learning. They can generally be divided into two types: linear classifiers, such as support vector machines (SVMs), which are well studied and provide state-of-the-art results, and probabilistic models, such as logistic regression. One shortcoming of SVMs is that their output (known as the "margin") is not calibrated, making it difficult to incorporate such models as components of larger systems. The probabilistic approach solves this problem. We combine these two approaches by constructing a model which is both linear in the model parameters and probabilistic, thus allowing maximum margin training with calibrated outputs. Our model assumes that classes correspond to linear subspaces (rather than half-spaces), a view which is closely related to concepts in quantum detection theory. The corresponding optimization problems are semidefinite programs that can be solved efficiently. We illustrate the performance of our algorithm on real-world datasets, and show that it outperforms second-order kernel methods.
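To make the subspace view concrete, one plausible reading (an assumption on our part, suggested by the quantum detection connection rather than stated above) is that each class $y$ is associated with a positive semidefinite matrix $M_y$, the matrices jointly form a measurement, and the calibrated output is a quadratic form in the input:
\[
p(y \mid x) \;=\; x^\top M_y\, x, \qquad M_y \succeq 0, \qquad \sum_y M_y = I, \qquad \|x\| = 1 .
\]
Under this reading, the constraints $M_y \succeq 0$ would account for the training problems being semidefinite programs, and the quadratic dependence on $x$ would explain the comparison against second-order kernel methods; the precise model is specified in the body of the paper.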