Universal linear least squares prediction: Upper and lower bounds

Andrew C. Singer*, Suleyman S. Kozat, Meir Feder

*Corresponding author for this work

Research output: Contribution to journal › Letter › peer-review


Abstract

We consider the problem of sequential linear prediction of real-valued sequences under the square-error loss function. For this problem, a prediction algorithm has been demonstrated [1]-[3] whose accumulated squared prediction error, for every bounded sequence, is asymptotically as small as that of the best fixed linear predictor for that sequence, taken from the class of all linear predictors of a given order p. The redundancy, or excess prediction error above that of the best predictor for that sequence, is upper-bounded by A^2 p ln(n)/n, where n is the data length and the sequence is assumed to be bounded by some A. In this correspondence, we provide an alternative proof of this result by connecting it with universal probability assignment. We then show that this predictor is optimal in a min-max sense, by deriving a corresponding lower bound, such that no sequential predictor can ever do better than a redundancy of A^2 p ln(n)/n.
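The following is a minimal numerical sketch (not the authors' code) of the setup described in the abstract: a sequential least-squares linear predictor of order p is run on a bounded sequence and its accumulated squared error is compared with that of the best fixed order-p linear predictor chosen in hindsight; the resulting per-sample redundancy is then placed next to the A^2 p ln(n)/n rate. The test sequence, the order p, and the ridge term `delta` are illustrative assumptions, not choices taken from the paper.

```python
# Sketch: sequential linear least-squares prediction vs. the best fixed predictor
# chosen in hindsight, with the empirical redundancy compared to A^2 p ln(n)/n.
import numpy as np

rng = np.random.default_rng(0)
n, p, A = 2000, 2, 1.0

# A bounded test sequence (|x_t| <= A): a noisy AR(2) process, clipped to [-A, A].
x = np.zeros(n)
for t in range(2, n):
    x[t] = np.clip(0.6 * x[t-1] - 0.3 * x[t-2] + 0.1 * rng.standard_normal(), -A, A)

def regressor(t):
    """Return the vector of the p most recent samples (x_{t-1}, ..., x_{t-p})."""
    return x[t-1::-1][:p] if t >= p else np.zeros(p)

# Sequential predictor: recursive least squares with a small ridge term delta.
delta = 1.0
R = delta * np.eye(p)          # running (regularized) autocorrelation matrix
r = np.zeros(p)                # running cross-correlation vector
seq_loss = 0.0
for t in range(p, n):
    u = regressor(t)
    w = np.linalg.solve(R, r)  # current least-squares coefficients
    seq_loss += (x[t] - w @ u) ** 2   # predict x_t, then reveal it
    R += np.outer(u, u)
    r += x[t] * u

# Best fixed order-p linear predictor chosen in hindsight for this sequence.
U = np.array([regressor(t) for t in range(p, n)])
y = x[p:]
w_star, *_ = np.linalg.lstsq(U, y, rcond=None)
batch_loss = np.sum((y - U @ w_star) ** 2)

redundancy = (seq_loss - batch_loss) / n
print(f"per-sample redundancy: {redundancy:.3e}")
print(f"A^2 p ln(n)/n bound  : {A**2 * p * np.log(n) / n:.3e}")
```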

Original language: English
Pages (from-to): 2354-2362
Number of pages: 9
Journal: IEEE Transactions on Information Theory
Volume: 48
Issue number: 8
State: Published - Aug 2002

Funding

National Science Foundation: CCR-0092598, CCR 99-79381, ITR 00-85929
Office of Naval Research: N000140110117

Keywords

• Min-max
• Prediction
• Sequential probability assignment
• Universal algorithms
