TY - GEN

T1 - Permutation Invariant Individual Batch Learning

AU - Fogel, Yaniv

AU - Feder, Meir

N1 - Publisher Copyright:
© 2023 IEEE.

PY - 2023

Y1 - 2023

N2 - This paper considers the individual batch learning problem. Batch learning (in contrast to online learning) refers to the case where there is a "batch" of training data and the goal is to predict a test outcome. Individual learning refers to the case where the data (training and test) is arbitrary and individual. This individual batch setting poses a fundamental issue in defining a plausible criterion for a universal learner, since in each experiment there is a single test sample. We propose a permutation-invariant criterion that, intuitively, lets the individual training sequence manifest its empirical structure for predicting the test sample. This criterion is essentially a min-max regret, where the regret is based on a leave-one-out approach, minimized over the universal learner and maximized over the outcome sequences (thus agnostic). To show its plausibility, we analyze the criterion and its resulting learner for two cases: binary Bernoulli and a 1-D deterministic barrier. For both cases the regret behaves as O(c/N), where N is the size of the training set, with c = 1 for the Bernoulli case and c = log 4 for the 1-D barrier. Interestingly, in the Bernoulli case the regret in the stochastic setting behaves as O(1/(2N)), while here, in the individual setting, it has a larger constant.

AB - This paper considers the individual batch learning problem. Batch learning (in contrast to online learning) refers to the case where there is a "batch" of training data and the goal is to predict a test outcome. Individual learning refers to the case where the data (training and test) is arbitrary and individual. This individual batch setting poses a fundamental issue in defining a plausible criterion for a universal learner, since in each experiment there is a single test sample. We propose a permutation-invariant criterion that, intuitively, lets the individual training sequence manifest its empirical structure for predicting the test sample. This criterion is essentially a min-max regret, where the regret is based on a leave-one-out approach, minimized over the universal learner and maximized over the outcome sequences (thus agnostic). To show its plausibility, we analyze the criterion and its resulting learner for two cases: binary Bernoulli and a 1-D deterministic barrier. For both cases the regret behaves as O(c/N), where N is the size of the training set, with c = 1 for the Bernoulli case and c = log 4 for the 1-D barrier. Interestingly, in the Bernoulli case the regret in the stochastic setting behaves as O(1/(2N)), while here, in the individual setting, it has a larger constant.

UR - http://www.scopus.com/inward/record.url?scp=85165095373&partnerID=8YFLogxK

U2 - 10.1109/ITW55543.2023.10161673

DO - 10.1109/ITW55543.2023.10161673

M3 - Conference contribution

AN - SCOPUS:85165095373

T3 - 2023 IEEE Information Theory Workshop, ITW 2023

SP - 142

EP - 146

BT - 2023 IEEE Information Theory Workshop, ITW 2023

PB - Institute of Electrical and Electronics Engineers Inc.

T2 - 2023 IEEE Information Theory Workshop, ITW 2023

Y2 - 23 April 2023 through 28 April 2023

ER -