TY - JOUR
T1 - Noise prediction for channels with side information at the transmitter
AU - Erez, Uri
AU - Zamir, Ram
PY - 2000
Y1 - 2000
N2 - The computation of channel capacity with side information at the transmitter (but not at the receiver) requires, in general, an extension of the input alphabet to a space of strategies and is often hard. We consider the special case of a discrete memoryless modulo-additive noise channel Y = X + Z_S, where the encoder causally observes the random state S ∈ S that governs the distribution of the noise Z_S. We show that the capacity of this channel is given by C = log|X| - min_{t: S → X} H(Z_S - t(S)). This capacity is realized by a state-independent code, followed by a shift by the "noise prediction" t_min(S) that minimizes the entropy of Z_S - t(S). If the set of conditional noise distributions {p(z|s), s ∈ S} is such that the optimum predictor t_min(·) is independent of the state weights, then C is also the capacity for a noncausal encoder that observes the entire state sequence in advance. Furthermore, for this case we also derive a simple formula for the capacity when the state process has memory.
AB - The computation of channel capacity with side information at the transmitter (but not at the receiver) requires, in general, an extension of the input alphabet to a space of strategies and is often hard. We consider the special case of a discrete memoryless modulo-additive noise channel Y = X + Z_S, where the encoder causally observes the random state S ∈ S that governs the distribution of the noise Z_S. We show that the capacity of this channel is given by C = log|X| - min_{t: S → X} H(Z_S - t(S)). This capacity is realized by a state-independent code, followed by a shift by the "noise prediction" t_min(S) that minimizes the entropy of Z_S - t(S). If the set of conditional noise distributions {p(z|s), s ∈ S} is such that the optimum predictor t_min(·) is independent of the state weights, then C is also the capacity for a noncausal encoder that observes the entire state sequence in advance. Furthermore, for this case we also derive a simple formula for the capacity when the state process has memory.
KW - Optimum transmitter
KW - Prediction with minimum error entropy
KW - Side information
KW - Time-varying channels
UR - http://www.scopus.com/inward/record.url?scp=0005755812&partnerID=8YFLogxK
U2 - 10.1109/18.850704
DO - 10.1109/18.850704
M3 - Article
AN - SCOPUS:0005755812
SN - 0018-9448
VL - 46
SP - 1610
EP - 1617
JO - IEEE Transactions on Information Theory
JF - IEEE Transactions on Information Theory
IS - 4
ER -