## Abstract

The computation of channel capacity with side information at the transmitter (but not at the receiver) requires, in general, an extension of the input alphabet to a space of strategies, and is often hard. We consider the special case of a discrete memoryless modulo-additive noise channel Y = X + Z_S, where the encoder causally observes the random state S ∈ S that governs the distribution of the noise Z_S. We show that the capacity of this channel is given by C = log |X| − min_{t: S → X} H(Z_S − t(S)). This capacity is achieved by a state-independent code, followed by a shift by the "noise prediction" t_min(S) that minimizes the entropy of Z_S − t(S). If the set of conditional noise distributions {p(z|s), s ∈ S} is such that the optimum predictor t_min(·) is independent of the state weights, then C is also the capacity for a noncausal encoder that observes the entire state sequence in advance. Furthermore, for this case we also derive a simple formula for the capacity when the state process has memory.
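The capacity formula above can be evaluated numerically by brute force: enumerate every predictor t: S → X, compute the entropy of the shifted noise Z_S − t(S) (mod |X|), and subtract the minimum from log |X|. The sketch below is illustrative only (the function names, the binary example, and the exhaustive search are not from the paper); it assumes a common alphabet X = Z_q for input and noise.

```python
import itertools
import math

def residual_entropy(t, state_probs, noise_dists, q):
    # Distribution of W = Z_S - t(S) mod q, averaged over the state S:
    # r(w) = sum_s P(s) * p((w + t(s)) mod q | s)
    r = [0.0] * q
    for s, ps in enumerate(state_probs):
        for w in range(q):
            r[w] += ps * noise_dists[s][(w + t[s]) % q]
    return -sum(p * math.log2(p) for p in r if p > 0)

def capacity_causal(state_probs, noise_dists, q):
    # C = log|X| - min over all predictors t: S -> X of H(Z_S - t(S)),
    # found here by exhaustive search (q^|S| candidate predictors).
    best_h, best_t = min(
        (residual_entropy(t, state_probs, noise_dists, q), t)
        for t in itertools.product(range(q), repeat=len(state_probs))
    )
    return math.log2(q) - best_h, best_t

# Hypothetical example: binary channel, two equiprobable states whose
# noise distributions are mirror images of each other.
cap, t_min = capacity_ical = capacity_causal(
    [0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], 2
)
```

With these mirror-image noise distributions, the best predictor flips the shift between the two states, so the residual noise behaves like a single Bernoulli(0.1) source and C = 1 − h(0.1) ≈ 0.531 bits; without prediction the averaged noise is uniform and the capacity would be zero.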

| Original language | English |
|---|---|
| Pages (from-to) | 1610-1617 |
| Number of pages | 8 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 46 |
| Issue number | 4 |
| DOIs | |
| State | Published - 2000 |

## Keywords

- Optimum transmitter
- Prediction with minimum error entropy
- Side information
- Time-varying channels