Abstract
Solving inverse problems with iterative algorithms is popular, especially for large-scale data. Due to time constraints, the number of iterations one can run is usually limited, which may affect the achievable accuracy. Given an error one is willing to tolerate, an important question is whether it is possible to modify the original iterations to obtain faster convergence to a minimizer achieving the allowed error, without considerably increasing the computational cost of each iteration. Relying on recent recovery techniques developed for settings in which the desired signal belongs to some low-dimensional set, we show that using a coarse estimate of this set may lead to faster convergence at the cost of an additional reconstruction error related to the accuracy of the set approximation. Our theory ties into recent advances in sparse recovery, compressed sensing, and deep learning. In particular, it offers a possible explanation for the successful approximation of the ℓ1-minimization solution by neural networks whose layers represent iterations, as practiced in the learned iterative shrinkage-thresholding algorithm (LISTA).
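For context, a minimal sketch of the baseline iteration the abstract refers to: the iterative shrinkage-thresholding algorithm (ISTA) for ℓ1-regularized least squares, whose unrolled layers give LISTA. The measurement matrix, regularization weight `lam`, iteration count, and step-size choice below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iters=100):
    # ISTA for min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1.
    # Step size 1/L, with L the Lipschitz constant of the gradient
    # (squared largest singular value of A).
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        # Gradient step on the data-fidelity term, then shrinkage.
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

# Toy example (hypothetical sizes): recover a sparse vector from
# a few random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
x_hat = ista(A, y, lam=0.05, n_iters=500)
```

In this reading, limiting `n_iters` corresponds to the time-constrained regime the abstract discusses, and replacing the generic soft-thresholding step with one adapted to a (coarse) estimate of the signal's low-dimensional set is what may buy faster convergence at the price of an extra approximation-dependent error.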
Original language | English |
---|---|
Pages (from-to) | 1676-1690 |
Number of pages | 15 |
Journal | IEEE Transactions on Signal Processing |
Volume | 66 |
Issue number | 7 |
DOIs | |
State | Published - 1 Apr 2018 |
Keywords
- Approximate computing
- Approximation methods
- Compressed sensing
- Convergence of numerical methods
- Gradient methods
- Iterative methods
- Neural networks
- Optimization
- Training