A K-sparse vector x* ∈ ℝᴺ produces measurements via linear dimensionality reduction as u = Φx* + n, where Φ ∈ ℝᴹˣᴺ (M < N) and n ∈ ℝᴹ has independent and identically distributed, zero-mean Gaussian entries with variance σ². After its execution, a recovery algorithm returns a vector x̂ that has K nonzero entries and satisfies ∥u − Φx̂∥ ≤ ε. How far can x̂ be from x*? When the measurement matrix Φ provides a stable embedding of 2K-sparse signals (the so-called restricted isometry property), x̂ and x* must be very close. This paper therefore establishes worst-case bounds on the distance ∥x̂ − x*∥ based on this online meta-information. These bounds improve upon the pre-run algorithmic recovery guarantees and are useful for exploring trade-offs between data error and solution sparsity. We also evaluate the performance of several sparse recovery algorithms in the context of our bounds.
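The mechanism behind such a bound can be illustrated numerically. The following sketch (my own illustration, not the paper's method; all sizes and the perturbed estimate x̂ are hypothetical) uses the fact that the difference h = x̂ − x* is at most 2K-sparse, so ∥h∥ ≤ ∥Φh∥ / σ_min(Φ_S) for the combined support S, and ∥Φh∥ ≤ ∥u − Φx̂∥ + ∥n∥:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 100, 40, 5  # hypothetical problem sizes

# Random Gaussian measurement matrix (satisfies RIP with high probability).
Phi = rng.normal(size=(M, N)) / np.sqrt(M)

# K-sparse ground truth x* and noisy measurements u = Phi x* + n.
x_star = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x_star[support] = rng.normal(size=K)
n = 0.01 * rng.normal(size=M)
u = Phi @ x_star + n

# A stand-in K-sparse estimate x_hat (any algorithm's output would do;
# here x* is simply perturbed on its support for illustration).
x_hat = x_star.copy()
x_hat[support] += 0.05 * rng.normal(size=K)

# Online meta-information: the residual the algorithm certifies.
eps = np.linalg.norm(u - Phi @ x_hat)

# h = x_hat - x* is supported on S = supp(x*) ∪ supp(x_hat), |S| <= 2K, so
# ||h|| <= ||Phi h|| / sigma_min(Phi_S) <= (eps + ||n||) / sigma_min(Phi_S).
S = np.union1d(np.nonzero(x_star)[0], np.nonzero(x_hat)[0])
s_min = np.linalg.svd(Phi[:, S], compute_uv=False).min()
bound = (eps + np.linalg.norm(n)) / s_min

err = np.linalg.norm(x_hat - x_star)
print(err <= bound)  # the worst-case bound holds for this instance
```

In practice ∥n∥ and σ_min(Φ_S) are not observed directly; the point of RIP-style worst-case analysis is to replace them with high-probability constants that hold uniformly over all 2K-sparse supports.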