When a numerical computation fails to fit in the primary memory of a serial or parallel computer, a so-called "out-of-core" algorithm, which moves data between primary and secondary memories, must be used. In this paper, we study out-of-core algorithms for sparse linear relaxation problems in which each iteration of the algorithm updates the state of every vertex in a graph with a linear combination of the states of its neighbors. We give a general method that can save substantially on the I/O traffic for many problems. For example, our technique allows a computer with M words of primary memory to perform T = Ω(M^{1/5}) cycles of a multigrid algorithm for a two-dimensional elliptic solver over an n-point domain using only Θ(nT/M^{1/5}) I/O transfers, as compared with the naive algorithm, which requires Ω(nT) I/O's. Our method depends on the existence of a "blocking" cover of the graph that underlies the linear relaxation. A blocking cover has the property that the subgraphs forming the cover have large diameters once a small number of vertices have been removed. The key idea in our method is to introduce a variable for each removed vertex for each time step of the algorithm. We maintain linear dependences among the removed vertices, thereby allowing each subgraph to be iteratively relaxed without external communication. We give a general theorem relating blocking covers to I/O-efficient relaxation schemes. We also give an automatic method for finding blocking covers for certain classes of graphs, including planar graphs and d-dimensional simplicial graphs with constant aspect ratio (i.e., graphs that arise from dividing d-space into "well-shaped" polyhedra). As a result, we can perform T iterations of linear relaxation on any n-vertex planar graph using only Θ(n + nT√(lg n)/M^{1/4}) I/O's, or on any n-node d-dimensional simplicial graph with constant aspect ratio using only Θ(n + nT√(lg n)/M^{Ω(1/d)}) I/O's.
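To make the computational model concrete, the following is a minimal sketch of one synchronous cycle of the linear relaxation described above, in which every vertex's new state is a linear combination of its neighbors' current states. The graph representation, weight layout, and function name are illustrative assumptions, not the paper's implementation (which is concerned with scheduling these updates out-of-core).

```python
def relax_step(adj, weights, state):
    """One synchronous linear relaxation cycle (illustrative sketch).

    adj[v]     -- list of neighbors of vertex v
    weights[v] -- coefficients applied to v's neighbors, in the same order
    state[v]   -- current scalar state of vertex v
    """
    new_state = {}
    for v, nbrs in adj.items():
        # New state of v is a linear combination of its neighbors' states.
        new_state[v] = sum(w * state[u] for u, w in zip(nbrs, weights[v]))
    return new_state


# Example: Jacobi-style averaging on a 3-vertex path graph 0 -- 1 -- 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
weights = {0: [1.0], 1: [0.5, 0.5], 2: [1.0]}
state = {0: 0.0, 1: 1.0, 2: 0.0}
state = relax_step(adj, weights, state)  # -> {0: 1.0, 1: 0.0, 2: 1.0}
```

Performing T such cycles naively touches every vertex T times; when the state vector exceeds primary memory, that is where the Ω(nT) I/O cost of the naive out-of-core schedule arises.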