The Geometric Hashing paradigm for model-based recognition of objects in cluttered scenes is discussed. This paradigm enables a unified approach to rigid object recognition under different viewing-transformation assumptions for both 2-D and 3-D objects obtained from different sensors, e.g. vision, range, or tactile. It is based on an intensive off-line model preprocessing (learning) stage, in which model information is indexed into a hash table using minimal, transformation-invariant features. This enables the on-line recognition algorithm to be particularly efficient. The algorithm is straightforwardly parallelizable. Initial experiments with the technique have led to successful recognition of both 2-D and 3-D objects in cluttered scenes from an arbitrary viewpoint. We also compare Geometric Hashing with the Hough Transform and alignment techniques. Extensions of the basic paradigm which reduce its worst-case recognition complexity are discussed.
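To make the two-stage structure concrete, the following is a minimal sketch of Geometric Hashing for 2-D point sets under similarity transformations (translation, rotation, scale). It is an illustrative toy, not the paper's implementation: the bin size `step`, the use of complex arithmetic for the invariant coordinates, and the single scene-basis choice in `recognize` are all simplifying assumptions. Off-line, every model point is expressed in coordinates defined by every ordered basis pair of model points; these coordinates are invariant to the transformation, so the hash table built from them can be probed at recognition time with coordinates computed from a scene basis, and (model, basis) hypotheses accumulate votes.

```python
from collections import defaultdict

def quantize(c, step=0.1):
    # Map a complex invariant coordinate to a discrete hash bin.
    return (round(c.real / step), round(c.imag / step))

def build_table(models, step=0.1):
    """Off-line stage: index each model under every ordered basis pair.

    models: dict mapping a model name to a list of (x, y) feature points.
    Returns a hash table from quantized invariant coordinates to
    (model_name, basis_pair) entries.
    """
    table = defaultdict(list)
    for name, xy in models.items():
        pts = [complex(x, y) for x, y in xy]
        for i, p0 in enumerate(pts):
            for j, p1 in enumerate(pts):
                if i == j:
                    continue
                b = p1 - p0  # basis vector; defines origin, orientation, scale
                for k, q in enumerate(pts):
                    if k in (i, j):
                        continue
                    # (q - p0) / b is invariant to similarity transforms.
                    table[quantize((q - p0) / b, step)].append((name, (i, j)))
    return table

def recognize(table, scene, step=0.1):
    """On-line stage: vote using one scene basis pair (a real system
    would try several basis choices until a hypothesis scores well)."""
    pts = [complex(x, y) for x, y in scene]
    p0, p1 = pts[0], pts[1]
    b = p1 - p0
    votes = defaultdict(int)
    for q in pts[2:]:
        for entry in table.get(quantize((q - p0) / b, step), []):
            votes[entry] += 1
    # Best-supported (model, basis) hypothesis, if any bin was hit.
    return max(votes, key=votes.get) if votes else None
```

For example, indexing a hypothetical model `"L"` with points `[(0,0), (2,0), (0,1), (1,1)]` and then presenting those same points rotated, scaled, and translated recovers the `("L", (0, 1))` hypothesis, because the invariant coordinates are unchanged by the transformation.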