Hyper-spectral cameras capture images at hundreds or even thousands of wavelengths. These hyper-spectral images offer orders of magnitude more intensity information than RGB images. This information can be utilized to obtain segmentation results that are superior to those obtained from RGB images. However, many of the wavelengths are correlated and many others are noisy. Consequently, the hyper-spectral data must be preprocessed prior to the application of any segmentation algorithm. Such preprocessing must remove the noise and inter-wavelength correlations and, due to complexity constraints, represent each pixel by a small number of features that capture the structure of the image. The contribution of this paper is three-fold. First, we utilize the diffusion bases dimensionality reduction algorithm (Schclar and Averbuch, Diffusion bases dimensionality reduction, pp. 151–156) to derive the features that are needed for the segmentation. Second, we describe a faster version of the diffusion bases algorithm which uses symmetric matrices. Third, we propose a simple algorithm for the segmentation of the dimensionality-reduced image. Successful application of the algorithms to hyper-spectral microscopic images and remote-sensed hyper-spectral images demonstrates the effectiveness of the proposed algorithms.