In the field of hyper-spectral sensing, sensors capture images at hundreds or even thousands of wavelengths. These hyper-spectral images, which are composed of hyper-pixels, offer rich intensity information that can yield segmentation results superior to those obtained from RGB images. However, straightforward segmentation is impractical due to the large number of wavelength images, noisy wavelengths, and inter-wavelength correlations. Accordingly, to segment the image efficiently, each pixel needs to be represented by a small number of features that capture the structure of the image. In this paper we propose to use the diffusion bases dimensionality reduction algorithm (Schclar and Averbuch, 2015) to derive the features needed for the segmentation. We also propose a simple algorithm for segmenting the dimensionality-reduced image. We demonstrate the proposed framework on hyper-spectral microscopic images and on images acquired by an airborne hyper-spectral camera.
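To make the pipeline concrete, the following is a minimal sketch of a diffusion-bases-style reduction, under the assumption (common to diffusion bases, as opposed to diffusion maps) that the Markov matrix is built over the wavelength bands rather than over the pixels, and that each pixel's spectrum is then projected onto the leading eigenvectors. The function name, kernel-scale heuristic, and parameter choices here are illustrative, not the authors' exact method.

```python
import numpy as np

def diffusion_bases(cube, n_features=4, eps=None):
    """Illustrative sketch: reduce an (H, W, L) hyper-spectral cube to
    (H, W, n_features) by diagonalizing a Markov matrix built over the
    L wavelength bands and projecting each pixel's spectrum onto the
    leading non-trivial eigenvectors. Hypothetical parameterization.
    """
    H, W, L = cube.shape
    X = cube.reshape(-1, L).T                      # (L, H*W): one row per band image
    # Pairwise squared distances between whole band images.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    if eps is None:
        eps = np.median(d2)                        # heuristic kernel scale (assumption)
    K = np.exp(-d2 / eps)                          # Gaussian affinity between bands
    P = K / K.sum(axis=1, keepdims=True)           # row-stochastic Markov matrix
    w, V = np.linalg.eig(P)                        # non-symmetric, so possibly complex
    order = np.argsort(-w.real)
    # Skip the trivial constant eigenvector (eigenvalue 1).
    B = V[:, order[1:n_features + 1]].real         # (L, n_features) diffusion bases
    return (X.T @ B).reshape(H, W, n_features)

# Usage on a small synthetic cube: 8x8 pixels, 20 wavelength bands.
cube = np.random.rand(8, 8, 20)
reduced = diffusion_bases(cube, n_features=3)      # (8, 8, 3) feature image
```

Because the affinity matrix is only L-by-L (bands, not pixels), the eigendecomposition stays cheap even for large images; a simple clustering of the reduced feature vectors can then produce the segmentation.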