The number of images and videos available for search on the Internet is on the order of a trillion, making brute-force search techniques prohibitively inefficient at such scales. As society's desire and ability to search these vast collections of data continue to grow, improving upon traditional face recognition search techniques becomes an important problem to address. Because face recognition (and other biometric) algorithms are commercially available only as black-box systems, any indexing scheme developed to perform efficient search must operate without access to the underlying feature vectors used to measure facial similarity. To address this restriction, we propose a structured search that partitions the facial feature space into clusters derived from sets of prototype subjects we refer to as 'synecdoches'. After an offline training step, our proposed method assigns each gallery image to a cluster in the face space based on its similarity to the set of synecdoche clusters. At query time, a probe image is compared against gallery images one cluster at a time, in order of its similarity to each synecdoche cluster. Our results show a minimal drop in accuracy when only half of the clusters are considered, thereby reducing the search space by half. Additional experiments demonstrate the viability of our proposed approach for improving search efficiency under the common restriction of a black-box matcher.
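The two-stage scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the matcher, the one-dimensional "images", and the synecdoche values are all placeholders standing in for a real black-box similarity score and real prototype subjects.

```python
import random

random.seed(0)  # deterministic for reproducibility of this sketch

# Hypothetical black-box matcher: exposes only a similarity score between
# two items; higher means more similar. Feature vectors are never visible.
def match_score(a, b):
    return -abs(a - b)

# Illustrative synecdoche prototypes, one per cluster.
synecdoches = [0.0, 10.0, 20.0, 30.0]

# Offline step: assign each gallery image to its most similar synecdoche
# cluster, using only black-box match scores.
gallery = [random.uniform(0, 30) for _ in range(200)]
clusters = {i: [] for i in range(len(synecdoches))}
for img in gallery:
    best = max(range(len(synecdoches)),
               key=lambda i: match_score(img, synecdoches[i]))
    clusters[best].append(img)

# Query step: visit clusters in decreasing order of query-to-synecdoche
# similarity, searching only a fraction of them to shrink the search space.
def search(query, top_fraction=0.5):
    order = sorted(range(len(synecdoches)),
                   key=lambda i: match_score(query, synecdoches[i]),
                   reverse=True)
    n_visit = max(1, int(len(order) * top_fraction))
    candidates = [img for i in order[:n_visit] for img in clusters[i]]
    return max(candidates, key=lambda img: match_score(query, img))
```

With `top_fraction=0.5`, only half of the clusters (and roughly half of the gallery) are scored per query, mirroring the halved search space reported in the abstract.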