Visual and non-visual data are often related through complex, indirect links, making the prediction of one from the other difficult. Examples include the partially understood connections between the firing of V1 neurons and visual stimuli, the coupling between recorded speech and video of the corresponding lip movements, and attempts to infer criminal intent from surveillance video. In this study, we explore the exploitation of the visual/non-visual relation between genetic sequences and visual appearance. This exploitation is currently considered infeasible because of the many hidden variables and unknown factors involved, the considerable variability and noise present in images, and the high dimensionality of the data. Despite these difficulties, we show convincing evidence that applying correlations between genotype and visual phenotype for identification is feasible with current technologies. To this end, we employ sensitive forced-matching tests that can accurately detect correlations between data sets. These tests are used to compare the performance of several existing algorithms, as well as novel ones that we have designed for the task.
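The abstract does not specify the forced-matching protocol, but its general shape is well known: a scoring function must pick the true counterpart of a query item from among K candidates, and accuracy significantly above chance (1/K) indicates a detectable correlation between the two data sets. The sketch below is a minimal illustration of such a K-way test; the function names and the `score` interface are hypothetical, not the paper's actual method.

```python
import random

def forced_matching_accuracy(pairs, score, k=5, trials=1000, rng=None):
    """Estimate K-way forced-matching accuracy.

    pairs: list of ground-truth (genotype, phenotype) pairs.
    score: hypothetical function score(genotype, phenotype) -> float,
           where a higher value means a better predicted match.

    Each trial draws one true pair plus k-1 decoy phenotypes and counts
    how often the true phenotype receives the highest score.  Accuracy
    well above 1/k indicates a genotype/phenotype correlation.
    """
    rng = rng or random.Random(0)
    n = len(pairs)
    hits = 0
    for _ in range(trials):
        i_true = rng.randrange(n)
        g, true_p = pairs[i_true]
        # Decoys are phenotypes drawn from the other pairs.
        decoys = rng.sample([pairs[j][1] for j in range(n) if j != i_true], k - 1)
        candidates = [true_p] + decoys
        best = max(candidates, key=lambda p: score(g, p))
        hits += best == true_p
    return hits / trials
```

For example, on toy data where the phenotype is a perfect copy of the genotype, a distance-based score identifies the true match in every trial, while an uninformative score would hover around chance level (1/k).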