Primary segmentation of visual scenes is based on spatiotemporal edges that are presumably detected by neurons throughout the visual system. In contrast, how the auditory system decomposes complex auditory scenes is substantially less clear. Diverse physiological and psychophysical evidence shows that the auditory system is sensitive to amplitude transients, which can be considered a partial analogue of visual spatiotemporal edges. However, there is currently no theoretical framework in which these phenomena can be related to the perceptual task of auditory source segregation. We propose a neural model for an auditory temporal edge detector whose underlying principles are similar to those of classical visual edge detector models. Our main result is that this model reproduces published physiological responses to amplitude transients collected at multiple levels of the auditory pathway using a variety of experimental procedures. Moreover, the model successfully predicts physiological responses to a new set of amplitude transients recorded in cat primary auditory cortex and medial geniculate body. The model also reproduces several published psychoacoustical responses to amplitude transients, as well as the psychoacoustical data for amplitude edge detection reported here for the first time. These results support the hypothesis that the responses of auditory neurons to amplitude transients are the correlate of psychoacoustical edge detection.
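To illustrate the general idea of a temporal edge detector, the sketch below applies the classic difference-of-smoothed-signals scheme of visual edge detection to an amplitude envelope in time: a fast and a slow smoothed copy of the envelope are subtracted and half-wave rectified, so abrupt amplitude onsets produce a transient response. All function names, window widths, and the filter shape are illustrative assumptions for exposition, not the model described in this paper.

```python
# Hypothetical sketch of a temporal edge detector for amplitude envelopes,
# loosely analogous to difference-of-Gaussians visual edge detectors.
# Parameters and structure are illustrative assumptions only.

def moving_average(x, width):
    """Causal moving average over the last `width` samples."""
    out = []
    acc = 0.0
    for i, v in enumerate(x):
        acc += v
        if i >= width:
            acc -= x[i - width]
        out.append(acc / min(i + 1, width))
    return out

def temporal_edge_response(envelope, fast=2, slow=10):
    """Half-wave rectified difference of a fast and a slow smoothed
    envelope: sustained amplitude cancels out, while abrupt onsets
    (temporal edges) yield a transient positive response."""
    f = moving_average(envelope, fast)
    s = moving_average(envelope, slow)
    return [max(fi - si, 0.0) for fi, si in zip(f, s)]

# A step in amplitude (silence -> tone) drives a brief edge response
# that decays once the slow average catches up with the fast one.
envelope = [0.0] * 20 + [1.0] * 20
edge = temporal_edge_response(envelope)
peak_index = max(range(len(edge)), key=edge.__getitem__)
```

Because the two smoothing windows differ only in time constant, the detector responds to amplitude transients but not to steady-state level, which is the temporal counterpart of a spatial edge detector's indifference to uniform luminance.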