An algorithm for detecting orientation in texture is developed and compared to the performance of humans detecting orientation in the same textures. The algorithm is based on the steerable filters of Freeman and Adelson (1991), orientation-selective filters derived from derivatives of Gaussians. The filters are applied over multiple scales and their outputs are nonlinearly contrast-normalized. The human data were collected from forty subjects who were asked to identify ``the minimum number of dominant orientations'' they perceived, and the ``strength'' with which they perceived each orientation. Test data consisted of 111 grey-level images of natural textures taken from the Brodatz album, a standard collection used in computer vision and image processing. Results show that the computer and humans chose at least one of the same dominant orientations on 95 of the natural textures. On 74 of these textures, humans and the computer also agreed completely on the locations of all the dominant orientations chosen. Disagreements are analyzed and possible causes are discussed. Some apparent limitations in the current filter shapes and sizes are illustrated, as well as some (surprisingly small) effects believed to be caused by semantic recognition and gestalt grouping.
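To make the pipeline described above concrete, the following is a minimal sketch of dominant-orientation estimation with steerable first-derivative-of-Gaussian filters, applied at several scales with a simple divisive contrast normalization. The scale values, the number of sampled angles, and the particular normalization are illustrative assumptions, not the exact parameters of the system described here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dominant_orientation(img, sigmas=(1.0, 2.0, 4.0), n_angles=36, eps=1e-6):
    """Estimate a texture's dominant orientation with steerable
    derivative-of-Gaussian filters (after Freeman & Adelson, 1991).
    Illustrative simplification: scales, angle sampling, and the
    normalization scheme are assumptions, not the paper's exact pipeline.
    Returns an angle in [0, pi)."""
    img = np.asarray(img, dtype=float)
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    pooled = np.zeros(n_angles)
    for sigma in sigmas:
        # Basis filters: x- and y-derivatives of a Gaussian at this scale.
        gx = gaussian_filter(img, sigma, order=(0, 1))  # d/dx (axis 1)
        gy = gaussian_filter(img, sigma, order=(1, 0))  # d/dy (axis 0)
        # Steering property: the response at angle t is a linear
        # combination of the two basis responses.
        energies = np.array(
            [np.mean((np.cos(t) * gx + np.sin(t) * gy) ** 2) for t in angles]
        )
        # Divisive contrast normalization across orientations (assumed form),
        # so each scale contributes comparably to the pooled histogram.
        pooled += energies / (energies.sum() + eps)
    return angles[np.argmax(pooled)]
```

On a synthetic grating oriented at a known angle, the argmax of the pooled oriented energy recovers that angle to within the angular sampling step; for natural Brodatz textures one would inspect the full pooled histogram for multiple peaks rather than a single argmax.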