The problem of digital image recognition
can be regarded both as a problem of
testing multiple hypotheses and as a position
measurement problem. In the first
case, a criterion of algorithm efficiency can
be the probability of a correct decision,
while in the second it is the
measurement accuracy [1].
Automatic image recognition can result
in so-called anomalous errors when the
maximum of the algorithm response (which
we shall call the decision function, or DF)
appears at a location entirely different
from the true position. Usually, the
response field has a peak at approximately
the correct place and a number of other,
smaller peaks located at random. Under
strong image distortions, one of these false
peaks can become larger than the main
peak, leading to an anomalous error. On
the other hand, the maximum of the main
peak may be somewhat shifted from the
correct match position. This deviation,
usually small, will be referred to as a
normal error.
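The distinction between the two error types can be illustrated with a minimal numerical sketch. Here a one-dimensional "decision function" is simulated as a main peak at the true position plus smaller random peaks; the names, peak shapes, and the anomaly threshold are hypothetical choices for illustration only, not part of the original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 1-D decision function (DF): one main peak at the true
# position plus smaller random peaks elsewhere.
true_pos = 50
x = np.arange(200)
main_peak = np.exp(-0.5 * ((x - true_pos) / 3.0) ** 2)
clutter = 0.4 * rng.random(200)       # random false peaks
df = main_peak + clutter              # the decision function

est_pos = int(np.argmax(df))          # position of the DF maximum
deviation = abs(est_pos - true_pos)

# A small shift of the DF maximum within the main peak counts as a
# normal error; a maximum far from the true position (a false peak
# that outgrew the main one) counts as an anomalous error.
ANOMALY_RADIUS = 10                   # hypothetical threshold
error_type = "anomalous" if deviation > ANOMALY_RADIUS else "normal"
```

With mild clutter (amplitude 0.4 against a unit main peak) the maximum stays near the true position, giving only a normal error; raising the clutter amplitude makes false peaks increasingly likely to exceed the main peak and produce anomalous errors.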