Software Patent Abstract
This invention relates to a passive and interactive real-time image recognition software method, particularly to a real-time image recognition software method that is unaffected by ambient light sources and noise, and which includes both passive and interactive recognition methods.
Software Patent Claims
1. A passive real-time image recognition method, described as follows:

Step 1: Capture an image projected by an image projection apparatus onto the image areas as a reference image (5×5 grey-level values) by using a video camera;

Step 2: Continuously capture real-time images (5×5 grey-level values) projected by the image projection apparatus onto the image areas by using the video camera, and check whether any foreign object touches the reactive area. The difference between the reference image from step 1 and the real-time image from step 2 can be denoted as follows:

$$DIFF(x,y) = REF(x,y) - NEW(x,y) \quad (1)$$
Step 3: Subtract the grey-level values of the real-time image from step 2 from the grey-level values of the reference image from step 1 to obtain the grey-level distribution of the residual image; a non-empty residual indicates that a foreign object is present.

Step 4: The difference image produced in step 3 usually contains noise, which can be treated as in formula (2):

$$BIN(x,y) = \begin{cases} 255 & DIFF(x,y) \geq T^* \\ 0 & DIFF(x,y) < T^* \end{cases} \quad (2)$$

This binarization eliminates the noise, where $T^*$ represents a threshold; in an 8-bit greyscale image the threshold ranges from 0 to 255. The optimal threshold can be decided by a statistical method: it lies in the trough of the grey-level histogram, and once $T^*$ is decided the image can be segmented into two classes. The requirement for the optimal threshold $T^*$ is that the weighted sum of the variance in $C_1$ and the variance in $C_2$ attains its minimum. It is assumed that the size of the image is $N = 5 \times 5$ and that the number of grey levels of an 8-bit grey-level image is $I = 256$. Then the probability of grey level $i$ can be denoted as:

$$P(i) = \frac{n_i}{N} \quad (3)$$

wherein $n_i$ indicates the number of occurrences of grey level $i$, with $0 \leq i \leq I-1$. According to the probability axioms, the following holds:

$$\sum_{i=0}^{I-1} P(i) = 1 \quad (4)$$

Suppose the fraction of pixels in $C_1$ is:

$$W_1 = \Pr(C_1) = \sum_{i=0}^{T^*} P(i) \quad (5)$$

while the fraction of pixels in $C_2$ is:

$$W_2 = \Pr(C_2) = \sum_{i=T^*+1}^{I-1} P(i) \quad (6)$$

Here $W_1 + W_2 = 1$ is satisfied. The expected value of $C_1$ can be calculated as:

$$U_1 = \sum_{i=0}^{T^*} \frac{P(i)}{W_1} \times i \quad (7)$$

The expected value of $C_2$ is:

$$U_2 = \sum_{i=T^*+1}^{I-1} \frac{P(i)}{W_2} \times i \quad (8)$$

The variances of $C_1$ and $C_2$ can be obtained by using formulas (7) and (8):

$$\sigma_1^2 = \sum_{i=0}^{T^*} (i - U_1)^2 \frac{P(i)}{W_1} \quad (9)$$

$$\sigma_2^2 = \sum_{i=T^*+1}^{I-1} (i - U_2)^2 \frac{P(i)}{W_2} \quad (10)$$

The weighted sum of the variances in $C_1$ and $C_2$ is:

$$\sigma_w^2 = W_1 \sigma_1^2 + W_2 \sigma_2^2 \quad (11)$$

Substituting the values 0 through 255 into formula (11), the value for which formula (11) attains its minimum is the optimal threshold $T^*$.

Step 5: Although the residual noise has been removed through binarization in step 4, the moving object becomes fragmented. This can be repaired by using four-connected masks and the inflation (dilation) and erosion algorithms. The inflation algorithm is described as follows: when $M_b(i,j) = 255$, set the mask of the 4-neighbour points as

$$M_b(i,j-1) = M_b(i,j+1) = M_b(i-1,j) = M_b(i+1,j) = 255 \quad (12)$$

The erosion algorithm is described as follows: when $M_b(i,j) = 0$, set the mask of the 4-neighbour points as

$$M_b(i,j-1) = M_b(i,j+1) = M_b(i-1,j) = M_b(i+1,j) = 0 \quad (13)$$

Convolving the above-mentioned masks with the binarized image eliminates the fragmentation.

Step 6: Next, a lateral mask can be used to obtain the contours of the moving object; here the Sobel mask (an image contour operation mask) is used to obtain the object contours. Convolving the Sobel mask with the real-time image can be denoted by formulas (14) and (15):

$$G_x(x,y) = (NEW(x-1,y+1) + 2 \times NEW(x,y+1) + NEW(x+1,y+1)) - (NEW(x-1,y-1) + 2 \times NEW(x,y-1) + NEW(x+1,y-1)) \quad (14)$$

$$G_y(x,y) = (NEW(x+1,y-1) + 2 \times NEW(x+1,y) + NEW(x+1,y+1)) - (NEW(x-1,y-1) + 2 \times NEW(x-1,y) + NEW(x-1,y+1)) \quad (15)$$

The rim of the acquired image can be obtained by using formula (16):

$$G(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2} \quad (16)$$

Then the above rim image is binarized:

$$E(x,y) = \begin{cases} 255 & G(x,y) \geq T_e^* \\ 0 & G(x,y) < T_e^* \end{cases} \quad (17)$$

wherein $T_e^*$ represents the optimal threshold, which can be obtained by the method described above; then, after combining the binarized contour pattern of the real-time image with the differenced binary image $BIN(x,y)$, the peripheral contour of the moving object can be obtained.

Step 7: Check whether the contour point coordinates of the moving object fall within the reactive area, and run the corresponding action.

Step 8: Repeat all the steps above;
2. An interactive real-time image recognition method, described as follows:

Step 1: Capture the image projected onto the image region by an image projection apparatus as a reference image by using a video camera;

Step 2: Continuously capture the real-time images projected by the image projection apparatus onto the image region by using the video camera, wherein the images contain active images. Then, check whether the reactive area is touched by any foreign object. The difference between the reference image in step 1 and the real-time image in step 2 can be defined by the following formula:

$$DIFF(x,y) = REF(x,y) - NEW(x,y) \quad (1)$$

Step 3: Subtract the grey-level values of said reference image from step 1 from the grey-level values of the real-time images from step 2 to get the residual image, which is binarized as denoted by formula (2):

$$BIN(x,y) = \begin{cases} 255 & DIFF(x,y) \geq T^* \\ 0 & DIFF(x,y) < T^* \end{cases} \quad (2)$$

The binarization removes the effect of noise.

Step 4: After binarization, the white segments correspond to the active images within the image. The active images can be segmented by using the line segment coding method; said line segment coding method is a run-based scheme for storing every bit of data in an object. When a segmented image is first detected in line 1, it is regarded as the first line of the first object and denoted 1-1. Then, two line segments are detected in the second line: since the first segment lies under 1-1, it is denoted 1-2, while the second segment is a new object denoted 2-1. Accordingly, when there is only one line segment under both object 1 and object 2 in the fourth line, the image originally regarded as two objects is actually one object, and that segment is denoted 1-4. After the whole image has been scanned, the merge procedure is performed. The information recorded for every object includes: area, circumference, object characteristics, segmented image size, width, and the total number of objects.

Step 5: When the active images and the activity reactive area have been segmented, the characteristic value of every object is calculated. Seven invariant moments are used to represent the object characteristics. The solution is described as follows. The $(k+l)$th-order moment of a binary image $b(m,n)$ is defined as:

$$M_{k,l} = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} m^k n^l \, b(m,n) \quad (18)$$

wherein the central moment is defined as:

$$\mu_{k,l} = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} (m - \bar{x})^k (n - \bar{y})^l \, b(m,n) \quad (19)$$

wherein $\bar{x} = M_{1,0}/M_{0,0}$ and $\bar{y} = M_{0,1}/M_{0,0}$ represent the mass centre of the object. Then the normalized central moment of formula (19) is defined as follows:

$$\eta_{k,l} = \frac{\mu_{k,l}}{(\mu_{0,0})^{(k+l+2)/2}} \quad (20)$$

The seven invariant moments can be obtained from the normalized second- and third-order moments:

$$\phi_1 = \eta_{2,0} + \eta_{0,2}$$
$$\phi_2 = (\eta_{2,0} - \eta_{0,2})^2 + 4\eta_{1,1}^2$$
$$\phi_3 = (\eta_{3,0} - 3\eta_{1,2})^2 + (3\eta_{2,1} - \eta_{0,3})^2$$
$$\phi_4 = (\eta_{3,0} + \eta_{1,2})^2 + (\eta_{2,1} + \eta_{0,3})^2$$
$$\phi_5 = (\eta_{3,0} - 3\eta_{1,2})(\eta_{3,0} + \eta_{1,2})[(\eta_{3,0} + \eta_{1,2})^2 - 3(\eta_{2,1} + \eta_{0,3})^2] + (3\eta_{2,1} - \eta_{0,3})(\eta_{2,1} + \eta_{0,3})[3(\eta_{3,0} + \eta_{1,2})^2 - (\eta_{2,1} + \eta_{0,3})^2]$$
$$\phi_6 = (\eta_{2,0} - \eta_{0,2})[(\eta_{3,0} + \eta_{1,2})^2 - (\eta_{2,1} + \eta_{0,3})^2] + 4\eta_{1,1}(\eta_{3,0} + \eta_{1,2})(\eta_{2,1} + \eta_{0,3})$$
$$\phi_7 = (3\eta_{2,1} - \eta_{0,3})(\eta_{3,0} + \eta_{1,2})[(\eta_{3,0} + \eta_{1,2})^2 - 3(\eta_{2,1} + \eta_{0,3})^2] + (3\eta_{1,2} - \eta_{3,0})(\eta_{2,1} + \eta_{0,3})[3(\eta_{3,0} + \eta_{1,2})^2 - (\eta_{2,1} + \eta_{0,3})^2]$$

Step 6: In a realistic pattern recognition process, the pattern of each category has characteristic vectors that vary within a range, and the point at which a sample falls within that range cannot be predicted precisely even when the range is known. This kind of random problem can be described using probability concepts. Here, a Bayesian classifier for Gaussian pattern categories is adopted to recognize the patterns to be identified in real time, which can be described as:

$$D_j(x) = -\frac{1}{2} \ln \lvert C_j \rvert - \frac{1}{2} \left[ (x - m_j)^T C_j^{-1} (x - m_j) \right], \quad j = 1, 2, \ldots, M \quad (21)$$

wherein $D_j$ is the decision function of the $j$th pattern; $x = [\phi_1, \ldots, \phi_7]^T$ is the feature vector; and $m_j$ and $C_j$ are the mean feature vector and covariance matrix of the $j$th category. When $D_j$ is maximal, the pattern is classified into the $j$th category. After the pattern recognition is completed, the position of the reactive area is decided. The recognition process can be summarized as:

(1) Train the pattern templates in advance, calculate $\phi_1$ through $\phi_7$ for each category, and calculate $m_j$ and $C_j$ of each category; the decision rule of each classifier is then complete.

(2) Segment the images acquired by the video camera into several sub-images through step 4, and then calculate $D_j(x)$ for each sub-image.

(3) Compare the values of $D_j(x)$, identify the maximum, and assign the pattern to the corresponding $k$th category.

After the recognition, the activity reactive area can be located precisely.

Step 7: Check whether the activity reactive area is touched by foreign objects and perform the corresponding actions.

Step 8: Repeat all the steps above;
3. The interactive real-time image recognition software method of the above-mentioned claim 2, wherein in said step 6: if there are several reactive areas in the images, there are several sub-reference images, and passive recognition steps 1 through 8 are utilized to determine whether a foreign object touches the sub-reference images.
Software Patent Description
BACKGROUND OF THE INVENTION
[0001]1. Field of the Invention
[0002]This invention relates to a passive and interactive real-time image recognition software method, particularly to a real-time image recognition software method that is unaffected by ambient light sources and noise, and which includes both passive and interactive recognition methods.
[0003]2. Description of the Related Art
[0004]According to current real-time image recognition technology, multimedia moving images are mostly projected by an LCD projector (or another image display device), and the resulting images are digitized through a video camera and an image capture interface.

[0005]By using related recognition technology, areas touched by the human body can be detected, recognized, and responded to accordingly. A prior recognition technology, discussed in U.S. Pat. No. 5,534,917, applies an AND operation to recognize patterns: it primarily uses the pattern in the image area as a stored template and then captures images with a video camera for identification, checking candidate patterns one by one. Although such a recognition method is simple and does not require a high operation speed, it is subject to the influence of various background lights, which results in recognition errors. Moreover, the hue and saturation of pattern templates previously stored in memory change after projection. Furthermore, the system may be installed in different settings, which makes the background illumination differ. Therefore, if this recognition technology is used, the color temperature and chromatic aberration must be calibrated after the system has been set up, which is a very complicated process.
[0006]Accordingly, to solve the above-mentioned problems, the present invention provides a recognition software method that is unaffected by changes in ambient light sources and by the color differences introduced by images projected by an image projection apparatus, wherein greyscale video is used so that less data is transferred and the cost of the hardware apparatus can be greatly reduced.

[0007]The objects, features, structure, and principles of the present invention will be more apparent from the following detailed description.
SUMMARY OF THE INVENTION
[0008]The present invention relates to a passive and interactive real-time image recognition software method, particularly to a real-time image recognition software method without the influence of ambient light sources and noise, which includes passive and interactive recognition methods. The method uses an image projection apparatus to project the images, builds a fixed background image (8-bit grey level) as a reference image, and continuously collects the real-time images (8-bit grey-level values) and reference images from the image area projected by the image projection apparatus using a video camera, in order to perform operations such as image differencing and binarization. The activity of a moving object can then be identified quickly and accurately to check whether the reactive area of the projected image is blocked, whereupon the corresponding action is performed.

[0009]Furthermore, since the present invention employs greyscale video to capture images, it is unnecessary to use a high-end image acquisition board or other expensive hardware as auxiliaries; a typical computer is sufficient to perform recognition accurately. Therefore, the cost can be greatly reduced. Accordingly, the real-time image recognition software method of the present invention is suitable for a variety of applications such as multimedia interactive advertisements, learning and instruction, games, and video games.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010]FIG. 1 is a schematic view showing the system architecture of the passive and interactive real-time image recognition software method in the present invention;
[0011]FIG. 2 is a diagram showing the reference image pre-captured by a video camera according to the passive and interactive real-time image recognition software method in the present invention;
[0012]FIG. 3 is a diagram showing the real-time image captured by a video camera according to the passive and interactive real-time image recognition software method in the present invention;
[0013]FIG. 4 is a diagram showing the differencing of the acquired reference images and real-time images according to the passive and interactive real-time image recognition software method in the present invention;
[0014]FIG. 5 is a diagram showing the optimal threshold at the grey-level value in the trough position according to the passive and interactive real-time image recognition software method in the present invention;
[0015]FIG. 6 is a diagram showing the areas between two optimal thresholds according to the passive and interactive real-time image recognition software method in the present invention;
[0016]FIG. 7 is a diagram showing the reference images and real-time images being differenced and then binarized according to the passive and interactive real-time image recognition software method in the present invention;
[0017]FIG. 8 is a diagram showing the four-connected masks in the passive and interactive real-time image recognition software method of the present invention;
[0018]FIG. 9 is a diagram showing the Sobel mask for (a) axis x and (b) axis y in the passive and interactive real-time image recognition software method of the present invention;
[0019]FIG. 10 is a diagram showing the interactive reference image in the passive and interactive real-time image recognition software method of the present invention;
[0020]FIG. 11 is a diagram showing the interactive real-time image in the passive and interactive real-time image recognition software method of the present invention;
[0021]FIG. 12 is a diagram showing the interactive reference image and real-time image being differenced and then binarized according to the passive and interactive real-time image recognition software method in the present invention;
[0022]FIG. 13 is a diagram showing the interactive object line segment coding section in the passive and interactive real-time image recognition software method of the present invention;
[0023]FIG. 14 is a diagram showing the interactive activity image and activity reactive area being segmented according to the passive and interactive real-time image recognition software method in the present invention;
[0024]FIG. 15 is a diagram showing the recognition results of the interactive activity reactive area according to the passive and interactive real-time image recognition software method in the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0025]FIG. 1 is a schematic view showing the system architecture of the passive and interactive real-time image recognition software method in the present invention. As shown in the figure, the method involves a personal computer 10, an image projection apparatus 11, image areas 11a, a video camera 12, and an image acquisition board 13.

[0026]The present invention is a passive and interactive real-time image recognition software method, which can be divided into passive and interactive methods depending on the type of identification object. The difference between the passive and interactive methods lies in the position of said activity reactive area: in the passive identification mode, the position of the activity reactive area is fixed; in the interactive mode, by contrast, the activity reactive area varies within a range on the image area projected by the image projection apparatus.
[0027]Further, the acquired images in the present invention are all of 8-bit grey level; the grey-level value ranges from 0 to 255.
[0028]The passive real-time image recognition method is described as follows: [0029]Step 1: Capture an image projected by an image projection apparatus 11 onto the image areas 11a as a reference image (5×5 grey-level values) (referring to FIGS. 1 and 2) by using a video camera 12; [0030]Step 2: Continuously capture real-time images (5×5 grey-level values) projected by the image projection apparatus 11 onto the image areas 11a (referring to FIGS. 1 and 3) by using the video camera 12, and check whether any foreign object touches the reactive area.

[0031]The difference between the reference image from step 1 (referring to FIG. 2) and the real-time image from step 2 (referring to FIG. 3) can be denoted as follows:

$$DIFF(x,y) = REF(x,y) - NEW(x,y) \quad (1)$$

[0032]Step 3: Subtract the grey-level values of the real-time image in step 2 from the grey-level values of the reference image in step 1 to obtain the grey-level distribution of the residual image (referring to FIG. 4), which indicates that foreign objects exist. [0033]Step 4: The difference image produced in step 3 usually contains noise, which can be treated as in formula (2):
[0033]$$BIN(x,y) = \begin{cases} 255 & DIFF(x,y) \geq T^* \\ 0 & DIFF(x,y) < T^* \end{cases} \quad (2)$$
The binarization eliminates the noise (referring to FIG. 7), where $T^*$ represents a threshold; in an 8-bit greyscale image the threshold ranges from 0 to 255. The optimal threshold can be decided by a statistical method: it lies in the trough of the grey-level histogram (referring to FIG. 5), and once $T^*$ is decided the image can be segmented into two classes (referring to FIG. 6). The requirement for the optimal threshold $T^*$ is that the weighted sum of the variance in $C_1$ and the variance in $C_2$ attains its minimum. It is assumed that the size of the image is $N = 5 \times 5$ and that the number of grey levels of an 8-bit grey-level image is $I = 256$. Then the probability of grey level $i$ can be denoted as:

$$P(i) = \frac{n_i}{N} \quad (3)$$

wherein $n_i$ indicates the number of occurrences of grey level $i$, with $0 \leq i \leq I-1$. According to the probability axioms, the following holds:

$$\sum_{i=0}^{I-1} P(i) = 1 \quad (4)$$

[0034]Suppose the fraction of pixels in $C_1$ is:

$$W_1 = \Pr(C_1) = \sum_{i=0}^{T^*} P(i) \quad (5)$$

[0035]While the fraction of pixels in $C_2$ is:

$$W_2 = \Pr(C_2) = \sum_{i=T^*+1}^{I-1} P(i) \quad (6)$$

[0036]Here $W_1 + W_2 = 1$ is satisfied.

[0037]The expected value of $C_1$ can be calculated as:

$$U_1 = \sum_{i=0}^{T^*} \frac{P(i)}{W_1} \times i \quad (7)$$

[0038]The expected value of $C_2$ is:

$$U_2 = \sum_{i=T^*+1}^{I-1} \frac{P(i)}{W_2} \times i \quad (8)$$

[0039]The variances of $C_1$ and $C_2$ can be obtained by using formulas (7) and (8):

$$\sigma_1^2 = \sum_{i=0}^{T^*} (i - U_1)^2 \frac{P(i)}{W_1} \quad (9)$$

$$\sigma_2^2 = \sum_{i=T^*+1}^{I-1} (i - U_2)^2 \frac{P(i)}{W_2} \quad (10)$$

[0040]The weighted sum of the variances in $C_1$ and $C_2$ is:

$$\sigma_w^2 = W_1 \sigma_1^2 + W_2 \sigma_2^2 \quad (11)$$

Substituting the values 0 through 255 into formula (11), the value for which formula (11) attains its minimum is the optimal threshold $T^*$.
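By way of illustration, the threshold search of formulas (3) through (11) and the binarization of formulas (1) and (2) can be sketched in Python with NumPy as follows. This is a minimal sketch, not the claimed implementation: the function names are illustrative, and the absolute value taken in the difference step is an assumption made so that the result fits an 8-bit image.

```python
import numpy as np

def optimal_threshold(img, levels=256):
    """Search T* = 0..254 for the threshold minimizing the weighted
    within-class variance of formula (11) (equivalent to Otsu's method)."""
    p = np.bincount(img.ravel(), minlength=levels) / img.size  # P(i), formula (3)
    i = np.arange(levels)
    best_t, best_var = 0, np.inf
    for t in range(levels - 1):
        w1 = p[:t + 1].sum()                  # W1, formula (5)
        w2 = 1.0 - w1                         # W2, formula (6); W1 + W2 = 1
        if w1 == 0.0 or w2 == 0.0:
            continue                          # one class is empty; skip
        u1 = (p[:t + 1] * i[:t + 1]).sum() / w1          # U1, formula (7)
        u2 = (p[t + 1:] * i[t + 1:]).sum() / w2          # U2, formula (8)
        v1 = ((i[:t + 1] - u1) ** 2 * p[:t + 1]).sum() / w1   # formula (9)
        v2 = ((i[t + 1:] - u2) ** 2 * p[t + 1:]).sum() / w2   # formula (10)
        var_w = w1 * v1 + w2 * v2             # sigma_w^2, formula (11)
        if var_w < best_var:
            best_t, best_var = t, var_w
    return best_t

def binarize_difference(ref, new):
    """DIFF per formula (1) (absolute value assumed) and BIN per formula (2)."""
    diff = np.abs(ref.astype(np.int16) - new.astype(np.int16)).astype(np.uint8)
    t_star = optimal_threshold(diff)
    return np.where(diff >= t_star, 255, 0).astype(np.uint8)
```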
[0041]Step 5: Although the residual noise has been removed through binarization in step 4, the moving object becomes fragmented. This can be repaired by using four-connected masks (referring to FIG. 8) and the inflation (dilation) and erosion algorithms. The inflation algorithm is described as follows: when $M_b(i,j) = 255$, set the mask of the 4-neighbour points as

$$M_b(i,j-1) = M_b(i,j+1) = M_b(i-1,j) = M_b(i+1,j) = 255 \quad (12)$$

[0042]The erosion algorithm is described as follows: [0043]when $M_b(i,j) = 0$, set the mask of the 4-neighbour points as

$$M_b(i,j-1) = M_b(i,j+1) = M_b(i-1,j) = M_b(i+1,j) = 0 \quad (13)$$

[0044]Convolving the above-mentioned masks with the binarized image eliminates the fragmentation.
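The 4-neighbour inflation and erosion of formulas (12) and (13) admit a direct array formulation. The sketch below reads the neighbourhood from the input mask, so each call performs exactly one inflation or erosion pass; applying inflation followed by erosion (a morphological closing) is one plausible way to repair the fragmented object, though the patent does not fix the order or number of passes.

```python
import numpy as np

def inflate4(mb):
    """Formula (12): wherever M_b(i,j) = 255, set its 4-neighbours to 255."""
    out, on = mb.copy(), mb == 255
    out[:-1, :][on[1:, :]] = 255    # neighbour (i-1, j)
    out[1:, :][on[:-1, :]] = 255    # neighbour (i+1, j)
    out[:, :-1][on[:, 1:]] = 255    # neighbour (i, j-1)
    out[:, 1:][on[:, :-1]] = 255    # neighbour (i, j+1)
    return out

def erode4(mb):
    """Formula (13): wherever M_b(i,j) = 0, set its 4-neighbours to 0."""
    out, off = mb.copy(), mb == 0
    out[:-1, :][off[1:, :]] = 0
    out[1:, :][off[:-1, :]] = 0
    out[:, :-1][off[:, 1:]] = 0
    out[:, 1:][off[:, :-1]] = 0
    return out
```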
[0045]Step 6: Next, a lateral mask can be used to obtain the contours of the moving object; here the Sobel mask (an image contour operation mask, referring to FIG. 9) is used to obtain the object contours. Convolving the Sobel mask with the real-time image can be denoted by formulas (14) and (15):

$$G_x(x,y) = (NEW(x-1,y+1) + 2 \times NEW(x,y+1) + NEW(x+1,y+1)) - (NEW(x-1,y-1) + 2 \times NEW(x,y-1) + NEW(x+1,y-1)) \quad (14)$$

$$G_y(x,y) = (NEW(x+1,y-1) + 2 \times NEW(x+1,y) + NEW(x+1,y+1)) - (NEW(x-1,y-1) + 2 \times NEW(x-1,y) + NEW(x-1,y+1)) \quad (15)$$

The rim of the acquired image can be obtained by using formula (16):

[0046]$$G(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2} \quad (16)$$

Then the above rim image is binarized:

[0047]$$E(x,y) = \begin{cases} 255 & G(x,y) \geq T_e^* \\ 0 & G(x,y) < T_e^* \end{cases} \quad (17)$$

[0048]Wherein $T_e^*$ represents the optimal threshold, which can be obtained by the method described above; then, after combining the binarized contour pattern of the real-time image with the differenced binary image $BIN(x,y)$, the peripheral contour of the moving object can be obtained.
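Formulas (14) through (17) likewise reduce to a few array operations. In this sketch the Sobel responses are formed with shifted slices rather than an explicit convolution routine, the image border is left at zero, and the edge threshold is passed in as a parameter, since the patent obtains $T_e^*$ with the same statistical method used for $T^*$.

```python
import numpy as np

def sobel_edges(new, t_e):
    """G_x and G_y per formulas (14)-(15), magnitude G per (16),
    binarized contour E per (17); border pixels are left at zero."""
    f = new.astype(np.float64)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    # centre pixel (r, c) ranges over the interior of the image
    gx[1:-1, 1:-1] = (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:]) \
                   - (f[:-2, :-2] + 2 * f[1:-1, :-2] + f[2:, :-2])
    gy[1:-1, 1:-1] = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]) \
                   - (f[:-2, :-2] + 2 * f[:-2, 1:-1] + f[:-2, 2:])
    g = np.sqrt(gx ** 2 + gy ** 2)                       # formula (16)
    return np.where(g >= t_e, 255, 0).astype(np.uint8)   # formula (17)
```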
[0049]Step 7: Check whether the contour point coordinates of the moving object fall within the reactive area and run the corresponding action. [0050]Step 8: Repeat all the steps above.
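Putting the passive steps together, the overall loop might look as follows. This is a sketch under several assumptions that the patent leaves open: grab_frame stands in for whatever camera/acquisition-board API is used, the reactive area is supplied as a boolean mask, the edge threshold is a placeholder value, and the "combining" of the contour image with $BIN(x,y)$ is read here as a logical AND.

```python
import numpy as np
# relies on the sketches above: binarize_difference, inflate4, erode4, sobel_edges

def passive_loop(grab_frame, reactive_mask, on_touch, t_e=100.0):
    """Steps 1-8 of the passive method: capture a reference once, then
    repeatedly difference, binarize, repair, extract contours, and test
    the contour points against the fixed reactive area."""
    ref = grab_frame()                              # step 1: reference image
    while True:                                     # step 8: repeat
        new = grab_frame()                          # step 2: real-time image
        bin_img = binarize_difference(ref, new)     # steps 3-4
        repaired = erode4(inflate4(bin_img))        # step 5: repair fragments
        edges = sobel_edges(new, t_e)               # step 6: contours
        contour = (edges == 255) & (repaired == 255)
        if np.any(contour & reactive_mask):         # step 7: hit test
            on_touch()
```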
[0051]The remaining steps of the interactive real-time image recognition software method are image differencing, binarization, image segmentation, reactive-area pattern characteristic acquisition, and reactive-area pattern recognition, where the reactive-area pattern characteristics are acquired offline in advance and the reactive-area pattern recognition runs as a real-time process. Since the images projected in the reactive area can be of any shape and may rotate or shift, the pattern characteristic values must not be influenced by rotation, shifting, shrinking, or magnification. The pattern characteristic values adopted here are the invariant moments of the pattern to be identified, which are not affected by any shifting, rotation, or size change.
The said interactive real-time image recognition software method is described as follows: [0052]Step 1: Capture the image projected onto the image region 11a by an image projection apparatus 11 as a reference image (referring to FIGS. 1 and 10) by using video camera 12; [0053]Step 2: Continuously capture the real-time image (referring to FIG. 11) projected by the image projection apparatus 11 onto the image region 11a by using the video camera 12, wherein the images contain active images 20. Then, check whether the reactive area 21 is touched by any foreign object.
[0054]The difference between the reference image in step 1 (referring to FIG. 10) and the real-time image in step 2 (referring to FIG. 11) can be defined by the following formula:

$$DIFF(x,y) = REF(x,y) - NEW(x,y) \quad (1)$$

[0055]Step 3: Subtract the grey-level values of said reference image (referring to FIG. 10) from step 1 from the grey-level values of the real-time images (referring to FIG. 11) from step 2 to get the residual image, which is binarized as denoted by formula (2):

[0055]$$BIN(x,y) = \begin{cases} 255 & DIFF(x,y) \geq T^* \\ 0 & DIFF(x,y) < T^* \end{cases} \quad (2)$$

The binarization removes the effect of noise (referring to FIG. 12).
[0056]Step 4: After binarization, the white segments (referring to FIG. 12) correspond to the active images 20 and 21 within the image. The active images 20 and 21 can be segmented by using the line segment coding method (referring to FIG. 14); said line segment coding method (referring to FIG. 13) is a run-based scheme for storing every bit of data in an object. When a segmented image is first detected in line 1, it is regarded as the first line of the first object and denoted 1-1. Then, two line segments are detected in the second line: since the first segment lies under 1-1, it is denoted 1-2, while the second segment is a new object denoted 2-1. Accordingly, when there is only one line segment under both object 1 and object 2 in the fourth line, the image originally regarded as two objects is actually one object, and that segment is denoted 1-4. After the whole image has been scanned, the merge procedure is performed, as sketched below.
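The line segment coding of step 4 behaves like run-based connected-component labelling: each row is scanned for runs of 255-pixels, each run inherits the label of an overlapping run in the previous row (or opens a new object), and the label equivalences discovered along the way are resolved in the final merge pass. A minimal sketch with illustrative names follows; the patent's per-object bookkeeping (area, circumference, and so on) is omitted.

```python
import numpy as np

def label_line_segments(binary):
    """Run-based labelling in the spirit of the line segment coding of
    step 4: label runs row by row, then merge equivalent labels."""
    parent = {}                        # label -> parent label (union-find)
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    next_label = 1
    prev_runs = []                     # (start, end, label) runs of previous row
    out = np.zeros(binary.shape, dtype=np.int32)
    for r in range(binary.shape[0]):
        row, runs, c = binary[r], [], 0
        while c < binary.shape[1]:
            if row[c] != 255:
                c += 1
                continue
            s = c
            while c < binary.shape[1] and row[c] == 255:
                c += 1                 # run of 255-pixels occupies [s, c)
            lab = None
            for ps, pe, pl in prev_runs:
                if ps < c and pe > s:              # overlaps a run above
                    if lab is None:
                        lab = find(pl)             # inherit its label
                    else:
                        parent[find(pl)] = lab     # record an equivalence
            if lab is None:                        # no overlap: new object
                lab = next_label
                parent[lab] = lab
                next_label += 1
            out[r, s:c] = lab
            runs.append((s, c, lab))
        prev_runs = runs
    for lab in list(parent):                       # final merge pass
        parent[lab] = find(lab)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            if out[r, c]:
                out[r, c] = parent[out[r, c]]
    return out
```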
[0057]Wherein the information recorded for every object includes: area, circumference, object characteristics, segmented image size, width, and the total number of objects. [0058]Step 5: When the active images 20 and the activity reactive area 21 have been segmented, the characteristic value of every object is calculated. Seven invariant moments are used to represent the object characteristics. The solution is described as follows:
[0059]The $(k+l)$th-order moment of a binary image $b(m,n)$ is defined as:

$$M_{k,l} = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} m^k n^l \, b(m,n) \quad (18)$$

[0060]Wherein the central moment is defined as:

$$\mu_{k,l} = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} (m - \bar{x})^k (n - \bar{y})^l \, b(m,n) \quad (19)$$

[0061]Wherein

$$\bar{x} = \frac{M_{1,0}}{M_{0,0}}, \qquad \bar{y} = \frac{M_{0,1}}{M_{0,0}}$$

represent the mass centre of the object.

[0062]Then the normalized central moment of formula (19) is defined as follows:

$$\eta_{k,l} = \frac{\mu_{k,l}}{(\mu_{0,0})^{(k+l+2)/2}} \quad (20)$$

[0063]The seven invariant moments can be obtained from the normalized second- and third-order moments:

$$\phi_1 = \eta_{2,0} + \eta_{0,2}$$
$$\phi_2 = (\eta_{2,0} - \eta_{0,2})^2 + 4\eta_{1,1}^2$$
$$\phi_3 = (\eta_{3,0} - 3\eta_{1,2})^2 + (3\eta_{2,1} - \eta_{0,3})^2$$
$$\phi_4 = (\eta_{3,0} + \eta_{1,2})^2 + (\eta_{2,1} + \eta_{0,3})^2$$
$$\phi_5 = (\eta_{3,0} - 3\eta_{1,2})(\eta_{3,0} + \eta_{1,2})[(\eta_{3,0} + \eta_{1,2})^2 - 3(\eta_{2,1} + \eta_{0,3})^2] + (3\eta_{2,1} - \eta_{0,3})(\eta_{2,1} + \eta_{0,3})[3(\eta_{3,0} + \eta_{1,2})^2 - (\eta_{2,1} + \eta_{0,3})^2]$$
$$\phi_6 = (\eta_{2,0} - \eta_{0,2})[(\eta_{3,0} + \eta_{1,2})^2 - (\eta_{2,1} + \eta_{0,3})^2] + 4\eta_{1,1}(\eta_{3,0} + \eta_{1,2})(\eta_{2,1} + \eta_{0,3})$$
$$\phi_7 = (3\eta_{2,1} - \eta_{0,3})(\eta_{3,0} + \eta_{1,2})[(\eta_{3,0} + \eta_{1,2})^2 - 3(\eta_{2,1} + \eta_{0,3})^2] + (3\eta_{1,2} - \eta_{3,0})(\eta_{2,1} + \eta_{0,3})[3(\eta_{3,0} + \eta_{1,2})^2 - (\eta_{2,1} + \eta_{0,3})^2]$$
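The moments of formulas (18) through (20) and the seven invariants $\phi_1$ to $\phi_7$ can be computed directly from a binary object image. In the sketch below, b is assumed to hold values 0 and 1 (divide a 0/255 image by 255 first), and the exponent $(k+l+2)/2$ in the normalization follows the reconstruction of formula (20) above.

```python
import numpy as np

def hu_moments(b):
    """Formulas (18)-(20) and the seven invariant moments phi_1..phi_7
    for a binary object image b(m, n) with values 0/1."""
    m_idx, n_idx = np.indices(b.shape)
    m_idx, n_idx = m_idx.astype(np.float64), n_idx.astype(np.float64)
    def M(k, l):                              # raw moment, formula (18)
        return (m_idx ** k * n_idx ** l * b).sum()
    m00 = M(0, 0)                             # object area; must be nonzero
    xb, yb = M(1, 0) / m00, M(0, 1) / m00     # mass centre
    def mu(k, l):                             # central moment, formula (19)
        return ((m_idx - xb) ** k * (n_idx - yb) ** l * b).sum()
    def eta(k, l):                            # normalized moment, formula (20)
        return mu(k, l) / m00 ** ((k + l + 2) / 2.0)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = e20 + e02
    phi2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    phi3 = (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2
    phi4 = (e30 + e12) ** 2 + (e21 + e03) ** 2
    phi5 = ((e30 - 3 * e12) * (e30 + e12)
            * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
            + (3 * e21 - e03) * (e21 + e03)
            * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    phi6 = ((e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
            + 4 * e11 * (e30 + e12) * (e21 + e03))
    phi7 = ((3 * e21 - e03) * (e30 + e12)
            * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
            + (3 * e12 - e30) * (e21 + e03)
            * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    return np.array([phi1, phi2, phi3, phi4, phi5, phi6, phi7])
```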
[0064]Step 6: In a realistic pattern recognition process, the pattern of each category has characteristic vectors that vary within a range, and the point at which a sample falls within that range cannot be predicted precisely even when the range is known. This kind of random problem can be described using probability concepts. Here, a Bayesian classifier for Gaussian pattern categories is adopted to recognize the patterns to be identified in real time, which can be described as:

[0064]$$D_j(x) = -\frac{1}{2} \ln \lvert C_j \rvert - \frac{1}{2} \left[ (x - m_j)^T C_j^{-1} (x - m_j) \right], \quad j = 1, 2, \ldots, M \quad (21)$$
[0065]Wherein $D_j$ is the decision function of the $j$th pattern; $x = [\phi_1, \ldots, \phi_7]^T$ is the feature vector; and $m_j$ and $C_j$ are the mean feature vector and covariance matrix of the $j$th category. When $D_j$ is maximal, the pattern is classified into the $j$th category. After the pattern recognition is completed, the position of the reactive area is decided. If there are several reactive areas 21 in the images, there are several sub-reference images, and passive recognition steps 1 through 8 are utilized to determine whether a foreign object touches the sub-reference images. The recognition process can be summarized as follows (a code sketch of this classifier is given after step 8): [0066](1) Train the pattern templates in advance, calculate $\phi_1$ through $\phi_7$ for each category, and calculate $m_j$ and $C_j$ of each category; the decision rule of each classifier is then complete. [0067](2) Segment the images acquired by video camera 12 into several sub-images through step 4, and then calculate $D_j(x)$ for each sub-image. [0068](3) Compare the values of $D_j(x)$, identify the maximum, and assign the pattern to the corresponding $k$th category. [0069]After the recognition, the activity reactive area 21 can be located precisely (referring to FIG. 15). [0070]Step 7: Check whether the activity reactive area 21 is touched by foreign objects and perform the corresponding actions. [0071]Step 8: Repeat all the steps above.
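Finally, the decision function of formula (21) and the three summarized steps can be sketched as follows. The training helper merely estimates $m_j$ and $C_j$ from a stack of template feature vectors, and it assumes each $C_j$ is well-conditioned (in practice a small diagonal term may be needed), a point the patent does not address.

```python
import numpy as np

def train_category(templates):
    """Offline step (1): estimate m_j and C_j from an array of
    phi-vectors, one row per template image of the category."""
    t = np.asarray(templates, dtype=np.float64)
    return t.mean(axis=0), np.cov(t, rowvar=False)

def decision_value(x, m_j, c_j):
    """D_j(x) of formula (21):
    -1/2 ln|C_j| - 1/2 (x - m_j)^T C_j^{-1} (x - m_j)."""
    d = x - m_j
    _, logdet = np.linalg.slogdet(c_j)             # ln|C_j|
    return -0.5 * logdet - 0.5 * (d @ np.linalg.solve(c_j, d))

def classify(x, means, covs):
    """Steps (2)-(3): evaluate every D_j(x) and return the index of
    the maximal one as the winning category k."""
    scores = [decision_value(x, m, c) for m, c in zip(means, covs)]
    return int(np.argmax(scores))
```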
