
International Journal of IT, Engineering and Applied Sciences Research (IJIEASR) Volume 2, No. 1, January 2013

ISSN: 2319-4413

Signature Recognition & Verification System Using Back Propagation Neural Network

Nilesh Y. Choudhary, GF'S GCOE, Jalgaon, India
Mrs. Rupal Patil, GF'S GCOE, Jalgaon, India
Dr. Umesh Bhadade, GF'S GCOE, Jalgaon, India
Prof. Bhupendra M. Chaudhari, Govt. Polytechnics Nadurbar, India

ABSTRACT

The fact that the signature is widely used as a means of personal identification for humans creates the need for an automatic verification system. Verification can be performed either offline or online, depending on the application. In either case the signature can be handled as an image and recognized using computer vision and neural network techniques, and with modern computers there is a need to develop fast algorithms for signature recognition. There are various approaches to signature recognition, with considerable scope for research. In this paper, off-line signature recognition and verification using a back propagation neural network is proposed, where the signature is captured and presented to the user in an image format. Signatures are verified based on features extracted with invariant central moments and modified Zernike moments, chosen for their invariance because signatures are hampered by large variations in size, translation, rotation and shearing. Before extracting the features, preprocessing of the scanned image is necessary to isolate the signature part and to remove any spurious noise present. The system is initially trained using a database of signatures obtained from 56 individuals whose signatures have to be authenticated by the system. For each subject a mean signature is obtained by integrating the above features derived from a set of his/her genuine sample signatures. This signature recognition and verification system is designed using MATLAB. The work has been tested and found suitable for its purpose.

INTRODUCTION

The handwritten signature is one of the most widely accepted personal attributes for identity verification. The written signature is regarded as the primary means of identifying the signer of a written document, based on the implicit assumption that a person's normal signature changes slowly and is very difficult to erase, alter or forge without detection. Compared with other electronic identification methods such as fingerprint scanning and retinal vascular pattern screening, the handwritten signature is a familiar way to authorize transactions and authenticate human identity, so it is easier for people to migrate from the popular pen-and-paper signature to one where the handwritten signature is captured and verified electronically.

There are two main streams in the signature recognition task. The first approach uses time-related information captured while the signature is being made, and models the signing person; the other approach takes the signature as a static two-dimensional image which does not contain any time-related information [1]. In short, signature recognition can be divided into two groups: online and offline. In online signature recognition, signatures are acquired during the writing process with a special instrument such as a pen tablet, so dynamic information such as velocity, acceleration and pen pressure is always available. Many methods have been developed and widely employed for online signature recognition, for example Artificial Neural Networks (ANN) [2, 3], dynamic time warping (DTW) [4, 5] and hidden Markov models (HMM) [6, 7]. Off-line recognition, in contrast, deals only with signature images acquired by a scanner or a digital camera. In general, offline signature recognition and verification is a challenging problem: unlike the on-line case, where dynamic aspects of the signing action are captured directly as the handwriting trajectory, the dynamic information contained in an off-line signature is highly degraded. Handwriting features such as the stroke order, writing-speed variation and skillfulness need to be recovered from the grey-level pixels. Over the last few decades many approaches developed in the pattern recognition area have been applied to the offline signature verification problem. Justino [8] proposed an off-line signature verification system using hidden Markov models. Zhang, Fu and Yan [9] proposed a handwritten signature verification system based on Neural 'Gas' based Vector


Quantization. Vélez, Sánchez and Moreno [10] proposed a robust off-line signature verification system using compression networks and positional cuttings. Further related approaches are reported in [11, 12, 13].

The signature recognition and verification system shown in Fig 1 is broadly divided into three subparts: 1) preprocessing, 2) feature extraction, and 3) recognition and verification. The input signature is captured from a scanner or a high-resolution digital camera, which provides the output image as a BMP colour image. The preprocessing algorithm provides the data required for the later stages. In the feature extraction phase the invariant central moments and Zernike moments are used to extract the features for classification. For classification, a back propagation neural network is used, providing high accuracy with low computational complexity in both the training and testing phases of the system.

1. SIGNATURE DATABASE

For training and testing of the signature recognition and verification system, 672 signatures are used, taken from 56 persons; templates of the signatures are shown in Fig 2. For training the system, the signatures of all 56 persons are used: each person signed 8 genuine signatures and 4 forgery signatures, giving a total of 672 (12 x 56) signatures. In order to make the system robust, signers were asked to use as much variation in signature size and shape as possible, and the signatures were collected at different times without the signers seeing the signatures they had signed before. For testing the system, another 112 genuine signatures and 112 forgery signatures were taken from the same 56 persons.

Fig 2. Signature Templates
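For illustration, the following is a minimal MATLAB sketch of the preprocessing chain of this pipeline, assuming the Image Processing Toolbox and a hypothetical input file name; the feature extraction and classification stages are sketched in the corresponding sections below.

```matlab
% Minimal sketch of the preprocessing chain (Sections 2.1-2.4); the file name is hypothetical.
img     = imread('signature_01.bmp');             % scanned BMP colour image
graySig = rgb2gray(img);                          % 2.1 colour -> grey scale
smooth  = medfilt2(graySig, [3 3]);               % 2.2 median-filter noise reduction
bw      = ~im2bw(smooth, graythresh(smooth));     % 2.3 thresholding; signature pixels become 1
normSig = imresize(double(bw), [50 50]) > 0.5;    % 2.4 size normalization to 50x50
% The central and Zernike moments (Section 3) are then computed from normSig, and the
% resulting feature vector is presented to the back propagation network (Section 4).
```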


2. PREPROCESSING

Preprocessing algorithms are data-conditioning algorithms which prepare the data for the feature extraction process; they establish the link between real-world data and the recognition and verification system. The preprocessing of the input signature pattern directly facilitates pattern description and affects the quality of that description. Any image-processing application suffers from noise such as touching line segments, isolated pixels and smeared images. This noise may cause severe distortions in the digital image and hence result in ambiguous features and a correspondingly poor recognition and verification rate. The preprocessing step is applied in both the training and testing phases. Background elimination, noise reduction, width normalization and skeletonization are its sub-steps.

2.1 Converting Color Image to Gray Scale Image

Today, almost all image capturing and scanning devices give their output in color format. A color image consists of a coordinate matrix and three color matrices: the coordinate matrix contains the X, Y coordinate values of the image, and the color matrices are labeled red (R), green (G) and blue (B). The techniques presented in this study are based on grey scale images; therefore, scanned or captured color images are first converted to grey scale using equation (1):

Gray = 0.299*Red + 0.587*Green + 0.114*Blue   (1)

Fig 3. Scanned Image

Fig 4. Colour to Gray Scale Image
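As an illustration, eq. (1) can be applied directly to the colour planes; the file name below is a hypothetical example, and rgb2gray from the Image Processing Toolbox uses essentially the same standard luminance weights.

```matlab
% Weighted sum of the R, G and B planes as in eq. (1).
rgb     = im2double(imread('signature_01.bmp'));                    % hypothetical input file
graySig = 0.299*rgb(:,:,1) + 0.587*rgb(:,:,2) + 0.114*rgb(:,:,3);   % eq. (1)
% Toolbox equivalent:
% graySig = im2double(rgb2gray(imread('signature_01.bmp')));
```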

2.2 Noise Reduction

Noise reduction (also called "smoothing" or "noise filtering") is one of the most important processes in image processing. Images are often corrupted by positive and negative impulses stemming from decoding errors or noisy channels. An image may also be degraded by undesirable effects of illumination and other objects in the environment. The median filter is widely used for smoothing and restoring images corrupted by noise. It is a non-linear process, especially useful for reducing impulsive or salt-and-pepper noise. In a median filter, a window slides over the image, and for each position of the window the median intensity of the pixels inside it determines the intensity of the pixel located at the middle of the window. Unlike linear filters such as the mean filter, the median filter has the attractive property of suppressing impulse noise while preserving edges, and it is used in this study because of this edge-preserving feature [14, 15, 16, 17].

Fig 5. Noise Removal
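A minimal sketch of the 3x3 median filtering step, assuming the Image Processing Toolbox and the hypothetical file name used earlier:

```matlab
% Each output pixel is the median of the 3x3 neighbourhood around the corresponding input pixel.
graySig  = rgb2gray(imread('signature_01.bmp'));   % hypothetical scanned input
cleanSig = medfilt2(graySig, [3 3]);               % impulse / salt-and-pepper noise suppression
```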


2.3 Background Elimination and Border Clearing

Many image processing algorithms require the separation of objects from the image background. Thresholding is the simplest and most widely applicable method for this purpose and is commonly used in image segmentation [18, 19]. Thresholding means choosing a threshold value H and assigning 0 to pixels with values smaller than or equal to H and 1 to those with values greater than H. We use thresholding to separate the signature pixels from the background pixels. In this application we are interested in dark objects on a light background; therefore a threshold value H, called the brightness threshold, is appropriately chosen and applied to the image pixels f(x, y) as in equation (2):

If f(x, y) ≥ H then f(x, y) = Background else f(x, y) = Object   (2)

The signature, once located and separated from the background, is converted into a binary image in which the white background takes the pixel value 1. Vertical and horizontal (histogram) projections are used for border clearing: for both directions, the zero-valued (signature) pixels in every row and column are counted, and the resulting projection histograms are used to clear the borders.
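The sketch below assumes that Otsu's method (graythresh) is an acceptable way to pick the brightness threshold H, which the paper does not specify, and reduces the projection-based border clearing to a simple bounding-box crop of the signature pixels.

```matlab
% Eq. (2): pixels darker than the threshold H are treated as signature (object) pixels.
graySig = im2double(rgb2gray(imread('signature_01.bmp')));  % hypothetical input file
H       = graythresh(graySig);        % brightness threshold (assumption: Otsu's method)
object  = graySig < H;                % object = 1 for signature pixels, 0 for background
% Border clearing via horizontal and vertical projections of the object pixels:
rowHist = sum(object, 2);             % horizontal projection (per row)
colHist = sum(object, 1);             % vertical projection (per column)
rows    = find(rowHist > 0);
cols    = find(colHist > 0);
cropped = object(rows(1):rows(end), cols(1):cols(end));   % signature inside its borders
```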

2.4 Signature Normalization

Signature dimensions may vary due to irregularities in the image scanning and capturing process. Furthermore, the height and width of signatures vary from person to person and, sometimes, even the same person may sign at different sizes. We therefore eliminate these size differences and obtain a standard signature size for all signatures. After this normalization process all signatures have the same dimensions; in this study a normalized size of 50x50 pixels is used for all signatures that are processed further. During normalization the aspect ratio between the width and height of a signature is kept intact. The normalization process makes use of equations (3) and (4):

x_{new} = \frac{x_{old} - x_{min}}{x_{max} - x_{min}} \, M \qquad (3)

y_{new} = \frac{y_{old} - y_{min}}{y_{max} - y_{min}} \, M \qquad (4)

In these equations, x_new, y_new are the pixel coordinates of the normalized signature, x_old, y_old are the pixel coordinates of the original signature, and M is one of the dimensions (width or height) of the normalized signature.

Fig 6. Normalized Image
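A sketch of eqs. (3) and (4), mapping the cropped signature coordinates onto a 50x50 grid; "cropped" is the bounding-box image from the previous step, replaced here by a random stand-in so the snippet runs on its own. Note that scaling x and y independently stretches the signature to fill the grid; keeping the aspect ratio intact, as stated above, would instead use a single common scale factor.

```matlab
M = 50;
cropped = rand(37, 120) > 0.9;            % stand-in for the cropped binary signature
[yOld, xOld] = find(cropped);             % coordinates of signature pixels
xNew = round((xOld - min(xOld)) ./ (max(xOld) - min(xOld)) * (M - 1)) + 1;   % eq. (3)
yNew = round((yOld - min(yOld)) ./ (max(yOld) - min(yOld)) * (M - 1)) + 1;   % eq. (4)
normSig = false(M, M);
normSig(sub2ind([M M], yNew, xNew)) = true;   % 50x50 normalized binary signature
```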

3. FEATURE EXTRACTION

Feature extraction, as defined by Devijver and Kittler [20], is extracting from the raw data the information most relevant for the classification stage, so that within-class pattern variation is minimized and inter-class variation is increased. Achieving high recognition performance in a signature recognition system is therefore strongly influenced by the selection of efficient feature extraction methods, taking into consideration the domain of the application and the type of classifier used [21]. An efficient feature extraction algorithm should have two characteristics: invariance and reconstructability [21]. Features that are invariant to certain transformations of the signature make it possible to recognize many variations of a signature; such transformations include translation, scaling, rotation, stretching, skewing and mirroring. The ability to reconstruct a signature from its extracted features, on the other hand, ensures that complete information about the signature shape is present in those features. In this feature extraction step two well-known feature sets from pattern recognition are used: one is based on the invariant central moments designed by Hu [22], which are used for scale and translation normalization, and the other is the modified Zernike moments [23], which are used for rotation normalization.

3.1 Invariant Central Moment

The moments of order (u + v) of an image composed of binary pixels B(x, y) are proposed in [24], [25] as shown in eq. (5):

m_{uv} = \sum_{x} \sum_{y} x^{u} y^{v} B(x, y) \qquad (5)

The body's area A and the image's centre of mass (\bar{x}, \bar{y}) are found from eq. (6):

A = m_{00}, \qquad \bar{x} = \frac{m_{10}}{m_{00}}, \qquad \bar{y} = \frac{m_{01}}{m_{00}} \qquad (6)


The central moments, which are translation invariant, are given by eq. (7):

\mu_{uv} = \sum_{x} \sum_{y} (x - \bar{x})^{u} (y - \bar{y})^{v} B(x, y) \qquad (7)

Finally, the normalized central moments, which are translation and scale invariant, are derived from the central moments as shown in eq. (8):

\eta_{uv} = \frac{\mu_{uv}}{\mu_{00}^{K}}, \qquad K = 1 + \frac{u + v}{2} \ \text{for} \ u + v \ge 2 \qquad (8)
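The moment definitions in eqs. (5)-(8) translate almost literally into MATLAB. The snippet below is a sketch using a random stand-in for the normalized binary signature; the particular set of normalized moments collected at the end is only an illustrative choice, not necessarily the exact feature set used in the paper.

```matlab
normSig = rand(50, 50) > 0.85;                    % stand-in for the 50x50 binary signature
B = double(normSig);
[nRows, nCols] = size(B);
[X, Y] = meshgrid(1:nCols, 1:nRows);              % pixel coordinate grids

m = @(u, v) sum(sum((X.^u) .* (Y.^v) .* B));      % eq. (5): raw moment m_uv
A    = m(0, 0);                                   % eq. (6): area
xBar = m(1, 0) / A;                               % eq. (6): centre of mass
yBar = m(0, 1) / A;

mu  = @(u, v) sum(sum(((X - xBar).^u) .* ((Y - yBar).^v) .* B));   % eq. (7): central moments
eta = @(u, v) mu(u, v) / mu(0, 0)^(1 + (u + v)/2);                 % eq. (8), valid for u+v >= 2

centralFeatures = [eta(2,0) eta(0,2) eta(1,1) eta(3,0) eta(0,3) eta(2,1) eta(1,2)];
```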

3.2 Zernike Moments

Zernike polynomials are a set of complex polynomials which form a complete orthogonal set over the interior of the unit circle [26]. The form of the polynomial is shown in eq. (10):

V_{nm}(x, y) = V_{nm}(\rho, \theta) = R_{nm}(\rho) e^{jm\theta} \qquad (10)

where \rho is the length of the vector from the origin to the point (x, y), \theta is the angle between this vector and the x axis in the counterclockwise direction, and R_{nm}(\rho) is the radial polynomial

R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^{s} (n-s)!}{s! \left(\frac{n+|m|}{2}-s\right)! \left(\frac{n-|m|}{2}-s\right)!} \, \rho^{n-2s}

Zernike moments are the projections of the image function onto these orthogonal basis functions. The Zernike moment of order n with repetition m for a digital image f(x, y) is given by eq. (11):

A_{nm} = \frac{n+1}{\pi} \sum_{x} \sum_{y} f(x, y) \, V_{nm}^{*}(\rho, \theta), \qquad x^{2} + y^{2} \le 1 \qquad (11)

where * is the complex conjugate operator. To calculate the Zernike moments of a given image, its pixels are mapped to the unit circle x^2 + y^2 ≤ 1. This is done by taking the geometrical centre of the image as the origin and then scaling its bounding rectangle into the unit circle, as shown in Fig 7. Due to the orthogonality of the Zernike basis, the part of the original image inside the unit circle can be approximated using its Zernike moments A_{nm} up to a given order n_{max}, as in eq. (12):

\hat{f}(x, y) = \sum_{n=0}^{n_{max}} \sum_{m} A_{nm} V_{nm}(\rho, \theta) \qquad (12)

This orthogonality property allows easy image reconstruction from the Zernike moments by simply adding the information content of each individual order. Moreover, Zernike moments have simple rotational transformation properties: the Zernike moments of a rotated image have magnitudes identical to those of the original image and merely acquire a phase shift upon rotation. Therefore, the magnitudes of the Zernike moments are rotation-invariant features of the underlying image. Translation and scale invariance, on the other hand, are obtained by shifting and scaling the image into the unit circle.

Fig 7. Rotation Normalization
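The magnitude |A_nm| of eq. (11) can be computed directly from its definition. The function below is a sketch (saved, for example, as zernikeMoment.m); the order n = 4 and repetition m = 2 in the usage comment are only an example, not the set of orders used in the paper.

```matlab
function A = zernikeMoment(img, n, m)
% Zernike moment A_nm (eqs. (10)-(11)) of a square image mapped into the unit circle.
% img: binary or grey-level N x N image; n: order; m: repetition, |m| <= n and n-|m| even.
    N = size(img, 1);
    [X, Y] = meshgrid(linspace(-1, 1, N), linspace(1, -1, N));  % scale the image into [-1, 1]^2
    rho    = sqrt(X.^2 + Y.^2);
    theta  = atan2(Y, X);
    inside = rho <= 1;                       % only pixels inside the unit circle contribute

    % Radial polynomial R_nm(rho)
    R = zeros(size(rho));
    for s = 0:(n - abs(m))/2
        c = (-1)^s * factorial(n - s) / (factorial(s) * ...
            factorial((n + abs(m))/2 - s) * factorial((n - abs(m))/2 - s));
        R = R + c * rho.^(n - 2*s);
    end

    V = R .* exp(1i * m * theta);            % eq. (10): V_nm(rho, theta)
    A = (n + 1) / pi * sum(double(img(inside)) .* conj(V(inside)));   % eq. (11)
end
% Example rotation-invariant feature: magnitude = abs(zernikeMoment(double(normSig), 4, 2));
```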


4. BACK PROPAGATION ARTIFICIAL NEURAL NETWORK

There are several algorithms that can be used to train an artificial neural network, but back propagation [27] was chosen because it is probably the easiest to implement while preserving the efficiency of the network. A back propagation artificial neural network (ANN) uses more than one layer (usually three), and each layer is one of the following:
• Input layer – holds the input to the network.
• Output layer – holds the output data, usually an identifier for the input.
• Hidden layer – comes between the input layer and the output layer, serving as a propagation point for sending data from the previous layer to the next layer.
A typical back propagation ANN is depicted in Fig 8; the black nodes (on the extreme left) are the initial inputs. Training such a network involves two phases. In the first phase, the inputs are propagated forward to compute the output of each output node; each of these outputs is then subtracted from its desired output, giving an error for each output node.

www.irjcjournals.org

5

International Journal of IT, Engineering and Applied Sciences Research (IJIEASR) Volume 2, No. 1, January 2013

In the second phase, each of these output errors is passed backward and the weights are adjusted. These two phases are repeated until the sum of squared output errors reaches an acceptable value. Each neuron is composed of two units: the first adds the products of the weight coefficients and the input signals, and the second realizes a nonlinear function, called the neuron activation function, so that the adder output e produces the neuron output y = f(e). To teach the neural network we need a training data set, which consists of input signals, each assigned a corresponding target (desired output). Network training is an iterative process: in each iteration the weight coefficients of the nodes are modified using new data from the training set. Each teaching step starts with forcing the input signals from the training set onto the network; after this stage the output signal of every neuron in every network layer can be determined, with the connection weights carrying each neuron's output forward as an input to the neurons of the next layer. In the next step, the output signal of the network is compared with the desired output value (the target) found in the training data set; the difference is called the error signal of the output-layer neuron. It is impossible to compute the error signals of the internal neurons directly, because their desired output values are unknown, and for many years an effective method for training multilayer networks was unknown; only in the mid-eighties was the back propagation algorithm worked out. The idea is to propagate the error signal computed in a single teaching step back to all neurons whose output signals were inputs to the neuron in question. The weight coefficients used to propagate the errors back are equal to those used when computing the output value; only the direction of data flow is changed (signals are propagated from outputs to inputs, layer by layer). This technique is used for all network layers; if the propagated errors come from several neurons, they are added.


Fig 8. A 3-layer neural network using back propagation
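The following is a minimal numerical sketch of the two training phases described above: forward propagation, computation of the output errors, backward propagation of those errors, and weight adjustment. The network size, learning rate and random training data are stand-ins, not the configuration used in the paper, and biases are omitted for brevity.

```matlab
rng(1);                                  % reproducible stand-in data
X = rand(2500, 8);                       % 8 signatures as 50x50 = 2500-pixel column vectors (stand-in)
T = eye(8);                              % one-hot targets: one output node per signature class

nHidden = 20; lr = 0.5;                  % illustrative hidden-layer size and learning rate
W1 = 0.1 * randn(nHidden, size(X, 1));   % input  -> hidden weights
W2 = 0.1 * randn(size(T, 1), nHidden);   % hidden -> output weights
sig = @(z) 1 ./ (1 + exp(-z));           % sigmoid activation

for epoch = 1:500
    % Phase 1: forward propagation
    H = sig(W1 * X);                     % hidden-layer outputs
    Y = sig(W2 * H);                     % output-layer outputs
    E = T - Y;                           % error at each output node

    % Phase 2: propagate the errors backward and adjust the weights
    dOut = E .* Y .* (1 - Y);            % output-layer delta (sigmoid derivative)
    dHid = (W2' * dOut) .* H .* (1 - H); % hidden-layer delta
    W2 = W2 + lr * dOut * H';            % gradient-descent weight updates
    W1 = W1 + lr * dHid * X';
end
sse = sum(E(:).^2);                      % training stops when this is acceptably small
```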

When the application launches, it waits for the user to decide whether to train or to verify a set of signatures. At the training stage, based on the back propagation neural network algorithm, the user gives the training signature images as input; the real input to the network consists of the individual pixels of these images. When the input is confirmed and accepted, it passes through the back propagation neural network algorithm to generate an output which contains the network data of the trained images. The back propagation artificial neural network simply calculates the gradient of the network error with respect to the network's modifiable weights. In this paper we use a multilayer neural network designed by O.C. Abikoye [28].

5. TRAINING AND TESTING

The recognition phase consists of two parts, training and testing, both accomplished with the back propagation neural network. As explained in Section 1, the 672 images in our database, belonging to 56 people, are used for both training and testing. Since 8 (out of 12) input vectors per person were used for training purposes, only 224 (56 x 4) input vectors (data sets) are left for the test set. Under normal (correct) operation of the back propagation neural network, only one output is expected to take the value "1", indicating recognition of the signature represented by that particular output; the other output values must remain zero. The output layer uses a logic decoder which maps neuron outputs between 0.5 and 1 to a binary value of 1; if the real value of an output is less than 0.5, it is represented by a "0". The back propagation neural network program recognized all of the 56 signatures correctly, which translates into a 100% recognition rate. We also tested the system with 15 random signatures which are not contained in the original database.
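A sketch of the 0.5 logic decoder applied to the network outputs for one test signature; the activation values are hypothetical.

```matlab
netOut  = [0.07; 0.91; 0.12; 0.03];   % hypothetical output-node activations for one test signature
decoded = double(netOut >= 0.5);      % outputs in [0.5, 1] -> 1, below 0.5 -> 0
if sum(decoded) == 1
    signerIndex = find(decoded);      % exactly one node fired: signature recognized as this class
else
    signerIndex = [];                 % no node (or several nodes) fired: not recognized
end
```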


Only two of these signatures, which are very similar to at least one of the 56 stored signatures, resulted in "false positives" (output > 0.5), while the remaining 8 were recognized correctly as not belonging to the original set (the output value was less than 0.5).