Correlation-based subset evaluation of feature selection for dynamic Malaysian sign language
Summary: Sign language is used for communication by the deaf and speech impaired. For communication between the general public and the deaf, an interpreter is needed to translate between sign language and natural language. Sign Language Recognition (SLR) aims to translate sign language into text so that communication between the deaf and the general public can take place comfortably. SLR has been studied widely by researchers in many countries using different datasets. In existing SLR work, most researchers divide the process into four main steps: image acquisition, pre-processing, feature extraction and classification. The success of the classification step is determined by many factors; one of them is the quality of the available data. Building a model becomes more difficult when the information is irrelevant, redundant or highly noisy. Adding a step before classification, such as feature selection, can therefore provide better input data and is expected to improve classification performance. Feature selection thus has potential in SLR, yet no prior work has applied it to Sign Language Recognition. In this study, Correlation-based Feature Subset Evaluation (CfsSubsetEval) combined with an Artificial Neural Network (ANN) is proposed to improve the recognition accuracy of sign language. The data samples tested were 15 dynamic signs taken from Malaysian Sign Language (MySL). Pre-processing was based on tracking the joints of a skeleton to generate 3D X, Y, Z coordinates; the coordinate values are expressed relative to the torso and head. The images were captured using a Kinect sensor with skeletal-tracking algorithms. Feature extraction normalized the position and size of the user and used eight of the 20 tracked joints that contribute to identifying hand movement: left hand, right hand, left wrist, right wrist, left elbow, right elbow, torso and head. A spherical coordinate conversion and frame segmentation using a mean function were also applied. CfsSubsetEval with the ANN was compared with Consistency-based Subset Evaluation (CSE) and Correlation-based Attribute Evaluation (CorrelationAttributeEval) in terms of accuracy. The experiments achieved an accuracy rate of 95.56% for Correlation-based Feature Subset Evaluation (CfsSubsetEval).
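The pre-processing and feature-extraction steps described in the summary (torso-relative joint coordinates, spherical coordinate conversion, and mean-based frame segmentation) can be sketched as below. This is a minimal illustration only: the class and method names (`SkeletonPreprocessing`, `Joint`, `toTorsoRelative`, `toSpherical`, `meanSegment`) are hypothetical, and the exact normalization of user position and size used by the authors is not specified in the abstract.

```java
// Minimal sketch, assuming a simple Joint record for Kinect skeleton positions.
// Names and details are illustrative, not taken from the paper.
public final class SkeletonPreprocessing {

    /** A single tracked joint position in Kinect camera space. */
    public record Joint(double x, double y, double z) {}

    /** Express a joint relative to the torso joint (translation only). */
    public static Joint toTorsoRelative(Joint joint, Joint torso) {
        return new Joint(joint.x() - torso.x(),
                         joint.y() - torso.y(),
                         joint.z() - torso.z());
    }

    /**
     * Convert torso-relative Cartesian coordinates to spherical coordinates
     * (r, theta, phi): radius, inclination from the z-axis, azimuth.
     */
    public static double[] toSpherical(Joint j) {
        double r = Math.sqrt(j.x() * j.x() + j.y() * j.y() + j.z() * j.z());
        double theta = (r == 0.0) ? 0.0 : Math.acos(j.z() / r); // inclination
        double phi = Math.atan2(j.y(), j.x());                  // azimuth
        return new double[] {r, theta, phi};
    }

    /**
     * Segment a sequence of per-frame feature vectors into a fixed number of
     * segments and represent each segment by its mean, so that signs of
     * different durations map to feature vectors of equal length.
     */
    public static double[][] meanSegment(double[][] frames, int segments) {
        int dim = frames[0].length;
        double[][] out = new double[segments][dim];
        for (int s = 0; s < segments; s++) {
            int start = s * frames.length / segments;
            int end = (s + 1) * frames.length / segments;
            for (int f = start; f < end; f++)
                for (int d = 0; d < dim; d++)
                    out[s][d] += frames[f][d] / (end - start);
        }
        return out;
    }
}
```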
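The evaluator names in the summary (CfsSubsetEval, CorrelationAttributeEval, consistency-based subset evaluation) match the attribute-selection classes of the Weka toolkit, so a Weka-based sketch of the proposed CfsSubsetEval + ANN pipeline is shown below. Whether the authors used Weka or another implementation is not stated in the abstract; the input file name, the best-first search strategy, and the classifier hyper-parameters here are assumptions, not values from the study.

```java
import java.util.Arrays;
import java.util.Random;

import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.BestFirst;
import weka.attributeSelection.CfsSubsetEval;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CfsAnnPipeline {
    public static void main(String[] args) throws Exception {
        // Load the joint-feature dataset; "mysl_features.arff" is a placeholder
        // file name, not the dataset used in the paper.
        Instances data = DataSource.read("mysl_features.arff");
        data.setClassIndex(data.numAttributes() - 1); // last attribute = sign label

        // Correlation-based Feature Subset Evaluation with best-first search.
        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new CfsSubsetEval());
        selector.setSearch(new BestFirst());
        selector.SelectAttributes(data);
        Instances reduced = selector.reduceDimensionality(data);
        System.out.println("Selected attributes: "
                + Arrays.toString(selector.selectedAttributes()));

        // Artificial Neural Network (multilayer perceptron) on the reduced
        // feature set, evaluated with 10-fold cross-validation. The network
        // settings are Weka defaults, not those reported in the study.
        MultilayerPerceptron ann = new MultilayerPerceptron();
        Evaluation eval = new Evaluation(reduced);
        eval.crossValidateModel(ann, reduced, 10, new Random(1));
        System.out.printf("Accuracy: %.2f%%%n", eval.pctCorrect());
    }
}
```

Swapping `new CfsSubsetEval()` for Weka's consistency-based or correlation-based attribute evaluators would reproduce the comparison described in the summary; the reported 95.56% accuracy refers to the authors' own experiments, not to this sketch.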