A review of feature selection on text classification
Main Authors: | , |
Format: | Conference or Workshop Item |
Language: | English |
Published: | Universiti Malaysia Pahang, 2018 |
Subjects: | |
Online Access: | http://umpir.ump.edu.my/id/eprint/23030/ http://umpir.ump.edu.my/id/eprint/23030/7/A%20Review%20of%20Feature%20Selection%20on%20Text2.pdf |
Summary: | Textual data is high-dimensional: the number of features often exceeds the number of samples, which in turn increases the amount of noise and irrelevant features. At this point, dimensionality reduction becomes necessary. Feature selection is one family of dimensionality reduction techniques, and it has become an indispensable component of classification. In this paper, we present three feature selection approaches: filter, wrapper, and embedded. Their aims, advantages, and disadvantages are briefly explained. In addition, this study reviews several significant studies of each feature selection approach for text classification. Based on these studies, we find that the wrapper approach is rarely used for text classification because it is prone to over-fitting and to becoming trapped in local optima, whereas the filter and embedded approaches have attracted a substantial amount of research. However, the filter approach cannot guarantee classification accuracy because it does not incorporate any learning algorithm. We therefore conclude that embedded feature selection can offer promising classification performance in terms of both accuracy and computational time. |
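The contrast the abstract draws between the filter and embedded approaches can be made concrete with a short sketch. The snippet below is a minimal, hypothetical example using scikit-learn; the toy corpus, labels, and parameter choices (k=5, C=10.0) are illustrative assumptions, not taken from the paper. A chi-square filter ranks terms without consulting any classifier, while an L1-penalised logistic regression performs selection as part of model training, i.e. an embedded approach.

```python
# Minimal sketch; the toy corpus, labels, and parameters are illustrative
# assumptions, not taken from the reviewed paper.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectFromModel, SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

docs = [
    "win free prize money now",
    "free money offer click now",
    "meeting agenda for project review",
    "project review meeting notes",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

# Bag-of-words representation: one feature (column) per term.
X = CountVectorizer().fit_transform(docs)

# Filter approach: score every term with the chi-square statistic and keep
# the top k; no learning algorithm takes part in the selection.
X_filter = SelectKBest(chi2, k=5).fit_transform(X, labels)

# Embedded approach: the L1 penalty drives many term weights to zero while
# the classifier is trained, so selection happens inside the learning step.
l1_clf = LogisticRegression(penalty="l1", solver="liblinear", C=10.0)
X_embedded = SelectFromModel(l1_clf).fit_transform(X, labels)

print(X_filter.shape, X_embedded.shape)  # the two approaches keep different feature sets
```

A wrapper approach, by contrast, would search over candidate feature subsets by repeatedly retraining and scoring a classifier, which is consistent with the review's observation that it is computationally expensive and prone to over-fitting on text data.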