Supervised text classification is the task of automatically assigning a category label to a previously unlabeled text document. We start with a collection of pre-labeled examples whose assigned categories are used to build a predictive model for each category. Incorporating semantic features from the WordNet lexical database is one of many approaches that have been tried in previous research to improve the predictive accuracy of text classification models. The intuition is that the words in the training set alone may not be extensive enough to generate a universal model for a category, but through WordNet expansion (i.e., incorporating words related through various WordNet relationships), a more accurate model may be possible. In this paper, we report preliminary results from a comprehensive study in which WordNet features, part-of-speech tags, and term weighting schemes are incorporated into two-category text classification models generated by both a Naive Bayes text classifier and an SVM text classifier. We characterize the behavior of these classifiers on fifteen document collections extracted from the Reuters-21578, USENET, DigiTrad, and 20-Newsgroups text corpora. Experimental results show that incorporating WordNet features, utilizing part-of-speech tags during WordNet expansion, and applying term weighting schemes have no positive effect on the accuracy of the Naive Bayes and SVM classifiers.
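To make the idea of WordNet expansion in a boolean (set-of-words) model concrete, the following is a minimal sketch, not the authors' implementation: a toy synonym dictionary stands in for WordNet, each document's token set is expanded with related words, and a Laplace-smoothed boolean Naive Bayes classifier is trained over the expanded features. All names, the `SYNONYMS` table, and the example documents are hypothetical illustrations.

```python
import math
from collections import defaultdict

# Hypothetical stand-in for WordNet: in the paper's setting, related words
# come from WordNet relationships (synonyms, hypernyms, etc.).
SYNONYMS = {
    "car": ["automobile", "vehicle"],
    "fast": ["quick", "rapid"],
    "game": ["match", "contest"],
    "ball": ["sphere"],
}

def expand(tokens):
    """Return the boolean feature set expanded with WordNet-style related words."""
    expanded = set(tokens)
    for t in tokens:
        expanded.update(SYNONYMS.get(t, []))
    return expanded

def train(docs):
    """docs: list of (token_list, label). Builds boolean Naive Bayes counts."""
    word_counts = defaultdict(lambda: defaultdict(int))  # label -> word -> doc frequency
    label_counts = defaultdict(int)
    vocab = set()
    for tokens, label in docs:
        label_counts[label] += 1
        for w in expand(tokens):
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(tokens, model):
    word_counts, label_counts, vocab = model
    features = expand(tokens) & vocab
    total = sum(label_counts.values())
    best, best_score = None, -math.inf
    for label, n in label_counts.items():
        score = math.log(n / total)  # log prior
        for w in features:
            # Laplace-smoothed P(word present | label)
            score += math.log((word_counts[label][w] + 1) / (n + 2))
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    (["car", "fast", "engine"], "autos"),
    (["automobile", "road"], "autos"),
    (["game", "ball", "score"], "sport"),
    (["match", "team"], "sport"),
]
model = train(docs)
# "vehicle" and "quick" never appear literally in the training documents,
# but expansion of "car" and "fast" placed them in the "autos" vocabulary.
print(classify(["vehicle", "quick"], model))  # -> autos
```

The sketch illustrates the hoped-for benefit the abstract describes: expansion lets a test document match training evidence through related words rather than exact surface forms, which is precisely the effect the paper's experiments evaluate.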
Cite as: Mansuy, T. and Hilderman, R. (2006). A Characterization of Wordnet Features in Boolean Models For Text Classification. In Proc. Fifth Australasian Data Mining Conference (AusDM2006), Sydney, Australia. CRPIT, 61. Peter, C., Kennedy, P. J., Li, J., Simoff, S. J. and Williams, G. J., Eds. ACS. 103-109.