Discriminative Training of Bayesian Chow-Liu Multinet Classifiers
Discriminative classifiers such as Support Vector Machines directly learn a discriminant function or a posterior probability model to perform classification. Generative classifiers, on the other hand, learn a joint probability model and then use Bayes' rule to construct a posterior classifier from this model. In general, generative classifiers are not as accurate as discriminative classifiers, but they provide a principled way to handle missing-data problems, which discriminative classifiers cannot easily deal with. To achieve good performance across a range of classification tasks, it is therefore attractive to combine the two strategies. In this paper, we develop a novel method to iteratively train a generative Bayesian classifier, the Bayesian Chow-Liu multinet classifier, in a discriminative way. Unlike traditional Bayesian multinet classifiers, our discriminative method adds to the objective function a penalty term that represents the divergence between classes. Iteratively optimizing this objective fits the data as accurately as possible while simultaneously making the divergence between classes as large as possible. We give the theoretical justification and an outline of the algorithm, and we perform a series of experiments to demonstrate the advantages of our method. The experimental results are promising and encouraging.
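As a rough sketch of the objective described above (the particular divergence measure, the trade-off weight $\lambda$, and all symbols below are illustrative assumptions, not taken from the abstract): if each class $c$ is modeled by a Chow-Liu tree with parameters $\theta_c$ fit to its data subset $\mathcal{D}_c$, the training criterion could take the form of a generative fit term plus a class-separation penalty,

\begin{equation*}
\max_{\{\theta_c\}} \; \sum_{c} \sum_{x \in \mathcal{D}_c} \log P(x \mid \theta_c)
\;+\; \lambda \sum_{c \neq c'} D_{\mathrm{KL}}\!\bigl( P(\cdot \mid \theta_c) \,\|\, P(\cdot \mid \theta_{c'}) \bigr),
\end{equation*}

where the first term keeps each class-conditional model faithful to its data and the second rewards divergence between the class-conditional distributions; $\lambda$ would balance the two goals.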