Multi-Type Disentanglement without Adversarial Training
Lei Sha and Thomas Lukasiewicz
Controlling the style of natural language by disentangling the latent space is an important step towards interpretable machine learning. Once the latent space is disentangled, the style of a sentence can be transformed by tuning the style representation without affecting other features of the sentence. Previous works usually use adversarial training to guarantee that the disentangled vectors do not affect each other. However, adversarial methods are difficult to train: especially when multiple style vectors are extracted for different features (e.g., sentiment or tense, which we call style genres in this paper), a separate discriminator is required for each of these features. In this paper, we propose a unified distribution-controlling method that assigns each specific style type (a value of a style genre, e.g., positive sentiment or past tense) a unique representation. This method provides a solid theoretical basis for avoiding adversarial training in multi-genre disentanglement. We also propose multiple loss functions to achieve style-content disentanglement as well as disentanglement among multiple style genres. In addition, our method alleviates the training bias among multiple genres caused by the dataset. We conduct experiments on two datasets (Yelp service reviews and Amazon product reviews) to evaluate the style-disentangling effect and the unsupervised style-transfer performance on two style genres: sentiment and tense. Experimental results show the effectiveness of our model.