Adversarial Distillation for Efficient Recommendation with External Knowledge

Authors:
Chen, Xu; Zhang, Yongfeng; Xu, Hongteng; Qin, Zheng; Zha, Hongyuan


Journal:
ACM Transactions on Information Systems


Issue Date:
2019


Abstract:

Integrating external knowledge into recommender systems has attracted increasing attention in both industry and the academic community. Recent methods mostly leverage the power of neural networks for effective knowledge representation to improve recommendation performance. However, the heavy deep architectures in existing models are usually incorporated in an embedded manner, which may greatly increase model complexity and lower runtime efficiency. To harness the power of deep learning for external knowledge modeling while maintaining model efficiency at test time, we reformulate the problem of recommendation with external knowledge as a generalized distillation framework. The general idea is to move the complex deep architecture into a separate model that is used only in the training phase and abandoned at test time. In particular, during training, the external knowledge is processed by a comprehensive teacher model to produce valuable information that teaches a simple and efficient student model. Once the framework is learned, the teacher model is discarded, and only the succinct yet enhanced student model is used to make fast predictions at test time. In this article, we specify the external knowledge as user reviews, and to leverage them effectively, we extend the traditional generalized distillation framework by designing a Selective Distillation Network (SDNet) with adversarial adaptation and orthogonality constraint strategies to make it more robust to noisy information. Extensive experiments verify that our model not only improves rating prediction performance but also significantly reduces time consumption when making predictions compared with several state-of-the-art methods.
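
To make the teacher-student setup the abstract describes concrete, below is a minimal generalized-distillation sketch in PyTorch. It is not the authors' SDNet: it omits the selective distillation, adversarial adaptation, and orthogonality components, and the model dimensions, the review-feature encoder, and the loss weight alpha are all illustrative assumptions.

    # Illustrative sketch of generalized distillation for rating prediction.
    # NOT the authors' SDNet; shapes and the loss weight alpha are assumptions.
    import torch
    import torch.nn as nn

    class Teacher(nn.Module):
        """Comprehensive model: sees user/item IDs AND review features (training only)."""
        def __init__(self, n_users, n_items, dim=64, review_dim=128):
            super().__init__()
            self.user = nn.Embedding(n_users, dim)
            self.item = nn.Embedding(n_items, dim)
            self.review_enc = nn.Sequential(nn.Linear(review_dim, dim), nn.ReLU())
            self.head = nn.Linear(3 * dim, 1)

        def forward(self, u, i, review_feat):
            h = torch.cat([self.user(u), self.item(i),
                           self.review_enc(review_feat)], dim=-1)
            return self.head(h).squeeze(-1)

    class Student(nn.Module):
        """Succinct model: user/item IDs only; the only model kept for serving."""
        def __init__(self, n_users, n_items, dim=64):
            super().__init__()
            self.user = nn.Embedding(n_users, dim)
            self.item = nn.Embedding(n_items, dim)

        def forward(self, u, i):
            # Dot-product interaction as the rating predictor.
            return (self.user(u) * self.item(i)).sum(-1)

    def distill_step(teacher, student, opt, u, i, review_feat, rating, alpha=0.5):
        """One training step: fit ground-truth ratings AND the teacher's outputs."""
        with torch.no_grad():
            soft_target = teacher(u, i, review_feat)  # teacher's "valuable information"
        pred = student(u, i)
        loss = alpha * nn.functional.mse_loss(pred, rating) \
             + (1 - alpha) * nn.functional.mse_loss(pred, soft_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

At test time only student(u, i) is evaluated; the teacher, and the review features it requires, never enter the serving path, which is where the runtime savings claimed in the abstract come from.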

