Notes on "The FacT: Taming Latent Factor Models for Explainability with Factorization"

Reading date: 2020.2.25
Paper: "The FacT: Taming Latent Factor Models for Explainability with Factorization"

Problem identified:
Limitations of existing approaches to explaining recommendations:

  1. Recommendation quality and explanation reliability have long been considered irreconcilable (content-based methods vs. collaborative filtering).
  2. Latent factor models (LFMs) are the most effective and accurate approach in modern recommender systems, but their complex statistical structure makes them hard to explain. Although various solutions have been proposed that approximate an LFM's recommendation mechanism in order to explain it, the extent to which these approximate factor models agree with what the LFM has actually learned is unknown. [There is no guarantee on the quality of the explanations]

Main contributions:

  • Seamlessly combines latent semantic (latent factor) modeling with explanation-rule learning to make recommendations explainable.
  • Integrates the LFM with regression trees to guide learning, and uses the learned tree structure to explain the latent factors it produces. Specifically, regression trees are built over users and items from the users' reviews, and a latent profile is attached to each tree node to represent that group of users or items. As the regression trees grow, the latent factors become increasingly fine-grained under the regularization imposed by the tree structure. Finally, the path along which each latent profile was created can be traced back through the regression tree, thereby explaining the generated recommendations (a minimal sketch of this idea follows this list). Experiments and user studies show the effectiveness of the model.
    [The fidelity of explanation is guaranteed by modeling the latent factors as a function of explanation rules]
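A minimal sketch of the tree-guided idea, assuming a toy item-by-aspect matrix mined from reviews. The aspect names, the variance-based split criterion, and the randomly initialized latent profiles are all hypothetical simplifications for illustration, not the authors' actual procedure (which fits node profiles to ratings and picks splits by recommendation loss):

```python
import numpy as np

FEATURES = ["battery", "screen", "price"]   # hypothetical review aspects
LATENT_DIM = 4
rng = np.random.default_rng(0)


class Node:
    """One node of the item regression tree; FacT builds an analogous tree for users."""
    def __init__(self, item_ids, depth):
        self.item_ids = item_ids                    # items routed to this node
        self.depth = depth
        self.profile = rng.normal(size=LATENT_DIM)  # latent profile (FacT would fit this to ratings)
        self.split = None                           # (feature_idx, threshold) for internal nodes
        self.children = {}                          # "low" / "high" -> Node


def grow(node, X, max_depth=2, min_items=2):
    """Split on the aspect with the largest variance among this node's items.

    The paper chooses the split that most reduces the recommendation loss;
    variance is used here only to keep the sketch short.
    """
    if node.depth >= max_depth or len(node.item_ids) < min_items:
        return
    sub = X[node.item_ids]
    f = int(np.argmax(sub.var(axis=0)))
    t = float(np.median(sub[:, f]))
    low = [i for i in node.item_ids if X[i, f] <= t]
    high = [i for i in node.item_ids if X[i, f] > t]
    if not low or not high:
        return
    node.split = (f, t)
    node.children = {"low": Node(low, node.depth + 1),
                     "high": Node(high, node.depth + 1)}
    for child in node.children.values():
        grow(child, X, max_depth, min_items)


def explain(root, item, X):
    """Trace the item's root-to-leaf path; the path is the explanation rule set."""
    rules, node = [], root
    while node.split is not None:
        f, t = node.split
        if X[item, f] <= t:
            rules.append(f"{FEATURES[f]} <= {t:.1f}")
            node = node.children["low"]
        else:
            rules.append(f"{FEATURES[f]} > {t:.1f}")
            node = node.children["high"]
    return rules, node.profile  # leaf profile acts as the item's latent factor


# Toy aspect-score matrix: rows = items, columns = FEATURES.
X = rng.uniform(0, 5, size=(10, len(FEATURES)))
root = Node(list(range(10)), depth=0)
grow(root, X)
rules, profile = explain(root, item=3, X=X)
print("explanation rules:", rules)
print("item latent profile:", np.round(profile, 2))
```

In the actual FacT model the user tree and item tree are grown together and the node profiles are re-optimized under the LFM objective after each split; the sketch only shows how tracing a root-to-leaf path yields human-readable rules as the explanation.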

Summary and thoughts:

  • Starting from the observation that existing LFMs, despite their high recommendation accuracy, suffer from an "unreliable explanation" problem when used to explain recommendations, this paper combines the LFM with inductive rule learning: it builds regression trees that refine the previously "black-box" creation of the latent semantic factors, and thereby generates the corresponding explanations. This is the main innovation of the paper.