[Paper Notes] TC-LSTM (Tang D, et al. 2015)

Effective LSTMs for Target-Dependent Sentiment Classification

Introduction

  • Sentiment analysis, also known as opinion mining, is an important task in NLP and computational linguistics

  • This paper focuses on target-dependent sentiment classification

    • Given a sentence and a target word, infer the sentiment polarity (positive, negative, neutral) of the sentence toward that target

    • I bought a new camera. The picture quality is amazing but the battery life is too short.

    • target string: "the picture quality"; expected sentiment polarity: positive

  • Goal of the paper: how to effectively model the semantic relatedness between the target word and its context words in the sentence

The Approach

  • LSTM --> TD-LSTM (considering the target word) --> TC-LSTM (TD-LSTM with target connection, where the semantic relatedness of the target with its context words is incorporated)

  • LSTM
    • Each word is mapped to its word embedding and fed into the LSTM; the hidden vector of the last word is passed through a linear layer whose output dimension equals the number of classes, and a softmax then turns this into a probability distribution over classes, which is used for prediction (sketched below)
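
A minimal sketch of this baseline, assuming PyTorch; names such as `hidden_dim` and `num_classes` are illustrative, not taken from the paper's implementation:

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Plain LSTM baseline: embed words, run one LSTM, classify from the last hidden state."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)   # output dimension = number of classes

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        x = self.embed(token_ids)                      # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)                     # h_n: hidden vector of the last word
        logits = self.fc(h_n.squeeze(0))               # (batch, num_classes)
        return torch.softmax(logits, dim=-1)           # probability distribution over classes
```

In practice one would usually return the raw logits and train with a cross-entropy loss; the explicit softmax here just mirrors the description above.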

  • TD-LSTM
    • Two LSTMs are used: one runs from left to right up to the target word, and the other runs from right to left up to the target word; the two final hidden vectors are concatenated, passed through a linear layer, and finally a softmax (sketched below)
    • The basic idea is to model the preceding and following contexts around the target string, so that the contexts in both directions can serve as feature representations for sentiment classification
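
A hedged sketch of TD-LSTM under the same assumptions; `left_ids` is assumed to already hold the span from the sentence start up to and including the target, and `right_ids` the span from the sentence end back to the target, already reversed:

```python
import torch
import torch.nn as nn

class TDLSTM(nn.Module):
    """TD-LSTM sketch: two LSTMs meet at the target; their last hidden states are concatenated."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm_l = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # left-to-right, stops at the target
        self.lstm_r = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # right-to-left, stops at the target
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, left_ids, right_ids):
        _, (h_l, _) = self.lstm_l(self.embed(left_ids))
        _, (h_r, _) = self.lstm_r(self.embed(right_ids))
        h = torch.cat([h_l.squeeze(0), h_r.squeeze(0)], dim=-1)         # (batch, 2 * hidden_dim)
        return torch.softmax(self.fc(h), dim=-1)
```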

  • TC-LSTM
    • Select the relevant context words which are helpful to determine the sentiment polarity of a sentence towards the target.
    • Each of the target words \(\{w_1,w_2,..\}\) is mapped to its word embedding, and these vectors are averaged to form the target vector
    • Then, at every time step, the input word vector is concatenated with the target vector, and the concatenation is used as the input (sketched below)
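
A sketch of TC-LSTM under the same assumptions; the only change relative to the TD-LSTM sketch is the averaged target vector concatenated to every input word embedding:

```python
import torch
import torch.nn as nn

class TCLSTM(nn.Module):
    """TC-LSTM sketch: concatenate the averaged target embedding to each input word embedding."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # input size doubles: each step sees [word embedding ; target vector]
        self.lstm_l = nn.LSTM(2 * embed_dim, hidden_dim, batch_first=True)
        self.lstm_r = nn.LSTM(2 * embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, left_ids, right_ids, target_ids):
        v_target = self.embed(target_ids).mean(dim=1)               # average the target word embeddings

        def with_target(ids):
            x = self.embed(ids)                                     # (batch, seq, embed_dim)
            t = v_target.unsqueeze(1).expand(-1, x.size(1), -1)     # repeat the target vector at every step
            return torch.cat([x, t], dim=-1)                        # (batch, seq, 2 * embed_dim)

        _, (h_l, _) = self.lstm_l(with_target(left_ids))
        _, (h_r, _) = self.lstm_r(with_target(right_ids))
        h = torch.cat([h_l.squeeze(0), h_r.squeeze(0)], dim=-1)
        return torch.softmax(self.fc(h), dim=-1)
```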

Experiment

  • Experimental setup (a hypothetical code sketch of these settings follows the list)
    • Word vectors: 100-dimensional GloVe vectors learned from Twitter
    • Random initialization: uniform distribution U(−0.003, 0.003)
    • Gradient clipping threshold for the softmax layer: 200
    • Learning rate: 0.01
    • Training set: 6,248 sentences; test set: 692 sentences
    • Class proportions (positive, negative, neutral) in both the training and test sets: 25%, 25%, 50%
    • Evaluation metrics: accuracy and macro-F1 score over the positive, negative and neutral categories
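
A hypothetical snippet wiring these settings together, reusing the `TCLSTM` sketch above; the optimizer (plain SGD), the vocabulary size, and the dummy batch are assumptions the notes do not specify:

```python
import torch
import torch.nn as nn

model = TCLSTM(vocab_size=30000, embed_dim=100)              # 100-d embeddings as in the notes

# Parameters drawn from U(-0.003, 0.003); pretrained GloVe vectors would then
# overwrite the embedding weights (loading them is omitted here).
for p in model.parameters():
    nn.init.uniform_(p, -0.003, 0.003)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)     # learning rate 0.01
criterion = nn.NLLLoss()

# Dummy batch, only to show the shape of one update step.
left_ids   = torch.randint(0, 30000, (2, 5))
right_ids  = torch.randint(0, 30000, (2, 4))
target_ids = torch.randint(0, 30000, (2, 2))
labels     = torch.tensor([0, 2])

probs = model(left_ids, right_ids, target_ids)
loss = criterion(torch.log(probs + 1e-12), labels)
loss.backward()
# Clip the gradients of the output (softmax) layer; max_norm=200 stands in for
# the clipping threshold of 200 mentioned in the notes.
torch.nn.utils.clip_grad_norm_(model.fc.parameters(), max_norm=200)
optimizer.step()
```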
  • Experimental results


Reposted from www.cnblogs.com/doragd/p/11246018.html