AAAI 2018 Analysis

word embedding

Learning Sentiment-Specific Word Embedding via Global Sentiment Representation

Context-based word embedding learning approaches can model rich semantic and syntactic information.

However, it is problematic for sentiment analysis because the words with similar contexts but opposite sentiment polarities, such as good and bad, are mapped into close word vectors in the embedding space.

Recently, some sentiment embedding learning methods have been proposed, but most of them are designed to work well on sentence-level texts.

Directly applying those models to document-level texts often leads to unsatisfactory results.

To address this issue, we present a sentiment-specific word embedding learning architecture that utilizes local context information as well as global sentiment representation.

The architecture is applicable for both sentence-level and document-level texts.

We take global sentiment representation as a simple average of word embeddings in the text, and use a corruption strategy as a sentiment-dependent regularization.

Extensive experiments conducted on several benchmark datasets demonstrate that the proposed architecture outperforms the state-of-the-art methods for sentiment classification.
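The global text representation described in this abstract is easy to picture in code. Below is a minimal NumPy sketch, not the authors' implementation: the toy vocabulary, the 50-dimensional embeddings, and the 0.5 corruption rate are all assumptions, and it only illustrates averaging the embeddings of a randomly corrupted copy of the text to obtain a global representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary and embedding matrix; the model learns these
# jointly, here they are just random placeholders.
vocab = {"good": 0, "bad": 1, "movie": 2, "plot": 3, "acting": 4}
emb = rng.normal(scale=0.1, size=(len(vocab), 50))

def global_sentiment_representation(word_ids, corruption_rate=0.5):
    """Average the embeddings of a randomly corrupted copy of the text.

    Dropping a fraction of the words before averaging acts as a
    data-dependent regularizer; the mean over the surviving words still
    approximates the average over the full text.
    """
    word_ids = np.asarray(word_ids)
    keep = rng.random(len(word_ids)) > corruption_rate
    if not keep.any():                      # make sure at least one word survives
        keep[rng.integers(len(word_ids))] = True
    return emb[word_ids[keep]].mean(axis=0)

doc = [vocab[w] for w in "good movie bad plot acting".split()]
g = global_sentiment_representation(doc)
print(g.shape)   # (50,) -- used alongside the local context during training
```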

Using k-Way Co-Occurrences for Learning Word Embeddings

Co-occurrences between two words provide useful insights into the semantics of those words. Consequently, much prior work on word embedding learning has used co-occurrences between two words as the training signal. However, in natural language texts it is common for multiple words to be related and to co-occur in the same context. We extend the notion of co-occurrences to cover k(≥2)-way co-occurrences among a set of k words. Specifically, we prove a theoretical relationship between the joint probability of k(≥2) words and the sum of l_2 norms of their embeddings. Next, we propose a learning objective motivated by our theoretical result that utilizes k-way co-occurrences for learning word embeddings. Our experimental results show that the derived theoretical relationship does indeed hold empirically, and that despite data sparsity, for some smaller values of k(≤5), k-way embeddings perform comparably to or better than 2-way embeddings in a range of tasks.
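As a concrete picture of the training signal, the sketch below counts k-way co-occurrences with a simple sliding window. It is only an illustration under assumed choices (window size 5, unordered word sets); the paper's learning objective, which ties these counts to the sum of l_2 norms of the embeddings, is not reproduced here.

```python
from collections import Counter
from itertools import combinations

def k_way_cooccurrences(tokens, k=3, window=5):
    """Count how often each unordered set of k distinct words falls inside a
    sliding window. Overlapping windows are counted repeatedly, which is
    acceptable for a toy illustration of the training signal."""
    counts = Counter()
    for i in range(len(tokens) - window + 1):
        span = set(tokens[i:i + window])
        for combo in combinations(sorted(span), k):
            counts[combo] += 1
    return counts

tokens = "the quick brown fox jumps over the lazy dog".split()
for combo, count in k_way_cooccurrences(tokens).most_common(3):
    print(combo, count)
```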

Semantic Structure-Based Word Embedding by Incorporating Concept Convergence and Word Divergence

Representing the semantics of words is a fundamental task in text processing.

Several research studies have shown that text and knowledge bases (KBs) are complementary sources for word embedding learning.

Most existing methods consider only the relationships within word pairs when using KBs.

We argue that the structural information of well-organized words within the KBs is able to convey more effective and stable knowledge in capturing semantics of words.

In this paper, we propose a semantic structure-based word embedding method, and introduce concept convergence and word divergence to reveal semantic structures in the word embedding learning process.

To assess the effectiveness of our method, we use WordNet for training and conduct extensive experiments on word similarity, word analogy, text classification and query expansion.

The experimental results show that our method outperforms state-of-the-art methods, including the methods trained solely on the corpus, and others trained on the corpus and the KBs.
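To make the two forces concrete, here is a toy NumPy sketch that uses WordNet synsets (via NLTK) as concepts: member words are pulled toward the synset centroid (concept convergence) while pairs of members are nudged apart so they do not collapse onto each other (word divergence). The update rule, the learning rates, and the vocabulary are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

rng = np.random.default_rng(0)

def concept_regularization_step(emb, vocab, lr=0.1):
    """One toy update illustrating the two forces named in the abstract.

    Concept convergence: words sharing a WordNet synset move toward the
    synset centroid.  Word divergence: words inside the same synset are
    kept from collapsing onto one another.  The exact loss is an
    assumption, not the paper's formulation.
    """
    for word in list(vocab):
        for syn in wn.synsets(word):
            members = [w for w in syn.lemma_names() if w in vocab]
            if len(members) < 2:
                continue
            centroid = np.mean([emb[vocab[w]] for w in members], axis=0)
            for w in members:
                i = vocab[w]
                emb[i] += lr * (centroid - emb[i])      # convergence toward the concept
            for a, b in zip(members, members[1:]):
                ia, ib = vocab[a], vocab[b]
                diff = emb[ia] - emb[ib]
                emb[ia] += 0.1 * lr * diff              # mild divergence between words
                emb[ib] -= 0.1 * lr * diff

vocab = {w: i for i, w in enumerate(["car", "automobile", "machine", "dog"])}
emb = rng.normal(scale=0.1, size=(len(vocab), 50))
concept_regularization_step(emb, vocab)
```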

Spectral Word Embedding with Negative Sampling

In this work, we investigate word embedding algorithms in the context of natural language processing. In particular, we examine the notion of "negative examples", the unobserved or insignificant word-context co-occurrences, in spectral methods. We provide a new formulation for the word embedding problem by proposing a new intuitive objective function that perfectly justifies the use of negative examples. In fact, our algorithm not only learns from the important word-context co-occurrences, but also learns from the abundance of unobserved or insignificant co-occurrences to improve the distribution of words in the latent embedded space. We analyze the algorithm theoretically and provide an optimal solution for the problem using spectral analysis. We have trained various word embedding algorithms on articles of Wikipedia with 2.1 billion tokens and show that negative sampling can boost the quality of spectral methods. Our algorithm provides results as good as the state-of-the-art, but in a much faster and more efficient way.
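For context, the sketch below shows a plain spectral baseline: SVD of a (shifted) PPMI matrix built from a tiny co-occurrence matrix. The unobserved (zero) cells are where "negative examples" enter the picture; how to weight them is precisely what the paper's new objective addresses, and this hedged sketch does not attempt to reproduce that objective.

```python
import numpy as np

def spectral_embeddings(cooc, dim=2, shift=1.0):
    """Toy spectral baseline: factorize a (shifted) PPMI matrix with SVD.

    The zero cells of the co-occurrence matrix play the role of negative
    examples; this sketch simply clips them to zero PPMI rather than using
    the paper's refined weighting.
    """
    total = cooc.sum()
    row = cooc.sum(axis=1, keepdims=True)
    col = cooc.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((cooc * total) / (row * col))
    ppmi = np.maximum(pmi - np.log(shift), 0.0)
    ppmi[~np.isfinite(ppmi)] = 0.0
    u, s, _ = np.linalg.svd(ppmi, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])

cooc = np.array([[0., 3., 1.],
                 [3., 0., 2.],
                 [1., 2., 0.]])
print(spectral_embeddings(cooc, dim=2))
```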

Chinese LIWC Lexicon Expansion via Hierarchical Classification of Word Embeddings with Sememe Attention

Linguistic Inquiry and Word Count (LIWC) is a word counting software tool which has been used for quantitative text analysis in many fields.

Due to its success and popularity, the core lexicon has been translated into Chinese and many other languages.

However, the lexicon only contains several thousand words, which is small compared with the number of common words in Chinese.

Current approaches often require manually expanding the lexicon, which takes a great deal of time and requires linguistic experts.

To address this issue, we propose to expand the LIWC lexicon automatically.

Specifically, we consider it as a hierarchical classification problem and utilize the Sequence-to-Sequence model to classify words in the lexicon.

Moreover, we use the sememe information with the attention mechanism to capture the exact meanings of a word, so that we can expand a more precise and comprehensive lexicon.

The experimental results show that our model has a better understanding of word meanings with the help of sememes and achieves significant and consistent improvements compared with the state-of-the-art methods.

The source code of this paper can be obtained from https://github.com/thunlp/Auto_CLIWC.
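A rough sketch of the decoding idea, hierarchical classification as step-by-step label prediction with attention over sememe embeddings, is given below in PyTorch. The layer sizes, the GRU cell, and the dot-product attention are assumptions made for brevity; the authors' actual model lives in the linked repository.

```python
import torch
import torch.nn as nn

class HierarchicalLabelDecoder(nn.Module):
    """Toy sketch: decode a word's path of LIWC categories one level at a
    time, attending over the word's sememe embeddings at every step.

    Dimensions, the single GRU cell, and dot-product attention are
    assumptions; see https://github.com/thunlp/Auto_CLIWC for the authors'
    implementation.
    """
    def __init__(self, emb_dim=64, hidden=64, num_labels=20):
        super().__init__()
        self.gru = nn.GRUCell(emb_dim, hidden)
        self.out = nn.Linear(hidden + emb_dim, num_labels)
        self.label_emb = nn.Embedding(num_labels, emb_dim)

    def forward(self, word_vec, sememe_vecs, depth=3):
        # word_vec: (emb_dim,)   sememe_vecs: (num_sememes, emb_dim)
        h = torch.zeros(self.gru.hidden_size)
        inp = word_vec
        path = []
        for _ in range(depth):
            h = self.gru(inp.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
            attn = torch.softmax(sememe_vecs @ h, dim=0)   # attend over sememes
            context = attn @ sememe_vecs                   # weighted sememe summary
            logits = self.out(torch.cat([h, context]))
            label = logits.argmax().item()
            path.append(label)
            inp = self.label_emb(torch.tensor(label))      # feed predicted label back in
        return path

decoder = HierarchicalLabelDecoder()
word, sememes = torch.randn(64), torch.randn(4, 64)
print(decoder(word, sememes))   # e.g. a path of three category indices
```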

Training and Evaluating Improved Dependency-Based Word Embeddings

Word embedding has been widely used in many natural language processing tasks. In this paper, we focus on learning word embeddings through selective higher-order relationships in sentences, so that the embeddings are less sensitive to local context and more accurate in capturing semantic compositionality. We present a novel multi-order dependency-based strategy to compose and represent the context under several essential constraints. In order to realize selective learning from the word contexts, we automatically assign the strengths of different dependencies between co-occurring words in the stochastic gradient descent process. We evaluate and analyze our proposed approach using several direct and indirect tasks for word embeddings. Experimental results demonstrate that our embeddings are competitive with or better than state-of-the-art methods and significantly outperform other methods in terms of context stability. The output weights and representations of dependencies obtained in our embedding model conform to most linguistic characteristics and are valuable for many downstream tasks.
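To illustrate what a dependency-based context looks like, in contrast to a linear window, here is a small spaCy sketch that extracts first-order (word, context, relation) triples from a parse. It assumes the en_core_web_sm model is installed and deliberately omits the paper's contributions: the higher-order path composition and the learned strengths of individual dependencies.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

def dependency_contexts(sentence):
    """Collect (word, context, relation) triples from a dependency parse.

    Dependency-based embeddings replace linear-window contexts with such
    syntactic contexts; the multi-order extension in the paper additionally
    composes paths of length > 1 and learns a strength per dependency,
    which this sketch does not do.
    """
    pairs = []
    for tok in nlp(sentence):
        if tok.dep_ == "ROOT":
            continue
        pairs.append((tok.text, tok.head.text, tok.dep_))            # child -> head
        pairs.append((tok.head.text, tok.text, f"{tok.dep_}^-1"))    # inverse direction
    return pairs

for triple in dependency_contexts("Australian scientist discovers star with telescope"):
    print(triple)
```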

word representation

Learning Multimodal Word Representation via Dynamic Fusion Methods

Multimodal models have been proven to outperform text-based models at learning semantic word representations. Almost all previous multimodal models treat the representations from different modalities equally. However, it is obvious that information from different modalities contributes differently to the meaning of words. This motivates us to build a multimodal model that can dynamically fuse the semantic representations from different modalities according to different types of words. To that end, we propose three novel dynamic fusion methods to assign importance weights to each modality, in which the weights are learned under the weak supervision of word association pairs. Extensive experiments demonstrate that the proposed methods outperform strong unimodal baselines and state-of-the-art multimodal models.
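The core idea, weighting each modality per word before fusing, can be sketched as a small gating module. The PyTorch code below is a minimal illustration with assumed dimensions and a softmax gate; the paper proposes three fusion variants and trains them with weak supervision from word association pairs, none of which is reproduced here.

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Toy sketch of dynamic fusion: a per-word gate decides how much the
    textual and the visual representation each contribute to the fused vector.

    The gating network and dimensions are assumptions, not the paper's
    exact architecture.
    """
    def __init__(self, text_dim=300, visual_dim=128, out_dim=300):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, out_dim)
        self.visual_proj = nn.Linear(visual_dim, out_dim)
        self.gate = nn.Sequential(nn.Linear(text_dim + visual_dim, 2), nn.Softmax(dim=-1))

    def forward(self, text_vec, visual_vec):
        w = self.gate(torch.cat([text_vec, visual_vec], dim=-1))   # modality weights, sum to 1
        fused = (w[..., :1] * self.text_proj(text_vec)
                 + w[..., 1:] * self.visual_proj(visual_vec))
        return fused, w

fusion = DynamicFusion()
text, vision = torch.randn(300), torch.randn(128)
vec, weights = fusion(text, vision)
print(vec.shape, weights)   # torch.Size([300]) and the two learned modality weights
```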

Learning Multi-Modal Word Representation Grounded in Visual Context

Representing the semantics of words is a long-standing problem for the natural language processing community.

Most methods compute word semantics given their textual context in large corpora.

More recently, researchers attempted to integrate perceptual and visual features.

Most of these works consider the visual appearance of objects to enhance word representations but they ignore the visual environment and context in which objects appear.

We propose to unify text-based techniques with vision-based techniques by simultaneously leveraging textual and visual context to learn multimodal word embeddings.

We explore various choices for what can serve as a visual context and present an end-to-end method to integrate visual context elements in a multimodal skip-gram model.

We provide experiments and extensive analysis of the obtained results.
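As a rough picture of what integrating visual context into a skip-gram model can look like, the sketch below scores a target word against both its textual neighbours and a projected visual-context vector with a negative-sampling loss. The shared scoring function, the dimensions, and the single visual vector per occurrence are assumptions; this is not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalSkipGram(nn.Module):
    """Minimal sketch of a skip-gram objective extended with visual context:
    a target word embedding is trained to score both its textual neighbours
    and a visual-context vector (e.g. features of the surrounding scene).

    The shared scoring function and the single visual vector per occurrence
    are simplifying assumptions.
    """
    def __init__(self, vocab_size=1000, dim=100, visual_dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.ctx_emb = nn.Embedding(vocab_size, dim)
        self.visual_proj = nn.Linear(visual_dim, dim)   # map visual context into word space

    def forward(self, target, context, visual, neg):
        w = self.word_emb(target)                                     # (B, dim)
        pos_text = F.logsigmoid((w * self.ctx_emb(context)).sum(-1))
        pos_vis = F.logsigmoid((w * self.visual_proj(visual)).sum(-1))
        neg_text = F.logsigmoid(-(w.unsqueeze(1) * self.ctx_emb(neg)).sum(-1)).sum(-1)
        return -(pos_text + pos_vis + neg_text).mean()                # loss to minimize

model = MultimodalSkipGram()
loss = model(torch.tensor([1, 2]), torch.tensor([3, 4]),
             torch.randn(2, 128), torch.randint(0, 1000, (2, 5)))
print(loss.item())
```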

Reposted from www.cnblogs.com/fengyubo/p/11067707.html