Application scope and scenarios
Natural language processing applications frequently need word segmentation, part-of-speech tagging, syntactic parsing, and named-entity recognition. Stanford CoreNLP has solid Chinese support, which makes it convenient to verify the feasibility of such applications quickly.
Environment configuration and construction process
Download stanford-corenlp-full-2017-06-09.zip and unzip it. Then download the matching Chinese model package, stanford-chinese-corenlp-2017-06-09-models.jar. Note that the version in the two file names must correspond, otherwise errors are easy to trigger. Place the model jar inside the directory unpacked from the first package, as shown in the figure below:
Create a startup script as shown in the figure above; take care to specify the Chinese model properties file.
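Since the figure itself is not reproduced here, the following is a typical startup command for this setup; the memory limit, port, and timeout values are assumptions you can adjust. `StanfordCoreNLP-chinese.properties` is the properties file bundled with the Chinese models jar.

```shell
# Run from inside the unpacked stanford-corenlp-full-2017-06-09 directory,
# with the Chinese models jar placed in the same directory.
java -Xmx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
  -serverProperties StanfordCoreNLP-chinese.properties \
  -port 9000 -timeout 15000
```

Once the server is up, it accepts annotation requests over HTTP on the given port.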
Start effect
Language support
The wrapper code is given below for anyone who wants to refer to it.
```python
from pycorenlp import StanfordCoreNLP  # pip install pycorenlp


class service_ana(object):
    """Wrapper around the Stanford CoreNLP service."""

    def __init__(self):
        # The CoreNLP server must already be started beforehand.
        self.nlp = StanfordCoreNLP('http://localhost:9000')

    def get_res(self, text):
        """Takes a string of one or more sentences (pre-segmented or not)
        and prints the analysis results for the first sentence."""
        output = self.nlp.annotate(text, properties={
            'annotators': 'tokenize,ssplit,pos,ner,depparse,parse',
            'outputFormat': 'json'
        })
        print(output['sentences'][0]['parse'])
        print(output['sentences'][0]['basicDependencies'])
        print(output['sentences'][0]['enhancedDependencies'])
        print(output['sentences'][0]['enhancedPlusPlusDependencies'])
        print(output['sentences'][0]['tokens'])
```
- Result analysis and extraction:
{"intent_res": "Do you like apples", "zhu_yu": "you", "bin_yu": "apples", "wei_yu": "like", "intent_index": "Do you like apples I like it"}

(`zhu_yu`, `wei_yu`, and `bin_yu` are pinyin for subject, predicate, and object.)
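As a sketch of how such subject/predicate/object fields can be read off the parser output: the `nsubj` and `dobj` relations in `basicDependencies` identify the subject, predicate, and object. The helper `extract_svo` and the sample dependency list below are illustrative assumptions, not part of the original code.

```python
def extract_svo(dependencies):
    """Pull a rough subject/predicate/object triple out of CoreNLP's
    basicDependencies list (each entry carries 'dep', 'governorGloss',
    and 'dependentGloss')."""
    res = {}
    for d in dependencies:
        if d['dep'] == 'nsubj':
            res['zhu_yu'] = d['dependentGloss']   # subject
            res['wei_yu'] = d['governorGloss']    # predicate
        elif d['dep'] in ('dobj', 'obj'):
            res['bin_yu'] = d['dependentGloss']   # object
    return res

# Hand-written sample dependencies for "你 喜欢 苹果" ("you like apples"):
sample = [
    {'dep': 'ROOT',  'governorGloss': 'ROOT', 'dependentGloss': '喜欢'},
    {'dep': 'nsubj', 'governorGloss': '喜欢', 'dependentGloss': '你'},
    {'dep': 'dobj',  'governorGloss': '喜欢', 'dependentGloss': '苹果'},
]
print(extract_svo(sample))  # {'zhu_yu': '你', 'wei_yu': '喜欢', 'bin_yu': '苹果'}
```

The exact relation labels depend on the dependency scheme the parser emits, so real code should tolerate both the older `dobj` and the newer `obj` label.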
**tips:** Intent analysis via sentence-constituent extraction like this works best when the intents in a scenario are concentrated and fine-grained; coarser intents are better handled by a classifier. Also note that syntactic parsing is slow on long sentences, so they are best avoided.