Movie Recommendation with the Apriori Algorithm

Preface:
I've been taking part in a competition lately and picked the recommender-system track. Along the way I've run into all kinds of recommendation algorithms and read papers by many experts; thanks to their research, we can focus on applications. This post is just a primer; I'll follow up later with the specifics of my competition work.

Background knowledge:

The Apriori algorithm is arguably the classic affinity-analysis algorithm. It builds frequent itemsets only from products that each appear frequently in the dataset, avoiding the exponential growth in complexity mentioned earlier. Once the frequent itemsets are found, generating association rules is easy.
The idea behind Apriori is simple yet clever. First, it ensures that a rule has enough support in the dataset; the minimum support is one of the algorithm's key parameters. For example, to generate the frequent itemset (A, B) with a required support of at least 30, both A and B must each appear at least 30 times in the dataset. Larger frequent itemsets obey the same constraint: to generate the frequent itemset (A, B, C, D), the subset (A, B, C) must itself be frequent (and D on its own must also meet the minimum support).
Once the frequent itemsets are generated, the many possible but insufficiently frequent itemsets are never considered again, which greatly reduces the time needed to test new rules.

— quoted from *Learning Data Mining with Python*
The dataset used in this post is MovieLens; it is uploaded to the netdisk linked at the bottom.
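Before the full code, the pruning idea in the quoted passage can be sketched on a toy dataset (the baskets, items, and threshold below are invented for illustration): a larger candidate is only worth counting if every smaller subset of it already met the minimum support.

```python
from itertools import combinations

# Hypothetical shopping baskets.
transactions = [
    {'A', 'B', 'C'}, {'A', 'B'}, {'A', 'C'}, {'B', 'C'}, {'A', 'B', 'C'},
]
min_support = 3  # absolute count, like the threshold of 30 in the quote

def support(itemset):
    """Number of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t)

# All 2-item candidates that meet the threshold...
frequent_pairs = [frozenset(p) for p in combinations('ABC', 2)
                  if support(frozenset(p)) >= min_support]
# ...and {A, B, C} is only a valid candidate because all of its 2-item
# subsets survived; counting its support can still reject it.
candidate = frozenset('ABC')
all_subsets_frequent = all(frozenset(s) in frequent_pairs
                           for s in combinations(candidate, 2))
print(len(frequent_pairs), all_subsets_frequent, support(candidate))
```

Here every pair is frequent (support 3), so {A, B, C} is generated as a candidate, yet its own support of 2 falls below the threshold and it is pruned.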

Full code:

import os
import pandas as pd
import sys

data_folder = 'D:\\Python\\PythonProject\\movie_recommend_test\\data'
ratings_filename = os.path.join(data_folder,'u.data')

all_ratings = pd.read_csv(ratings_filename,delimiter='\t',
    header=None,names=['UserID','MovieID','Rating','Datetime'])
all_ratings['Datetime'] = pd.to_datetime(all_ratings['Datetime'],unit='s')

# Decide whether a user liked a given movie (rating above 3)
all_ratings['Favorable'] = all_ratings['Rating'] > 3
# Use the ratings of the first 200 users as the training set
ratings = all_ratings[all_ratings['UserID'].isin(range(200))]
favorable_ratings = ratings[ratings['Favorable']]  # keep only rows where the user liked the movie
# key: user, value: frozenset of the movies they liked (note the groupby() call)
favorable_review_by_users = dict((k,frozenset(v.values)) for k,v in favorable_ratings
    .groupby('UserID')['MovieID'])
num_favorable_by_movie = ratings[['MovieID','Favorable']].groupby('MovieID').sum()
# Apply the Apriori algorithm
frequent_itemsets = {}  # dict: itemset length -> frequent itemsets of that length
min_support = 50
frequent_itemsets[1] = dict((frozenset((movie_id,)),
    row['Favorable']) for movie_id,row in num_favorable_by_movie.iterrows()
    if row['Favorable']>min_support)
from collections import defaultdict
def find_frequent_itemsets(favorable_review_by_users,k_1_itemsets,min_support):
    counts = defaultdict(int)
    for user,reviews in favorable_review_by_users.items():
        for itemset in k_1_itemsets:
            if itemset.issubset(reviews):
                for other_reviewed_movie in reviews - itemset:
                    current_superset = itemset | frozenset((other_reviewed_movie,))
                    counts[current_superset] += 1
    return dict([(itemset,frequency) for itemset,frequency in counts.items() 
        if frequency>=min_support])
# Loop: run Apriori, storing each new batch of itemsets it finds
for k in range(2,20):
    cur_frequent_itemsets = find_frequent_itemsets(favorable_review_by_users,
    frequent_itemsets[k-1],min_support)
    frequent_itemsets[k] = cur_frequent_itemsets
    if len(cur_frequent_itemsets) == 0:
        print('Did not find any frequent itemsets of length {}'.format(k))
        sys.stdout.flush()
        break
    else:
        print('I found {} frequent itemsets of length {}'.format(len(cur_frequent_itemsets),k))
        sys.stdout.flush()

del frequent_itemsets[1]
# Iterate over frequent itemsets of every length, generating rules from each
candidate_rules = []
for itemset_length,itemset_counts in frequent_itemsets.items():
    for itemset in itemset_counts.keys():
        for conclusion in itemset:
            premise = itemset - set((conclusion,))
            candidate_rules.append((premise,conclusion))
# Compute the confidence of every candidate rule
correct_counts = defaultdict(int)
incorrect_counts = defaultdict(int)
for user,reviews in favorable_review_by_users.items():
    for candidate_rule in candidate_rules:
        premise,conclusion = candidate_rule
        if premise.issubset(reviews):
            if conclusion in reviews:
                correct_counts[candidate_rule] += 1
            else:
                incorrect_counts[candidate_rule] += 1
rule_confidence = {candidate_rule:correct_counts[candidate_rule]
    /float(correct_counts[candidate_rule]+incorrect_counts[candidate_rule])
    for candidate_rule in candidate_rules}
# Sort the confidence dict and print the five rules with the highest confidence
from operator import itemgetter
sorted_confidence = sorted(rule_confidence.items(),key=itemgetter(1),reverse=True)
for index in range(5):
    print('Rule #{0}'.format(index+1))
    (premise,conclusion) = sorted_confidence[index][0]
    print('Rule:If a person recommends {0} they will also recommend {1}'
    .format(premise,conclusion))
    print('- Confidence:{0:.3f}'.format(rule_confidence[(premise,conclusion)]))
    print('')
# Map movie IDs to movie titles
movie_name_filename = os.path.join(data_folder,'u.item')
movie_name_data = pd.read_csv(movie_name_filename,delimiter='|',header=None,encoding='mac-roman')
movie_name_data.columns = ['MovieID','Title','Release Date','Video Release',
    'IMDB','<UNK>','Action','Adventure','Animation',"Children's",'Comedy',
    'Crime','Documentary','Drama','Fantasy','Film-Noir','Horror',
    'Musical','Mystery','Romance','Sci-Fi','Thriller','War','Western']
def get_movie_name(movie_id):
    title_object = movie_name_data[movie_name_data['MovieID']==movie_id]['Title']
    title = title_object.values[0]
    return title
for index in range(5):
    print('Rule #{0}'.format(index+1))
    (premise,conclusion) = sorted_confidence[index][0]
    premise_names = ', '.join(get_movie_name(idx) for idx in premise)
    conclusion_name = get_movie_name(conclusion)
    print('Rule:If a person recommends {0} they will also recommend {1}'.
        format(premise_names,conclusion_name))
    print('-Confidence:{0:.3f}'.format(rule_confidence[(premise,conclusion)]))
    print('')


Code walkthrough:
1. Data preprocessing

data_folder = 'D:\\Python\\PythonProject\\movie_recommend_test\\data'
ratings_filename = os.path.join(data_folder,'u.data')

all_ratings = pd.read_csv(ratings_filename,delimiter='\t',
    header=None,names=['UserID','MovieID','Rating','Datetime'])
all_ratings['Datetime'] = pd.to_datetime(all_ratings['Datetime'],unit='s')

2. Take the first 200 users and collect the movies each of them liked (as a dict)

# Decide whether a user liked a given movie (rating above 3)
all_ratings['Favorable'] = all_ratings['Rating'] > 3
# Use the ratings of the first 200 users as the training set
ratings = all_ratings[all_ratings['UserID'].isin(range(200))]
favorable_ratings = ratings[ratings['Favorable']]  # keep only rows where the user liked the movie
# key: user, value: frozenset of the movies they liked (note the groupby() call)
favorable_review_by_users = dict((k,frozenset(v.values)) for k,v in favorable_ratings
    .groupby('UserID')['MovieID'])
num_favorable_by_movie = ratings[['MovieID','Favorable']].groupby('MovieID').sum()

About the groupby() method:

pandas provides a flexible, efficient groupby facility that lets you slice, dice, and summarize a dataset in a natural way: split a pandas object by one or more keys (which can be functions, arrays, or DataFrame column names), then compute per-group summary statistics such as counts, means, standard deviations, or user-defined functions.
http://blog.csdn.net/leonis_v/article/details/51832916
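A minimal sketch of the groupby() pattern the code relies on, with made-up UserID/MovieID values:

```python
import pandas as pd

# Tiny stand-in for the ratings table used in the post.
ratings = pd.DataFrame({
    'UserID':  [1, 1, 2, 2, 2],
    'MovieID': [10, 20, 10, 30, 40],
})
# Split MovieID by user, then collect each user's movies into a frozenset --
# the same (key, group) iteration used to build favorable_review_by_users.
by_user = dict((user, frozenset(movies.values))
               for user, movies in ratings.groupby('UserID')['MovieID'])
print(by_user[1])  # the movies user 1 rated
```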

3. Build the function that returns the frequent itemsets meeting the minimum support

frequent_itemsets = {}  # dict: itemset length -> frequent itemsets of that length
min_support = 50
frequent_itemsets[1] = dict((frozenset((movie_id,)),
    row['Favorable']) for movie_id,row in num_favorable_by_movie.iterrows()
    if row['Favorable']>min_support)
from collections import defaultdict
def find_frequent_itemsets(favorable_review_by_users,k_1_itemsets,min_support):
    counts = defaultdict(int)
    for user,reviews in favorable_review_by_users.items():
        for itemset in k_1_itemsets:
            if itemset.issubset(reviews):
                for other_reviewed_movie in reviews - itemset:
                    current_superset = itemset | frozenset((other_reviewed_movie,))
                    counts[current_superset] += 1
    return dict([(itemset,frequency) for itemset,frequency in counts.items() 
        if frequency>=min_support])

About frozenset():

http://www.runoob.com/python/python-func-frozenset.html
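A quick sketch of why the code uses frozenset rather than a plain set: it is immutable, hence hashable, hence usable as a dictionary key, which is exactly what the support-count dicts above need.

```python
fs = frozenset([2, 1, 2, 3])          # duplicates collapse, order is ignored
counts = {fs: 50}                     # hashable, so it can key a dict
print(fs == frozenset({1, 2, 3}))     # equality ignores construction order
print(hasattr(fs, 'add'))             # False: no mutating methods at all
print(counts[frozenset({3, 2, 1})])   # equal frozensets hash alike
```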

About issubset():

Checks whether this set is a subset of another set, i.e. whether the other set contains every element of this one.
http://blog.csdn.net/heatdeath/article/details/71247717
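A minimal check of the direction issubset() tests, using hypothetical movie IDs:

```python
itemset = frozenset({50, 181})            # a candidate frequent itemset
reviews = frozenset({1, 50, 172, 181})    # movies one user liked
print(itemset.issubset(reviews))          # True: every movie in itemset was liked
print(reviews.issubset(itemset))          # False: the reverse asks about a superset
print(itemset <= reviews)                 # operator spelling of the same test
```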

About dict():

The dict() function creates a dictionary.
Parameters:
**kwargs – keyword arguments
mapping – a mapping of key-value elements
iterable – an iterable of key-value pairs (tuples, lists, etc.)
http://www.runoob.com/python/python-func-dict.html
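Of the three forms, the code above uses the iterable-of-pairs one; a small sketch (the itemsets and counts here are invented):

```python
# Iterable-of-pairs form, as used for frequent_itemsets[1].
pairs = [(frozenset({50}), 100), (frozenset({181}), 87)]
support_by_itemset = dict(pairs)
print(support_by_itemset[frozenset({50})])
print(dict(a=1, b=2))        # keyword form
print(dict({'a': 1}, b=2))   # mapping form, optionally plus keywords
```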

4. Run the loop:

# Loop: run Apriori, storing each new batch of itemsets it finds
for k in range(2,20):
    cur_frequent_itemsets = find_frequent_itemsets(favorable_review_by_users,
    frequent_itemsets[k-1],min_support)
    frequent_itemsets[k] = cur_frequent_itemsets
    if len(cur_frequent_itemsets) == 0:
        print('Did not find any frequent itemsets of length {}'.format(k))
        sys.stdout.flush()
        break
    else:
        print('I found {} frequent itemsets of length {}'.format(len(cur_frequent_itemsets),k))
        sys.stdout.flush()

About sys.stdout.flush():

sys.stdout.flush() makes sure that whatever is sitting in the output buffer is written to the terminal while the code is still running. In a large loop inside one cell of a notebook, nothing may be printed until the whole cell finishes; calling flush() guarantees timely output. Don't overuse it, though: flushing (and printing in general) has a computational cost that slows the program down.
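A minimal sketch of the pattern, with a sleep standing in for a slow Apriori pass:

```python
import sys
import time

for k in range(2, 5):
    time.sleep(0.01)                  # pretend this pass took a while
    print('finished pass of length', k)
    sys.stdout.flush()                # push the line to the terminal now
```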

5. Generate rules: premise and conclusion

# Iterate over frequent itemsets of every length, generating rules from each
candidate_rules = []
for itemset_length,itemset_counts in frequent_itemsets.items():
    for itemset in itemset_counts.keys():
        for conclusion in itemset:
            premise = itemset - set((conclusion,))  # premise of the rule
            candidate_rules.append((premise,conclusion))
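To make the triple loop concrete, here is how a single (hypothetical) 3-movie frequent itemset fans out into candidate rules, with each member taking a turn as the conclusion:

```python
itemset = frozenset({50, 64, 98})
# One rule per member: that member is the conclusion, the rest the premise.
rules = [(itemset - {conclusion}, conclusion) for conclusion in itemset]
for premise, conclusion in sorted(rules, key=lambda rule: rule[1]):
    print(sorted(premise), '->', conclusion)
```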

6. Compute confidence

# Compute the confidence of every candidate rule
correct_counts = defaultdict(int)
incorrect_counts = defaultdict(int)
for user,reviews in favorable_review_by_users.items():
    for candidate_rule in candidate_rules:
        premise,conclusion = candidate_rule
        if premise.issubset(reviews):
            if conclusion in reviews:
                correct_counts[candidate_rule] += 1
            else:
                incorrect_counts[candidate_rule] += 1
rule_confidence = {candidate_rule:correct_counts[candidate_rule]
    /float(correct_counts[candidate_rule]+incorrect_counts[candidate_rule])
    for candidate_rule in candidate_rules}
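A worked instance of the same counting, on three invented users, for the rule {181} -> 50: confidence is the fraction of premise-holders who also liked the conclusion.

```python
favorable = {
    'u1': frozenset({50, 181}),    # premise holds and conclusion appears
    'u2': frozenset({7, 181}),     # premise holds, conclusion missing
    'u3': frozenset({1, 172}),     # premise absent: this user is ignored
}
premise, conclusion = frozenset({181}), 50
correct = sum(1 for reviews in favorable.values()
              if premise.issubset(reviews) and conclusion in reviews)
incorrect = sum(1 for reviews in favorable.values()
                if premise.issubset(reviews) and conclusion not in reviews)
confidence = correct / (correct + incorrect)
print(confidence)   # 1 of the 2 premise-holders also liked movie 50
```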

7. Two output formats (movie IDs and movie titles)

# Sort the confidence dict and print the five rules with the highest confidence
from operator import itemgetter
sorted_confidence = sorted(rule_confidence.items(),key=itemgetter(1),reverse=True)
for index in range(5):
    print('Rule #{0}'.format(index+1))
    (premise,conclusion) = sorted_confidence[index][0]
    print('Rule:If a person recommends {0} they will also recommend {1}'
    .format(premise,conclusion))
    print('- Confidence:{0:.3f}'.format(rule_confidence[(premise,conclusion)]))
    print('')
# Map movie IDs to movie titles
movie_name_filename = os.path.join(data_folder,'u.item')
movie_name_data = pd.read_csv(movie_name_filename,delimiter='|',header=None,encoding='mac-roman')
movie_name_data.columns = ['MovieID','Title','Release Date','Video Release',
    'IMDB','<UNK>','Action','Adventure','Animation',"Children's",'Comedy',
    'Crime','Documentary','Drama','Fantasy','Film-Noir','Horror',
    'Musical','Mystery','Romance','Sci-Fi','Thriller','War','Western']
def get_movie_name(movie_id):
    title_object = movie_name_data[movie_name_data['MovieID']==movie_id]['Title']
    title = title_object.values[0]
    return title
for index in range(5):
    print('Rule #{0}'.format(index+1))
    (premise,conclusion) = sorted_confidence[index][0]
    premise_names = ', '.join(get_movie_name(idx) for idx in premise)
    conclusion_name = get_movie_name(conclusion)
    print('Rule:If a person recommends {0} they will also recommend {1}'.
        format(premise_names,conclusion_name))
    print('-Confidence:{0:.3f}'.format(rule_confidence[(premise,conclusion)]))
    print('')

Output:

>>> 
== RESTART: D:\Python\PythonProject\movie_recommend_test\recommend_movie.py ==
I found 93 frequent itemsets of length 2
I found 295 frequent itemsets of length 3
I found 593 frequent itemsets of length 4
I found 785 frequent itemsets of length 5
I found 677 frequent itemsets of length 6
I found 373 frequent itemsets of length 7
I found 126 frequent itemsets of length 8
I found 24 frequent itemsets of length 9
I found 2 frequent itemsets of length 10
Did not find any frequent itemsets of length 11
Rule #1
Rule:If a person recommends frozenset({98, 181}) they will also recommend 50
- Confidence:1.000

Rule #2
Rule:If a person recommends frozenset({172, 79}) they will also recommend 174
- Confidence:1.000

Rule #3
Rule:If a person recommends frozenset({258, 172}) they will also recommend 174
- Confidence:1.000

Rule #4
Rule:If a person recommends frozenset({1, 181, 7}) they will also recommend 50
- Confidence:1.000

Rule #5
Rule:If a person recommends frozenset({1, 172, 7}) they will also recommend 174
- Confidence:1.000

Rule #1
Rule:If a person recommends Silence of the Lambs, The (1991), Return of the Jedi (1983) they will also recommend Star Wars (1977)
-Confidence:1.000
Rule #2
Rule:If a person recommends Empire Strikes Back, The (1980), Fugitive, The (1993) they will also recommend Raiders of the Lost Ark (1981)
-Confidence:1.000

Rule #3
Rule:If a person recommends Contact (1997), Empire Strikes Back, The (1980) they will also recommend Raiders of the Lost Ark (1981)
-Confidence:1.000

Rule #4
Rule:If a person recommends Toy Story (1995), Return of the Jedi (1983), Twelve Monkeys (1995) they will also recommend Star Wars (1977)
-Confidence:1.000

Rule #5
Rule:If a person recommends Toy Story (1995), Empire Strikes Back, The (1980), Twelve Monkeys (1995) they will also recommend Raiders of the Lost Ark (1981)
-Confidence:1.000

Finally: if this post helped you, give it a like qaq

Link: https://pan.baidu.com/s/1JncOmPCHCvzlAM1xQEUryg  Password: e9d4

——— Follow my WeChat public account and study data mining with me ———


Reposted from blog.csdn.net/crozonkdd/article/details/79587040