Compare commits
14 Commits
988a91e995...master

| Author | SHA1 | Date |
|---|---|---|
| | 43c7ce9d6a | |
| | 763276d623 | |
| | 6fd3b67ed8 | |
| | 976d60981f | |
| | cd21b384d3 | |
| | 514346075d | |
| | 923a0ea5ae | |
| | 87f96e06f7 | |
| | e648b7823f | |
| | 43ee53bc69 | |
| | 43c55193c9 | |
| | b4bd7e305a | |
| | 2ab736a59e | |
| | 7ccd3f9daa | |
2 .gitignore vendored
@@ -23,4 +23,4 @@
 # virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
 hs_err_pid*

-.idea
+.idea/
8 .idea/.gitignore generated vendored
@@ -1,8 +0,0 @@
-# Default ignored files
-/shelf/
-/workspace.xml
-# Editor-based HTTP Client requests
-/httpRequests/
-# Datasource local storage ignored files
-/dataSources/
-/dataSources.local.xml
24 README.md
@@ -1,10 +1,15 @@
# Java Recommender System: Movie Recommendation Based on User and Item Collaborative Filtering

#### How the System Works
The system implements, in Java, user-based collaborative filtering (UserCF) and item-based collaborative filtering (ItemCF; the items here are movies).
It measures similarity with the Pearson correlation coefficient from statistics to deliver recommendations personalized to each user.

## Dataset

https://grouplens.org/datasets/movielens/100k/

#### Collaborative Filtering
Collaborative filtering is one of the earliest and best-known families of recommendation algorithms; its main functions are prediction and recommendation. Collaborative filtering (CF) is among the most important ideas in recommender systems: it recommends items based on a user's past preferences and the choices of other users with similar tastes (mining historical behavior to discover what the user likes and predicting products they are likely to enjoy). It generally relies only on behavioral data (ratings, purchases, downloads, and so on) and does not depend on any additional item attributes (item features) or user attributes (age, gender, etc.). In short: birds of a feather flock together.
The most widely used collaborative filtering algorithms today are neighborhood-based methods, which come in two main variants:
@@ -162,17 +167,16 @@ Spring Boot single project

3. The ml-100k dataset used by the project is under src/main/resources.

#### Contact & Feedback

The code was only recently cleaned up and still has rough edges; please contact me if you run into problems.

QQ: 1334512682
WeChat: vxhqqh

#### My Blog

[洛阳泰山](https://blog.csdn.net/weixin_40986713)

#### Recommended Reading

[Recommendation algorithms column](https://blog.csdn.net/weixin_40986713/category_12268014.html?spm=1001.2014.3001.5482)

#### FAQ

[View here](https://gitee.com/taisan/recommend_system/issues?assignee_id=&author_id=&branch=&collaborator_ids=&issue_search=&label_ids=&label_text=&milestone_id=&priority=&private_issue=&program_id=&project_id=taisan%2Frecommend_system&project_type=&scope=&single_label_id=&single_label_text=&sort=&state=closed&target_project=)
167 docs/examples/课程推荐计算评分.md Normal file
@@ -0,0 +1,167 @@
Project URL: [https://gitea.suimu.site/lennon/recommend_system](https://gitea.suimu.site/lennon/recommend_system)

The project provides two algorithms: user-based collaborative filtering and item-based collaborative filtering.

# Data Processing
## Raw Dataset
The user-course interaction dataset; its fields are described below:

| Field | Name | Example | Description | Range |
| :---: | :---: | :---: | :---: | :---: |
| views | viewing record | 60% | the user's viewing progress for the course | (0%,100%) |
| favorites | favorite record | 0 | whether the user favorited the course; 1 means favorited | enum(0,1) |
| likes | like record | 1 | whether the user liked the course; 1 means liked | enum(0,1) |
| comments | comment record | ["Loved it", "Would buy again"] | the user's comments on the course, as a string array | ["Great product!", "Loved it", "Would buy again"] |
| shares | share record | 1 | whether the user shared the course; 1 means shared | enum(0,1) |
| feedbacks | feedback record | ["Shipping was fast"] | the user's feedback on the course, as a string array | ["The product was good", "Shipping was fast"] |
| ratings | rating record | 3 | the user's rating of the course | (1,5) |

## Intermediate Stage: Text Sentiment Scoring
After sentiment processing, the data looks like this:

| **User ID** | **Course ID** | **Views** | **Favorites** | **Likes** | **Comments** | **Shares** | **Feedbacks** | **Rating** |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 1 | 1 | 0.28 | 1 | 0 | 0.25 | 1 | 0.87 | 1 |
| 1 | 2 | 0.49 | 0 | 1 | 0.76 | 0 | 0.65 | 3 |

Here, the comment and feedback records are run through NLP sentiment analysis, yielding a two-decimal value in (0,1); values closer to 1 indicate positive sentiment.

```bash
pip install snownlp
```

```python
from snownlp import SnowNLP

text1 = "这个产品真的很好用!"
s = SnowNLP(text1)
print(s.sentiments)  # sentiment score: 0.8380894562907347

text2 = "好烦啊,和参数对不上!"
s = SnowNLP(text2)
print(s.sentiments)  # sentiment score: 0.2734196629160368
```

## Computing the User-Item Score
In the original ml-100k dataset, the user-movie ratings are given directly by users, and this project's implementation is fairly simple: the neighbor computation for users and items lives in the `CoreMath` class. Here, we instead need to derive a single rating from the viewing, favorite, like, comment, share, feedback, and rating records.

The approach: normalize every signal to a number between 0 and 1, weight each signal by its importance, and combine them into a score between 1 and 5. That way none of the original algorithm code needs to change.

### Weight Assignment Notes
+ **views**: viewing matters, but it is a relatively passive behavior; give it a low weight.
+ **favorites**: favoriting shows genuine interest in the item; medium weight.
+ **likes**: a like is explicit positive feedback; medium-high weight.
+ **comments**: comments directly reflect what the user thinks; high weight.
+ **shares**: sharing means the user is willing to recommend the item to others; medium-high weight.
+ **feedbacks**: feedback is usually more detailed than comments; high weight.
+ **ratings**: the rating is the most direct user evaluation; highest weight.

### Suggested Weights
```python
weights = {
    'views': 0.05,      # viewing record: low weight
    'favorites': 0.1,   # favorite record: medium weight
    'likes': 0.15,      # like record: medium-high weight
    'comments': 0.2,    # comment record: high weight
    'shares': 0.15,     # share record: medium-high weight
    'feedbacks': 0.2,   # feedback record: high weight
    'ratings': 0.15     # rating record: most direct signal
}
```

### Code Example
```python
from typing import Dict, List

import numpy as np
from snownlp import SnowNLP


def calculate_composite_score(
        views: float,
        favorites: int,
        likes: int,
        comments: List[str],
        shares: int,
        feedbacks: List[str],
        rating: int,
        weights: Dict[str, float] = None
) -> float:
    if weights is None:
        print("No weights provided, using default values.")
        weights = {
            'views': 0.01,
            'favorites': 0.1,
            'likes': 0.125,
            'comments': 0.175,
            'shares': 0.125,
            'feedbacks': 0.175,
            'rating': 0.29
        }
    print(f"Weights: {weights}")

    # Quantify the comments and feedbacks; default to 0 when a list is empty.
    avg_comment_score = np.mean([SnowNLP(comment).sentiments for comment in comments]) if comments else 0
    avg_feedback_score = np.mean([SnowNLP(feedback).sentiments for feedback in feedbacks]) if feedbacks else 0

    # Round to two decimal places.
    avg_comment_score_formatted = round(avg_comment_score, 2)
    avg_feedback_score_formatted = round(avg_feedback_score, 2)

    print(f"Average comment score: {avg_comment_score_formatted}")
    print(f"Average feedback score: {avg_feedback_score_formatted}")

    # Scale the 1-5 rating down to 0-1.
    scale_rating = rating * 0.2

    # Calculate the weighted score.
    score = (
        views * weights['views'] +
        favorites * weights['favorites'] +
        likes * weights['likes'] +
        avg_comment_score_formatted * weights['comments'] +
        shares * weights['shares'] +
        avg_feedback_score_formatted * weights['feedbacks'] +
        scale_rating * weights['rating']
    )

    print(f"Score: {score}")

    # Map the 0-1 score into the range [1, 5].
    score = max(1, min(5, score * 5))

    return round(score, 2)


# Example usage
views = 75 * 0.01  # assume 75% viewing progress on this item
favorites = 1
likes = 0
comments = ["非常棒的产品!", "超爱的", "下次还买"]
shares = 1
feedbacks = ["产品很好", "发货速度很快"]
rating = 5

composite_score = calculate_composite_score(views, favorites, likes, comments, shares, feedbacks, rating)
print("Composite Score:", composite_score)
```

# Drawbacks

## Problem
This kind of score fusion can run into Simpson's paradox:

![](https://gitea.suimu.site/lennon/recommend_system/media/branch/master/docs/static/%e8%ae%a1%e7%ae%97%e6%96%b9%e5%bc%8f%e7%9a%84%e7%bc%ba%e7%82%b9.jpg)

## Possible Improvement

Replace the single 1-5 score with a multi-dimensional rating vector, e.g. [0.2, 0.3, 0.5, 0.1, 0.1], so each dimension can carry its own weight; of course, this also increases the computational cost.
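As a rough illustration of this vector idea, here is a minimal sketch (the vector dimensions, their values, and the choice of cosine similarity are assumptions for illustration, not part of the project code):

```python
import numpy as np

# Hypothetical multi-dimensional rating vectors for two user-item pairs,
# e.g. one dimension per signal (views, favorites, likes, comments, shares).
vec_a = np.array([0.2, 0.3, 0.5, 0.1, 0.1])
vec_b = np.array([0.4, 0.1, 0.6, 0.2, 0.0])

# Cosine similarity compares the full vectors instead of first collapsing
# each one into a single scalar score, which is what invites the paradox.
similarity = np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
print(round(float(similarity), 4))
```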
197 docs/generate_scores.md Normal file
@@ -0,0 +1,197 @@
# Generating User-Item Scores

## What Is a User Profile?
In a recommender system, user profiling is the process of building a feature representation of a user from their behavior, attributes, and interests. A good user profile improves recommendation accuracy. The key factors:

### 1. **User Behavior**
- **Browsing history**: content the user has viewed on the platform, including pages, products, videos, etc.
- **Search history**: keywords the user has typed into search, reflecting their current interests.
- **Click behavior**: which recommended items the user clicked, indicating what they find interesting.
- **Purchase/consumption history**: the user's purchase records, especially on e-commerce or content platforms.
- **Time preferences**: the user may be more active at particular times of day or days of the week.
- **Device information**: the device type (phone, tablet, PC) can also affect the form the recommendations take.

### 2. **User Attributes**
- **Demographics**: the user's age, gender, region, and so on.
- **Occupation and income**: these affect purchasing power and preferences.
- **Social relationships**: friends and follow relationships on social networks help infer interests.
- **Education**: the user's education level may influence their content preferences.

### 3. **Interests and Preferences**
- **Interest tags**: tags derived from behavioral data (e.g. sports, tech, music).
- **Content preferences**: preferred content types, such as news, entertainment, or education.
- **Brand preferences**: particular brands the user favors.

### 4. **Sentiment and Mood**
- **Sentiment analysis**: comments, feedback, or social media interactions reveal the user's current sentiment.
- **Mood changes**: short-term mood swings (e.g. over the last few days) also affect how receptive the user is to recommendations.

### 5. **Location and Context**
- **Geolocation**: the user's current location or recent activity area can drive localized recommendations.
- **Environmental context**: the setting the user is in (home, office, outdoors) may also affect preferences.

### 6. **Social Interaction**
- **Comments and reviews**: evaluations of products and content reveal preferences.
- **Sharing behavior**: what the user shares reflects what they find valuable or interesting.
- **Social media interaction**: interactions on social platforms also provide valuable profile input.

Applied together, these factors help the recommender system understand the user and deliver personalized content.

## How to Generate User-Item Scores
> How to generate User-Item scores from user profiles and item-level data such as ratings, comments, and likes

Many methods and strategies can turn user-content interaction data (ratings, comments, likes, etc., e.g. for videos) into a **User-Item score**. The sections below describe how to build an effective user-item scoring scheme for a recommender system.

### I. Understand the Types of Interaction Data

Before building User-Item scores, be clear about the kinds of interaction data and what they mean:

1. **Explicit feedback**:
   - **Ratings**: scores the user gives explicitly, typically 1-5 stars, directly reflecting how much they like the content.

2. **Implicit feedback**:
   - **Likes/dislikes**: simple like-or-dislike actions on content.
   - **Comments**: written evaluations that may carry sentiment and opinions.
   - **Watch time**: how long the user watched a video; a higher fraction suggests stronger interest.
   - **Shares/favorites**: sharing content or saving it signals strong approval.

### II. Data Preprocessing and Feature Engineering

Before these data can produce scores, they need preprocessing and feature engineering:

1. **Data cleaning**:
   - Remove outliers and noise, such as abnormally high or low ratings and duplicate records.
   - Handle missing values, e.g. with mean imputation or other methods.

2. **Normalization**:
   - Bring differently scaled signals, such as watch time and like counts, into the 0-1 range.

3. **Sentiment analysis** (for comments):
   - Classify comment text as positive, negative, or neutral.
   - Use NLP techniques such as sentiment lexicons or machine-learning models.

4. **Feature weighting**:
   - Assign different weights to different interaction types based on business needs and data importance.
   - For example, ratings may outweigh likes, and shares may outweigh plain views.

### III. Building the User-Item Score

#### Method 1: Weighted Sum

**Steps** (a sketch follows below):
1. **Define a weight for each interaction**:
   - E.g. rating (0.5), like (0.2), comment sentiment (0.2), watch-time ratio (0.1).

2. **Compute each component score**:
   - **Rating score**: the user's rating, normalized to 0-1.
   - **Like score**: 1 if liked, 0 otherwise.
   - **Comment score**: the sentiment result; positive is 1, negative is 0, neutral is 0.5.
   - **Watch-time score**: actual watch time divided by total video length.

3. **Compute the composite score**:
\[
\text{score} = \text{rating} \times 0.5 + \text{like} \times 0.2 + \text{comment} \times 0.2 + \text{watchtime} \times 0.1
\]

**Pros:**
- Simple and intuitive; easy to implement and explain.

**Cons:**
- The weights are subjective and need iterative tuning against observed results.
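A minimal sketch of this weighted sum, wired to the example weights in step 1 (the input values are illustrative assumptions):

```python
def weighted_sum_score(rating: float, like: int, comment: float, watch_ratio: float) -> float:
    # Each component is assumed to be normalized to [0, 1] already; the
    # weights are the example values from step 1 (0.5 / 0.2 / 0.2 / 0.1).
    return rating * 0.5 + like * 0.2 + comment * 0.2 + watch_ratio * 0.1

# A rating scaled to 0.75, a like, a positive comment, 60% of the video watched.
print(weighted_sum_score(0.75, 1, 1.0, 0.6))  # 0.835
```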
#### Method 2: Machine-Learning Models

**1. Collaborative filtering**

- **User-based CF**: recommend based on the preferences of similar users.
  - **Steps**:
    - Compute similarity between users (e.g. Pearson correlation coefficient or cosine similarity; see the sketch after this list).
    - Predict the target user's interest in unrated items from similar users' ratings.
  - **Pros**: can surface latent interest associations.
  - **Cons**: cold-start problems; works poorly for new users and new items.

- **Item-based CF**: recommend based on how well similar items are received.
  - **Steps**:
    - Compute similarity between items.
    - Predict the user's rating of a target item from their ratings of similar items.
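A minimal sketch of the similarity step for user-based CF (the toy ratings are illustrative assumptions; the project's actual implementation lives in the Java class `CoreMath`):

```python
import numpy as np

def pearson(x: np.ndarray, y: np.ndarray) -> float:
    # Pearson correlation between two users' ratings on their co-rated items.
    if len(x) < 2:
        return 0.0  # undefined with fewer than two co-rated items
    xc, yc = x - x.mean(), y - y.mean()
    denom = np.sqrt((xc ** 2).sum() * (yc ** 2).sum())
    return float((xc * yc).sum() / denom) if denom else 0.0

# Toy example: two users who rated the same three movies.
print(pearson(np.array([1.0, 3.0, 5.0]), np.array([2.0, 3.0, 4.0])))  # 1.0
```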
**2. Matrix factorization**

- **Principle**:
  - Factor the user-item interaction matrix into low-dimensional user and item latent vectors, then predict the missing ratings.
- **Input data**:
  - The user's explicit ratings; implicit feedback can serve as auxiliary information.
- **Common algorithms** (see the sketch after this list):
  - Singular Value Decomposition (SVD)
  - Non-negative Matrix Factorization (NMF)
- **Pros**: captures latent feature associations; good predictive performance.
- **Cons**: sensitive to data sparsity; training is relatively expensive.
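A minimal sketch of the matrix-factorization idea with a rank-2 truncated SVD (the toy matrix, the mean-filling of unobserved entries, and the rank are assumptions for illustration):

```python
import numpy as np

# Toy user-item rating matrix; 0 marks an unobserved rating.
R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0],
              [0.0, 1.0, 5.0, 4.0]])
R_filled = np.where(R == 0, R[R > 0].mean(), R)  # crude fill for the demo

# Rank-2 truncated SVD: R ~ U_k @ diag(s_k) @ Vt_k.
U, s, Vt = np.linalg.svd(R_filled, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Predicted score for user 1 on item 2, which was unobserved in R.
print(round(float(R_hat[1, 2]), 2))
```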
**3. Deep learning**

- **Neural network models**:
  - **Multilayer perceptron (MLP)**: feed user and item features into a neural network to learn complex non-linear relationships.
  - **AutoEncoder**: used for dimensionality reduction and feature extraction; reconstructs user preferences.
  - **Convolutional (CNN) / recurrent (RNN) networks**: handle sequential and text data, such as sentiment analysis of comment text.
- **Fusing multiple features**:
  - Feed explicit and implicit feedback, along with content features (e.g. video metadata), into the model together.
- **Pros**: handles complex, high-dimensional data; captures non-linear relationships; high predictive accuracy.
- **Cons**: needs large amounts of data and compute; training and tuning are complex.

**4. Gradient boosting trees**

- **Common algorithms**:
  - XGBoost, LightGBM, CatBoost, etc.
- **Steps**:
  - Use the user and item features as input and train the model to predict ratings.
- **Pros**: handles missing values and categorical features well; fast to train; strong performance.
- **Cons**: may underperform on sparse, very high-dimensional data.

### IV. Overall Considerations and Model Selection

**1. Handling cold start**

- **New users**: recommend from demographic information and initial interactions (e.g. interests selected at sign-up).
- **New items**: recommend from the item's content features (e.g. a video's tags and description).

**2. Model fusion**

- Combine the strengths of several models in a **hybrid recommender system**.
- **Strategies** (see the sketch after this list):
  - **Weighted fusion**: take a weighted average of the different models' predictions.
  - **Cascade fusion**: feed one model's output into another model.
  - **Meta-learning**: train a model that learns how to combine the other models' outputs.
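For the weighted-fusion strategy, a minimal sketch (the two base predictions and the 0.6/0.4 split are illustrative assumptions):

```python
def weighted_fusion(pred_cf: float, pred_mf: float, alpha: float = 0.6) -> float:
    # Blend, say, a collaborative-filtering prediction with a
    # matrix-factorization prediction; alpha weights the first model.
    return alpha * pred_cf + (1 - alpha) * pred_mf

print(weighted_fusion(4.2, 3.6))  # 3.96
```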
**3. Evaluation metrics**

- During training and model selection, evaluate performance with appropriate metrics (see the sketch after this list):
  - **RMSE (root mean squared error)**: measures the gap between predicted and true ratings.
  - **MAE (mean absolute error)**: similar to RMSE, but less sensitive to outliers.
  - **Precision@K and Recall@K**: the accuracy and recall of the top-K recommendations.
  - **MAP (mean average precision)**: an overall measure of the quality of the ranked recommendation list.
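A minimal sketch of the two rating-error metrics (the toy arrays are illustrative assumptions):

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Root mean squared error: penalizes large errors more heavily.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean absolute error: every unit of error counts the same.
    return float(np.mean(np.abs(y_true - y_pred)))

y_true = np.array([4.0, 3.0, 5.0, 2.0])
y_pred = np.array([3.5, 3.0, 4.0, 2.5])
print(rmse(y_true, y_pred), mae(y_true, y_pred))  # ~0.612  0.5
```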
**4. Online and offline experiments**

- **Offline experiments**: train and evaluate models on historical data.
- **Online experiments (A/B testing)**: test the model in the live environment and observe changes in user behavior.

### V. Implementation Steps

1. **Data collection and preprocessing**: collect the various user-content interaction data; clean and normalize it.
2. **Feature extraction and construction**: extract useful numeric and categorical features from the raw data.
3. **Model selection and training**: choose a model suited to the data and business needs; train and tune it.
4. **Evaluation and optimization**: evaluate performance with appropriate metrics and keep optimizing.
5. **Deployment and monitoring**: deploy the model to production, monitor its performance and effectiveness, and update it promptly.

### VI. Caveats

- **Data privacy and security**: collecting and using user data must comply with the relevant privacy policies and regulations to protect user privacy.
- **Model fairness and bias**: make sure the model does not develop a bias against particular groups; keep recommendations fair.
- **Explainability**: some scenarios call for explaining the recommendations to build user trust.
- **Performance and scalability**: the model must run efficiently on large data volumes and under high request concurrency.

---

By combining the various kinds of user interaction data with appropriate models and methods, you can generate accurate User-Item scores and improve both the recommender's performance and user satisfaction.

If you have more specific questions or want to dig deeper into any part, feel free to keep asking!
BIN docs/static/计算方式的缺点.jpg vendored Normal file
Binary file not shown. (Size: 260 KiB)
@@ -6,17 +6,25 @@ import org.springframework.boot.autoconfigure.SpringBootApplication;

 import java.util.List;

 /**
  * @author tarzan
  */
 @SpringBootApplication
 public class RecommendSystemApplication {

     public static void main(String[] args) {
         //SpringApplication.run(RecommendSystemApplication.class, args);
         System.out.println("------基于用户协同过滤推荐---------------下列电影");
-        List<ItemDTO> itemList= Recommend.userCfRecommend(2);
+        List<ItemDTO> itemList= Recommend.userCfRecommend(1);
         itemList.forEach(e-> System.out.println(e.getName()));
         System.out.println("------基于物品协同过滤推荐---------------下列电影");
-        List<ItemDTO> itemList1= Recommend.itemCfRecommend(2);
+        List<ItemDTO> itemList1= Recommend.itemCfRecommend(1);
         itemList1.forEach(e-> System.out.println(e.getName()));
     }

 }
@@ -33,8 +33,8 @@ public class CoreMath {
         // correlation coefficient
         double coefficient = relateDist(v,userItems,type);
         // correlation distance
-        double distance=Math.abs(coefficient);
-        distMap.put(k,distance);
+        // double distance=Math.abs(coefficient);
+        distMap.put(k,coefficient);
     }
 });
 return distMap;
@@ -50,8 +50,8 @@ public class CoreMath {
  * @return double
  */
 private static double relateDist(List<RelateDTO> xList, List<RelateDTO> yList,int type) {
-    List<Integer> xs= Lists.newArrayList();
-    List<Integer> ys= Lists.newArrayList();
+    List<Double> xs= Lists.newArrayList();
+    List<Double> ys= Lists.newArrayList();
     xList.forEach(x->{
         yList.forEach(y->{
             if(type==0){
@@ -79,7 +79,7 @@ public class CoreMath {
  * @author tarzan
  * @date 2020年07月31日 17:03:20
  */
-public static double getRelate(List<Integer> xs, List<Integer> ys){
+public static double getRelate(List<Double> xs, List<Double> ys){
     int n=xs.size();
     // need at least two elements
     if (n<2) {
@@ -19,18 +19,18 @@ public class ItemCF {
  * Description: recommend a list of movie ids
  *
  * @param itemId the current movie id
- * @param list user movie rating data
+ * @param list   user movie rating data
  * @return {@link List<Integer>}
  * @date 2023年02月02日 14:51:42
  */
 public static List<Integer> recommend(Integer itemId, List<RelateDTO> list) {
     // group by item
-    Map<Integer, List<RelateDTO>> itemMap=list.stream().collect(Collectors.groupingBy(RelateDTO::getItemId));
+    Map<Integer, List<RelateDTO>> itemMap = list.stream().collect(Collectors.groupingBy(RelateDTO::getItemId));
     // similarity of every other item to the current item
-    Map<Integer,Double> itemDisMap = CoreMath.computeNeighbor(itemId, itemMap,1);
+    Map<Integer, Double> itemDisMap = CoreMath.computeNeighbor(itemId, itemMap, 1);
     // take the closest items
-    double maxValue=Collections.max(itemDisMap.values());
-    return itemDisMap.entrySet().stream().filter(e->e.getValue()==maxValue).map(Map.Entry::getKey).collect(Collectors.toList());
+    double maxValue = Collections.max(itemDisMap.values());
+    return itemDisMap.entrySet().stream().filter(e -> e.getValue() == maxValue).map(Map.Entry::getKey).collect(Collectors.toList());
 }
@@ -19,27 +19,27 @@ public class UserCF {
  * Description: recommend a list of movie ids
  *
  * @param userId the current user
- * @param list user movie rating data
+ * @param list   user movie rating data
  * @return {@link List<Integer>}
  * @date 2023年02月02日 14:51:42
  */
 public static List<Integer> recommend(Integer userId, List<RelateDTO> list) {
     // group by user
-    Map<Integer, List<RelateDTO>> userMap=list.stream().collect(Collectors.groupingBy(RelateDTO::getUseId));
+    Map<Integer, List<RelateDTO>> userMap = list.stream().collect(Collectors.groupingBy(RelateDTO::getUseId));
     // similarity of every other user to the current user
-    Map<Integer,Double> userDisMap = CoreMath.computeNeighbor(userId, userMap,0);
+    Map<Integer, Double> userDisMap = CoreMath.computeNeighbor(userId, userMap, 0);
     // take the closest users
-    double maxValue=Collections.max(userDisMap.values());
-    Set<Integer> userIds=userDisMap.entrySet().stream().filter(e->e.getValue()==maxValue).map(Map.Entry::getKey).collect(Collectors.toSet());
+    double maxValue = Collections.max(userDisMap.values());
+    Set<Integer> userIds = userDisMap.entrySet().stream().filter(e -> e.getValue() == maxValue).map(Map.Entry::getKey).collect(Collectors.toSet());
     // pick one nearest user
     Integer nearestUserId = userIds.stream().findAny().orElse(null);
-    if(nearestUserId==null){
+    if (nearestUserId == null) {
         return Collections.emptyList();
     }
     // movies the nearest neighbor has watched
-    List<Integer> neighborItems = userMap.get(nearestUserId).stream().map(RelateDTO::getItemId).collect(Collectors.toList());
+    List<Integer> neighborItems = userMap.get(nearestUserId).stream().map(RelateDTO::getItemId).collect(Collectors.toList());
     // movies the target user has watched
-    List<Integer> userItems = userMap.get(userId).stream().map(RelateDTO::getItemId).collect(Collectors.toList());
+    List<Integer> userItems = userMap.get(userId).stream().map(RelateDTO::getItemId).collect(Collectors.toList());
     // movies the neighbor watched that the user has not
     neighborItems.removeAll(userItems);
     return neighborItems;
@@ -21,7 +21,7 @@ public class RelateDTO {
     /** item id */
     private Integer itemId;
     /** index */
-    private Integer index;
+    private Double index;

 }
@@ -20,7 +20,11 @@ import java.util.Objects;
 public class FileDataSource {

     public static String folderPath;

+    static {
+        folderPath = Objects.requireNonNull(FileDataSource.class.getResource("/ml-100k")).getPath();
+    }

 /**
@@ -31,19 +35,19 @@ public class FileDataSource {
  * @date 2020年07月31日 16:53:40
  */
 public static List<RelateDTO> getData() {
-    folderPath= Objects.requireNonNull(FileDataSource.class.getResource("/ml-100k")).getPath();
     List<RelateDTO> relateList = Lists.newArrayList();
     try {
-        FileInputStream out = new FileInputStream(folderPath+"\\u.data");
+        FileInputStream out = new FileInputStream(folderPath + File.separator + "u.data");
         InputStreamReader reader = new InputStreamReader(out, StandardCharsets.UTF_8);
         BufferedReader in = new BufferedReader(reader);
         String line;
         while ((line = in.readLine()) != null) {
             String newline = line.replaceAll("[\t]", " ");
             // 196 242 3 881250949
             String[] ht = newline.split(" ");
             Integer userId = Integer.parseInt(ht[0]);
             Integer movieId = Integer.parseInt(ht[1]);
-            Integer rating = Integer.parseInt(ht[2]);
+            Double rating = Double.parseDouble(ht[2]);
             RelateDTO dto = new RelateDTO(userId, movieId, rating);
             relateList.add(dto);
         }
@@ -61,10 +65,9 @@ public class FileDataSource {
  * @date 2020年07月31日 16:54:51
  */
 public static List<UserDTO> getUserData() {
-    folderPath= Objects.requireNonNull(FileDataSource.class.getResource("/ml-100k")).getPath();
     List<UserDTO> userList = Lists.newArrayList();
     try {
-        FileInputStream out = new FileInputStream(folderPath+"\\u.user");
+        FileInputStream out = new FileInputStream(folderPath + File.separator + "u.user");
         InputStreamReader reader = new InputStreamReader(out, StandardCharsets.UTF_8);
         BufferedReader in = new BufferedReader(reader);
         String line;
@@ -94,10 +97,9 @@ public class FileDataSource {
  * @date 2020年07月31日 16:54:22
  */
 public static List<ItemDTO> getItemData() {
-    folderPath= Objects.requireNonNull(FileDataSource.class.getResource("/ml-100k")).getPath();
     List<ItemDTO> itemList = Lists.newArrayList();
     try {
-        FileInputStream out = new FileInputStream(folderPath+"\\u.item");
+        FileInputStream out = new FileInputStream(folderPath + File.separator + "u.item");
         InputStreamReader reader = new InputStreamReader(out, StandardCharsets.UTF_8);
         BufferedReader in = new BufferedReader(reader);
         String line;
@@ -29,6 +29,7 @@ public class Recommend{
  */
 public static List<ItemDTO> userCfRecommend(int userId){
     List<RelateDTO> data= FileDataSource.getData();
+    // System.out.println(data);
     List<Integer> recommendations = UserCF.recommend(userId, data);
     return FileDataSource.getItemData().stream().filter(e->recommendations.contains(e.getId())).collect(Collectors.toList());
 }
156 src/main/resources/ml-100k/readme.txt Normal file
@@ -0,0 +1,156 @@
SUMMARY & USAGE LICENSE
=============================================

MovieLens data sets were collected by the GroupLens Research Project
at the University of Minnesota.

This data set consists of:
	* 100,000 ratings (1-5) from 943 users on 1682 movies.
	* Each user has rated at least 20 movies.
	* Simple demographic info for the users (age, gender, occupation, zip)

The data was collected through the MovieLens web site
(movielens.umn.edu) during the seven-month period from September 19th,
1997 through April 22nd, 1998. This data has been cleaned up - users
who had less than 20 ratings or did not have complete demographic
information were removed from this data set. Detailed descriptions of
the data file can be found at the end of this file.

Neither the University of Minnesota nor any of the researchers
involved can guarantee the correctness of the data, its suitability
for any particular purpose, or the validity of results based on the
use of the data set. The data set may be used for any research
purposes under the following conditions:

     * The user may not state or imply any endorsement from the
       University of Minnesota or the GroupLens Research Group.

     * The user must acknowledge the use of the data set in
       publications resulting from the use of the data set
       (see below for citation information).

     * The user may not redistribute the data without separate
       permission.

     * The user may not use this information for any commercial or
       revenue-bearing purposes without first obtaining permission
       from a faculty member of the GroupLens Research Project at the
       University of Minnesota.

If you have any further questions or comments, please contact GroupLens
<grouplens-info@cs.umn.edu>.

CITATION
==============================================

To acknowledge use of the dataset in publications, please cite the
following paper:

F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets:
History and Context. ACM Transactions on Interactive Intelligent
Systems (TiiS) 5, 4, Article 19 (December 2015), 19 pages.
DOI=http://dx.doi.org/10.1145/2827872

ACKNOWLEDGEMENTS
==============================================

Thanks to Al Borchers for cleaning up this data and writing the
accompanying scripts.

PUBLISHED WORK THAT HAS USED THIS DATASET
==============================================

Herlocker, J., Konstan, J., Borchers, A., Riedl, J.. An Algorithmic
Framework for Performing Collaborative Filtering. Proceedings of the
1999 Conference on Research and Development in Information
Retrieval. Aug. 1999.

FURTHER INFORMATION ABOUT THE GROUPLENS RESEARCH PROJECT
==============================================

The GroupLens Research Project is a research group in the Department
of Computer Science and Engineering at the University of Minnesota.
Members of the GroupLens Research Project are involved in many
research projects related to the fields of information filtering,
collaborative filtering, and recommender systems. The project is lead
by professors John Riedl and Joseph Konstan. The project began to
explore automated collaborative filtering in 1992, but is most well
known for its world wide trial of an automated collaborative filtering
system for Usenet news in 1996. The technology developed in the
Usenet trial formed the base for the formation of Net Perceptions,
Inc., which was founded by members of GroupLens Research. Since then
the project has expanded its scope to research overall information
filtering solutions, integrating in content-based methods as well as
improving current collaborative filtering technology.

Further information on the GroupLens Research project, including
research publications, can be found at the following web site:

http://www.grouplens.org/

GroupLens Research currently operates a movie recommender based on
collaborative filtering:

http://www.movielens.org/

DETAILED DESCRIPTIONS OF DATA FILES
==============================================

Here are brief descriptions of the data.

ml-data.tar.gz -- Compressed tar file. To rebuild the u data files do this:
                gunzip ml-data.tar.gz
                tar xvf ml-data.tar
                mku.sh

u.data     -- The full u data set, 100000 ratings by 943 users on 1682 items.
              Each user has rated at least 20 movies. Users and items are
              numbered consecutively from 1. The data is randomly
              ordered. This is a tab separated list of
              user id | item id | rating | timestamp.
              The time stamps are unix seconds since 1/1/1970 UTC

u.info     -- The number of users, items, and ratings in the u data set.

u.item     -- Information about the items (movies); this is a tab separated
              list of
              movie id | movie title | release date | video release date |
              IMDb URL | unknown | Action | Adventure | Animation |
              Children's | Comedy | Crime | Documentary | Drama | Fantasy |
              Film-Noir | Horror | Musical | Mystery | Romance | Sci-Fi |
              Thriller | War | Western |
              The last 19 fields are the genres, a 1 indicates the movie
              is of that genre, a 0 indicates it is not; movies can be in
              several genres at once.
              The movie ids are the ones used in the u.data data set.

u.genre    -- A list of the genres.

u.user     -- Demographic information about the users; this is a tab
              separated list of
              user id | age | gender | occupation | zip code
              The user ids are the ones used in the u.data data set.

u.occupation -- A list of the occupations.

u1.base    -- The data sets u1.base and u1.test through u5.base and u5.test
u1.test       are 80%/20% splits of the u data into training and test data.
u2.base       Each of u1, ..., u5 have disjoint test sets; this if for
u2.test       5 fold cross validation (where you repeat your experiment
u3.base       with each training and test set and average the results).
u3.test       These data sets can be generated from u.data by mku.sh.
u4.base
u4.test
u5.base
u5.test

ua.base    -- The data sets ua.base, ua.test, ub.base, and ub.test
ua.test       split the u data into a training set and a test set with
ub.base       exactly 10 ratings per user in the test set. The sets
ub.test       ua.test and ub.test are disjoint. These data sets can
              be generated from u.data by mku.sh.

allbut.pl  -- The script that generates training and test sets where
              all but n of a users ratings are in the training data.

mku.sh     -- A shell script to generate all the u data sets from u.data.
10 src/main/resources/ml-100k/u.data-test Normal file
@@ -0,0 +1,10 @@
1 1 1 847117005
1 2 10 847642142
1 3 1 847641896
2 1 2 847642008
2 3 2 847641956
3 1 3 847641956
3 3 3 847642073
4 1 4 847642105
4 2 1 847116751
4 3 5 847116787