
convmf's Introduction

Convolutional Matrix Factorization (ConvMF)

Overview

Sparseness of user-to-item rating data is one of the major factors that deteriorate the quality of recommender systems. To handle the sparsity problem, several recommendation techniques have been proposed that additionally consider auxiliary information to improve rating prediction accuracy. In particular, when rating data is sparse, document modeling-based approaches have improved accuracy by additionally utilizing textual data such as reviews, abstracts, or synopses. However, due to the inherent limitation of the bag-of-words model, they have difficulty effectively utilizing the contextual information of the documents, which leads to a shallow understanding of the documents. This paper proposes a novel context-aware recommendation model, convolutional matrix factorization (ConvMF), which integrates a convolutional neural network (CNN) into probabilistic matrix factorization (PMF). Consequently, ConvMF captures the contextual information of documents and further improves rating prediction accuracy.
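
At a high level (summarizing the model described in the paper, in LaTeX notation; see the paper for the exact formulation and objective), ConvMF couples the two components by generating each item latent vector from the CNN representation of that item's document plus Gaussian noise, while ratings are drawn around the usual PMF inner product:

u_i \sim \mathcal{N}(0, \sigma_U^2 I)
v_j = \mathrm{cnn}(W, X_j) + \epsilon_j, \quad \epsilon_j \sim \mathcal{N}(0, \sigma_V^2 I)
r_{ij} \sim \mathcal{N}(u_i^{\top} v_j, \sigma^2)

Here X_j is item j's document and W denotes the CNN weights; the lambda_u and lambda_v options described under Configuration are the regularization weights tied to these user and item variance terms.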

Paper

  • Convolutional Matrix Factorization for Document Context-Aware Recommendation (RecSys 2016)
    • Donghyun Kim, Chanyoung Park, Jinoh Oh, Seungyong Lee, Hwanjo Yu
  • Deep Hybrid Recommender Systems via Exploiting Document Context and Statistics of Items (Information Sciences (SCI))

Requirements

How to Run

Note: Run python <install_path>/run.py -h in a bash shell to see how to configure the parameters of ConvMF (an example invocation is given at the end of the Configuration section below).

Configuration

You can evaluate our model with different settings, such as the size of the latent dimension, the values of the hyperparameters, and the number of convolutional kernels. Below is a description of all configurable parameters and their defaults:

Parameter | Default
-h, --help | (none)
-c <bool>, --do_preprocess <bool> | False
-r <path>, --raw_rating_data_path <path> | (none)
-i <path>, --raw_item_document_data_path <path> | (none)
-m <integer>, --min_rating <integer> | (none)
-l <integer>, --max_length_document <integer> | 300
-f <float>, --max_df <float> | 0.5
-s <integer>, --vocab_size <integer> | 8000
-t <float>, --split_ratio <float> | 0.2
-d <path>, --data_path <path> | (none)
-a <path>, --aux_path <path> | (none)
-o <path>, --res_dir <path> | (none)
-e <integer>, --emb_dim <integer> | 200
-p <path>, --pretrain_w2v <path> | (none)
-g <bool>, --give_item_weight <bool> | True
-k <integer>, --dimension <integer> | 50
-u <float>, --lambda_u <float> | (none)
-v <float>, --lambda_v <float> | (none)
-n <integer>, --max_iter <integer> | 200
-w <integer>, --num_kernel_per_ws | 100

  1. do_preprocess: True or False; whether to preprocess the raw data for ConvMF.
  2. raw_rating_data_path: path to the raw rating data file. The data format should be user id::item id::rating.
  3. min_rating: users who have fewer than min_rating ratings will be removed.
  4. max_length_document: the maximum length of each item's document.
  5. max_df: threshold for ignoring terms whose document frequency is higher than the given value, i.e., for removing corpus-specific stop words.
  6. vocab_size: the size of the vocabulary.
  7. split_ratio: fractions 1-ratio, ratio/2, and ratio/2 of the entire dataset are used as the training, validation, and test sets, respectively (e.g., with the default 0.2: 80% training, 10% validation, 10% test).
  8. data_path: path to the training, validation, and test datasets.
  9. aux_path: path to the R and D_all sets that are generated during the preprocessing step.
  10. res_dir: path to the directory where ConvMF's results are written.
  11. emb_dim: the size of the latent dimension for word vectors.
  12. pretrain_w2v: path to a pretrained word embedding model used to initialize the word vectors.
  13. give_item_weight: True or False; whether to give item weights for R-ConvMF.
  14. dimension: the size of the latent dimension for users and items.
  15. lambda_u: regularization parameter for users.
  16. lambda_v: regularization parameter for items.
  17. max_iter: the maximum number of iterations.
  18. num_kernel_per_ws: the number of kernels per window size for the CNN module.
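
For reference, a typical end-to-end run might look like the following two commands. This is only a sketch: the file names (ratings.dat, plots.dat), directory layout, and hyperparameter values are placeholders rather than shipped defaults, and the boolean flags are assumed to accept True/False as listed in the table above.

# 1) Preprocess the raw ratings and item documents (placeholder paths)
python <install_path>/run.py -c True -r data/ratings.dat -i data/plots.dat -m 1 -d data/preprocessed/cf/ -a data/preprocessed/

# 2) Train ConvMF on the preprocessed data (placeholder regularization values)
python <install_path>/run.py -d data/preprocessed/cf/ -a data/preprocessed/ -o results/ -e 200 -k 50 -u 10 -v 100 -g True

Since lambda_u and lambda_v have no defaults, they presumably need to be passed explicitly for training; the remaining options fall back to the defaults shown above.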

convmf's People

Contributors

cartopy

convmf's Issues

Dataset Download Issue

While trying to download the dataset, I get a 403 Forbidden error: "You don't have permission to access /~cartopy/ConvMF/data/ on this server."
It was working last week; could you please fix it?

No 'Plot.idmap' file

When I try to run the 'run_test_preprocess.sh' script to preprocess the data, I find that without the 'Plot.idmap' file I cannot read the item text.

Hello author, I have a question.

Hello.

I have a question about the paper and the code; it concerns the epsilon variable.

  1. On page 3 of the paper, the epsilon variable, which denotes Gaussian noise, is added to the CNN features. I am wondering what the epsilon variable means and why it needs to be used!

  2. Looking at models.py in the code, it seems the epsilon variable is not used... is that right? If so, I am curious why. Or maybe I am just not finding it?

I would really appreciate it if you could let me know!

code error

When I set

do_preprocess = True

Traceback (most recent call last):
  File "D:/pycode/ConvMF/run.py", line 97, in <module>
    path_rating, path_itemtext, min_rating, max_length, max_df, vocab_size)
  File "D:\pycode\ConvMF\data_manager.py", line 372, in preprocess
    tmp_plot = tmp[1].split('|')
IndexError: list index out of range

do_preprocess = False

===================================ConvMF Option Setting===================================
Traceback (most recent call last):
  File "D:/pycode/ConvMF/run.py", line 132, in <module>
    R, D_all = data_factory.load(aux_path)
  File "D:\pycode\ConvMF\data_manager.py", line 42, in load
    R = pickle.load(open(path + "ratings.all", "rb"))
  File "D:\Anaconda2\lib\pickle.py", line 1384, in load
    return Unpickler(file).load()
  File "D:\Anaconda2\lib\pickle.py", line 864, in load
    dispatch[key](self)
  File "D:\Anaconda2\lib\pickle.py", line 1096, in load_global
    klass = self.find_class(module, name)
  File "D:\Anaconda2\lib\pickle.py", line 1130, in find_class
    __import__(module)
ImportError: No module named copy_get

How could this be?

copy_reg ? Error

===================================ConvMF Option Setting===================================
data path - D:/pycode/ConvMF/data/preprocessed/movielens_10m/cf/0.2_1/
aux path - D:/pycode/ConvMF/data/preprocessed/movielens_10m/

Traceback (most recent call last):
  File "D:/pycode/ConvMF/run.py", line 132, in <module>
    R, D_all = data_factory.load(aux_path)
  File "D:\pycode\ConvMF\data_manager.py", line 31, in load
    R = pickle.load(open(path + "ratings.all", "rb"))  # , encoding='iso-8859-
ImportError: No module named 'copy_reg\r'
