# Recommender System with PyTorch 🟢🟡🔴

PyTorch implementations of classic recommender system models, mainly for self-learning and communication.

Check out the `tensorflow` branch for the TensorFlow versions.

Corresponding papers: 📖 RS_Papers 📖
| Model   | Dataset | Loss function | Metrics     | State |
|---------|---------|---------------|-------------|-------|
| LFM     | ml-100k | MSELoss       | MSE: 0.9031 | 🟢    |
| BiasSVD | ml-100k | MSELoss       | MSE: 0.8605 | 🟢    |
| SVD++   | ml-100k | MSELoss       | MSE: 0.8493 | 🟢    |
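As a minimal sketch of the BiasSVD idea (not the repo's actual code; dimensions, init, and the `global_mean` argument are illustrative), the predicted rating is the global mean plus user and item biases plus a latent-factor dot product:

```python
import torch
import torch.nn as nn

class BiasSVD(nn.Module):
    """r_hat(u, i) = mu + b_u + b_i + <p_u, q_i>  (illustrative sketch)."""
    def __init__(self, n_users, n_items, dim=16, global_mean=0.0):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.user_bias = nn.Embedding(n_users, 1)
        self.item_bias = nn.Embedding(n_items, 1)
        self.mu = global_mean
        nn.init.normal_(self.user_emb.weight, std=0.01)
        nn.init.normal_(self.item_emb.weight, std=0.01)
        nn.init.zeros_(self.user_bias.weight)
        nn.init.zeros_(self.item_bias.weight)

    def forward(self, users, items):
        # users, items: LongTensor of shape (batch,)
        dot = (self.user_emb(users) * self.item_emb(items)).sum(-1)
        return (self.mu
                + self.user_bias(users).squeeze(-1)
                + self.item_bias(items).squeeze(-1)
                + dot)
```

Trained with `MSELoss` against observed ratings, this matches the loss/metric pairing in the table above; LFM is the same model without the bias terms.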
| Model | Dataset | Loss function | Metrics     | State |
|-------|---------|---------------|-------------|-------|
| FM    | criteo  | BCELoss       | AUC: 0.6934 | 🟢    |
| FFM   | criteo  | BCELoss       | AUC: 0.6729 | 🟢    |
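The core of FM is the second-order interaction term, which can be computed in O(fields × dim) instead of the naive pairwise O(fields² × dim) via the sum-of-squares identity. A hedged sketch (the function name and tensor layout are illustrative, not the repo's API):

```python
import torch

def fm_second_order(v):
    """FM pairwise interactions: sum_{i<j} <v_i, v_j>.

    v: (batch, n_fields, dim) embedded feature vectors.
    Uses 0.5 * ((sum_i v_i)^2 - sum_i v_i^2), summed over the embedding dim.
    """
    sum_sq = v.sum(dim=1).pow(2)   # (batch, dim): square of sum
    sq_sum = v.pow(2).sum(dim=1)   # (batch, dim): sum of squares
    return 0.5 * (sum_sq - sq_sum).sum(dim=1)  # (batch,)
```

FFM differs by giving each feature a separate embedding per interacting field, so this shortcut no longer applies and the pairwise loop is explicit.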
| Model  | Dataset | Loss function     | Metrics                          | State |
|--------|---------|-------------------|----------------------------------|-------|
| FPMC   | ml-100k | sBPRLoss          | Recall@10: 0.0622                | 🟢    |
| SASRec | ml-100k | BCEWithLogitsLoss | NDCG@10: 0.1801, HR@10: 0.3595   | 🟢    |
| Model     | Dataset | Loss function | Metrics     | State |
|-----------|---------|---------------|-------------|-------|
| RippleNet | ml-1m   | BCELoss       | AUC: 0.8838 | 🟢    |
| Model    | Dataset | Loss function | Metrics     | State |
|----------|---------|---------------|-------------|-------|
| NeuralCF | ml-100k | MSELoss       | MSE: 0.3322 | 🟢    |
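A hedged sketch of the NeuMF architecture behind NeuralCF (illustrative only; the repo's version trains with `MSELoss` for rating prediction, so the final layer below emits an unbounded score rather than a sigmoid probability):

```python
import torch
import torch.nn as nn

class NeuMF(nn.Module):
    """GMF branch (element-wise product) fused with an MLP branch."""
    def __init__(self, n_users, n_items, dim=8):
        super().__init__()
        # separate embedding tables per branch, as in the NCF paper
        self.gmf_u = nn.Embedding(n_users, dim)
        self.gmf_i = nn.Embedding(n_items, dim)
        self.mlp_u = nn.Embedding(n_users, dim)
        self.mlp_i = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.out = nn.Linear(2 * dim, 1)  # fuses GMF + MLP outputs

    def forward(self, u, i):
        gmf = self.gmf_u(u) * self.gmf_i(i)                               # (B, dim)
        mlp = self.mlp(torch.cat([self.mlp_u(u), self.mlp_i(i)], dim=-1))  # (B, dim)
        return self.out(torch.cat([gmf, mlp], dim=-1)).squeeze(-1)        # (B,)
```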
| Model  | Dataset | Loss function | Metrics     | State |
|--------|---------|---------------|-------------|-------|
| FNN    | criteo  | BCELoss       | AUC: 0.6787 | 🟢    |
| DeepFM | criteo  | BCELoss       | AUC: 0.6854 | 🟢    |
| NFM    | criteo  | BCELoss       | AUC: 0.6705 | 🟢    |
| AFM    | criteo  | BCELoss       | AUC: 0.6572 | 🟢    |
| Model         | Dataset     | Loss function | Metrics     | State |
|---------------|-------------|---------------|-------------|-------|
| Deep Crossing | criteo      | BCELoss       | AUC: 0.7210 | 🟢    |
| PNN           | criteo      | BCELoss       | AUC: 0.6360 | 🟢    |
| Wide&Deep     | criteo      | BCELoss       | AUC: 0.7074 | 🟢    |
| DCN           | criteo      | BCELoss       | AUC: 0.7335 | 🟢    |
| DIN           | amazon book | BCELoss       | AUC: 0.5988 | 🟢    |
DIN: the feature engineering (negative sampling) the paper uses for Amazon Book seems to work poorly here; despite considerable effort, the test AUC does not reach the paper's reported 0.811 on Amazon Book.
| Model | Dataset       | Loss function     | Metrics                                        | State |
|-------|---------------|-------------------|------------------------------------------------|-------|
| MMOE  | census-income | BCEWithLogitsLoss | income-AUC: 0.9061, marry-AUC: 0.9637          | 🟢    |
| ESMM  | census-income | BCEWithLogitsLoss | income-ctr-AUC: 0.9242, ctcvr-AUC: 0.9122      | 🟢    |
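The multi-task idea in MMoE can be sketched as shared experts with one softmax gate per task (illustrative dimensions; not the repo's actual code). Each task tower returns a logit, which pairs with the `BCEWithLogitsLoss` used in the table above:

```python
import torch
import torch.nn as nn

class MMoE(nn.Module):
    """Multi-gate Mixture-of-Experts: shared experts, per-task gating."""
    def __init__(self, in_dim, expert_dim=32, n_experts=4, n_tasks=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, expert_dim), nn.ReLU())
            for _ in range(n_experts)
        )
        self.gates = nn.ModuleList(nn.Linear(in_dim, n_experts) for _ in range(n_tasks))
        self.towers = nn.ModuleList(nn.Linear(expert_dim, 1) for _ in range(n_tasks))

    def forward(self, x):
        # x: (batch, in_dim); experts stacked to (batch, n_experts, expert_dim)
        e = torch.stack([expert(x) for expert in self.experts], dim=1)
        logits = []
        for gate, tower in zip(self.gates, self.towers):
            w = torch.softmax(gate(x), dim=-1).unsqueeze(-1)  # (batch, n_experts, 1)
            logits.append(tower((w * e).sum(dim=1)))          # (batch, 1) per task
        return logits
```

ESMM instead chains two towers so that pCTCVR = pCTR × pCVR, sharing the embedding layer between the two tasks.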