e4srec's People

Contributors

hestiasky

e4srec's Issues

The performance gap compared to P5

Hi! It's great work and thanks for publishing the code.

However, according to the results in your paper, the performance seems to be much worse than P5 ("Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)"), especially on Yelp (H@5 0.0266 vs. 0.0574) and Sports (H@5 0.0281 vs. 0.0387). I have checked that the dataset statistics and the evaluation methods are the same in your paper and in P5, so I am curious about the reason behind this large performance gap. Are there any differences in design choices or implementation?
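As a side note for readers comparing these numbers, H@5 is the standard top-5 hit rate. Below is a minimal, hedged sketch of how it is typically computed when every candidate item is scored; it is illustrative only and not taken from either paper's code.

import torch

def hit_rate_at_k(scores: torch.Tensor, target: torch.Tensor, k: int = 5) -> float:
    # scores: (B, num_items) predicted scores for every candidate item
    # target: (B,) index of the ground-truth next item
    topk = scores.topk(k, dim=1).indices              # (B, k) top-k item indices per user
    hits = (topk == target.unsqueeze(1)).any(dim=1)   # whether the target appears in the top-k
    return hits.float().mean().item()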

Missing data files

When I run the data preprocessing script, I found that some data files are missing:
datas = []
# older Amazon
data_flie = '/path/reviews_' + dataset_name + '.json.gz'
# latest Amazon
# data_flie = '/home/hui_wang/data/new_Amazon/' + dataset_name + '.json.gz'

Could you provide them?
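The missing files appear to be the raw Amazon review dumps (reviews_<Dataset>.json.gz) that the preprocessing script expects, rather than files shipped with this repo. As a hedged aside, here is a minimal sketch of how such a dump can be loaded once downloaded; the path and the Beauty dataset name are placeholders, and the strict/loose JSON fallback is an assumption about the dump format:

import ast
import gzip
import json

# Placeholder path: point this at the downloaded Amazon review dump.
data_flie = '/path/reviews_Beauty.json.gz'

datas = []
with gzip.open(data_flie, 'rt', encoding='utf-8') as f:
    for line in f:
        line = line.strip()
        try:
            datas.append(json.loads(line))        # newer dumps are strict JSON lines
        except json.JSONDecodeError:
            datas.append(ast.literal_eval(line))  # some older dumps use Python-literal style records

print(len(datas))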

RuntimeError: expected mat1 and mat2 to have the same dtype, but got: c10::Half != float

Dear authors,
Thanks for your nice work! I have cloned this repo and downloaded the 'Platypus2-7B' model. However, I encounter the following error when running the fine-tuning script:
Traceback (most recent call last):
  File "finetune.py", line 245, in <module>
    fire.Fire(train)
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "finetune.py", line 172, in train
    trainer.train(resume_from_checkpoint=resume_from_checkpoint)
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/transformers/trainer.py", line 1539, in train
    return inner_training_loop(
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/transformers/trainer.py", line 1869, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/transformers/trainer.py", line 2777, in training_step
    self.accelerator.backward(loss)
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/accelerate/accelerator.py", line 1851, in backward
    self.scaler.scale(loss).backward(**kwargs)
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/torch/autograd/function.py", line 288, in apply
    return user_fn(self, *args)
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 288, in backward
    torch.autograd.backward(outputs_with_grad, args_with_grad)
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/torch/autograd/function.py", line 288, in apply
    return user_fn(self, *args)
  File "/home/anaconda3/envs/platypus/lib/python3.8/site-packages/bitsandbytes/autograd/_functions.py", line 480, in backward
    grad_A = torch.matmul(grad_output, CB).view(ctx.grad_shape).to(ctx.dtype_A)
RuntimeError: expected mat1 and mat2 to have the same dtype, but got: c10::Half != float
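A common workaround for this Half/float mismatch in 8-bit + LoRA setups is to prepare the quantized model for training before attaching the LoRA adapters, so the non-quantized parameters are cast consistently. The sketch below is hedged and illustrative; it is not necessarily how finetune.py is written, and the model name and LoRA settings are assumptions mirroring the issue above.

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = AutoModelForCausalLM.from_pretrained(
    "garage-bAInd/Platypus2-7B",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Casts the non-quantized parameters (e.g. norms) to fp32 and makes the input
# embeddings require grads, which avoids Half/float mixing in bitsandbytes' backward.
# (In older peft versions this helper is called prepare_model_for_int8_training.)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(r=16, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, lora_config)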

Runtime error "Cannot set a non-string value as the PAD token"

Dear authors,
Thanks for your nice work! I have cloned this repo and downloaded the 'Platypus2-70B-instruct' model. However, I encounter the following error when running the fine-tuning script:

Traceback (most recent call last):
  File "finetune.py", line 245, in <module>
    fire.Fire(train)
  File "/data/anaconda3/envs/llama/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/data/anaconda3/envs/llama/lib/python3.8/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/data/anaconda3/envs/llama/lib/python3.8/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "finetune.py", line 118, in train
    model = LLM4Rec(
  File "/data/baseline/E4SRec/model.py", line 42, in __init__
    self.llama_tokenizer.pad_token = 0
  File "/data/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1145, in pad_token
    raise ValueError("Cannot set a non-string value as the PAD token")
ValueError: Cannot set a non-string value as the PAD token

It seems that the pad token cannot be set to a number.
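The error is raised because model.py assigns the integer 0 to llama_tokenizer.pad_token, which must be a string. Below is a hedged sketch of two possible workarounds; the model name mirrors the issue above, and whether the authors intended either of these is an assumption.

from transformers import LlamaTokenizer

llama_tokenizer = LlamaTokenizer.from_pretrained("garage-bAInd/Platypus2-70B-instruct")

# Option 1: set the integer attribute instead of the string one
# (id 0 is usually "<unk>" in the LLaMA vocabulary).
llama_tokenizer.pad_token_id = 0

# Option 2: reuse an existing special token, which is a string.
llama_tokenizer.pad_token = llama_tokenizer.eos_token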

Acceptance of WWW

Dear authors of E4SRec:
I have seen your paper at https://arxiv.org/pdf/2312.02443.pdf, and it states that your work has been accepted by WWW'24. I am also an author and wonder whether the WWW acceptance results have been finalized. Could you please share some information about WWW'24 with me? This is of vital importance for my work arrangements.

torch.distributed.elastic.multiprocessing.errors.ChildFailedError

Is anyone else having this issue? I am using a single GPU; I have tried decreasing the batch size and increasing the memory, but I haven't been able to fix it.

File "/home/user/.local/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================

user embedding

How can I get the user embedding from the sequential model when using the Beauty dataset?
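The repo's exact API is not shown in this issue, so the following is only a hedged sketch: in a SASRec-style sequential backbone, the user representation is usually the hidden state at the last non-padding position of the interaction sequence. The function and argument names below are hypothetical.

import torch

@torch.no_grad()
def get_user_embedding(seq_model, item_seq):
    # item_seq: (B, L) padded item-id sequences, 0 = padding, ordered oldest -> newest.
    # Assumes seq_model(item_seq) returns hidden states of shape (B, L, H).
    seq_model.eval()
    hidden = seq_model(item_seq)
    last_pos = (item_seq > 0).sum(dim=1).clamp(min=1) - 1     # index of each user's last real item
    return hidden[torch.arange(item_seq.size(0)), last_pos]  # (B, H) user embeddings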

Question about the dataset files

Why are the files generated by data_process.py different from the dataset files you provide?

Taking Beauty as an example,
data_process.py generates two files:
Beauty_neg.txt and Beauty_item2attributes.json
(screenshots attached)

whereas in E4SRec/datasets/sequential/Beauty you provide:
Beauty_item2attributes.json and Beauty.txt
(screenshots attached)

Why is that?

  1. The generated Beauty_neg.txt is identical to the Beauty.txt you provide; doesn't "neg" stand for negative sampling?
  2. The generated Beauty_item2attributes.json is different from the Beauty_item2attributes.json you provide. Why?

Details of experiments and Reproducibility issue

Hi, your work is very interesting and has been a great source of inspiration to me. As I attempt to replicate the results presented in your paper, I have the following questions:

  1. In your paper, you mention performing one epoch of instruction tuning on LLaMA2-13B, but the instruction dataset used has not been released. However, in your experimental script the base model is listed as garage-bAInd/Platypus2-70B-instruct, which is confusing. Which base model did you actually use for the experiments reported in your paper? If it was LLaMA2-13B, could you please release the dataset? If the base model is garage-bAInd/Platypus2-70B-instruct, then my results on the Beauty dataset are significantly lower than those reported in your paper (see Table 6), as shown in the attached screenshot.
  2. Regarding dataset processing, you state in your paper that "the maximum sequence length is set to 50 for all models on all datasets." However, upon reviewing your code, I found that the length of user interaction records is not limited to 50; in fact, some exceed 100 in the training data. Could this inconsistency in maximum length lead to unfair comparisons? (See the truncation sketch below.)
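For concreteness, here is a minimal, hedged sketch (not taken from the repo's code) of the kind of truncation a maximum sequence length of 50 would imply, keeping only each user's most recent interactions:

MAX_LEN = 50

def truncate_sequences(user2items, max_len=MAX_LEN):
    # user2items maps a user id to a chronologically ordered list of item ids;
    # keep only the newest `max_len` interactions for each user.
    return {user: items[-max_len:] for user, items in user2items.items()}

# Example: a user with 120 interactions keeps only the most recent 50.
assert len(truncate_sequences({1: list(range(120))})[1]) == 50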

Hyperparameters on Yelp

Hi! I am trying to reproduce the results on Yelp.
May I ask what your hyperparameter settings on Yelp are?

ValueError: Target modules [gate_proj, down_proj, up_proj] not found in the base model. Please check the target modules and try again.

I use huggyllama-7b as the base model.

Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/data1/E4SRec/finetune.py", line 245, in <module>
    fire.Fire(train)
  File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/data1/E4SRec/finetune.py", line 118, in train
    model = LLM4Rec(
  File "/data1/E4SRec/model.py", line 39, in __init__
    self.llama_model = get_peft_model(self.llama_model, peft_config)
  File "/usr/local/lib/python3.10/dist-packages/peft/mapping.py", line 106, in get_peft_model
    return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)
  File "/usr/local/lib/python3.10/dist-packages/peft/peft_model.py", line 1658, in __init__
    super().__init__(model, peft_config, adapter_name)
  File "/usr/local/lib/python3.10/dist-packages/peft/peft_model.py", line 111, in __init__
    self.base_model = PEFT_TYPE_TO_MODEL_MAPPING[peft_config.peft_type](
  File "/usr/local/lib/python3.10/dist-packages/peft/tuners/lora.py", line 274, in __init__
    super().__init__(model, config, adapter_name)
  File "/usr/local/lib/python3.10/dist-packages/peft/tuners/tuners_utils.py", line 88, in __init__
    self.inject_adapter(self.model, adapter_name)
  File "/usr/local/lib/python3.10/dist-packages/peft/tuners/tuners_utils.py", line 222, in inject_adapter
    raise ValueError(
ValueError: Target modules [gate_proj, down_proj, up_proj] not found in the base model. Please check the target modules and try again.
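A standard Hugging Face LLaMA conversion does expose gate_proj/up_proj/down_proj, so this error suggests the loaded checkpoint uses different module names. A hedged way to check is to list the Linear module names of the loaded base model; the checkpoint id "huggyllama/llama-7b" below is an assumption about what "huggyllama-7b" refers to.

import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

# Collect the leaf names of all Linear modules so they can be compared
# against the LoRA target_modules list.
linear_names = sorted({name.split(".")[-1]
                       for name, module in model.named_modules()
                       if isinstance(module, nn.Linear)})
print(linear_names)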

Number of negative samples for the baselines

Hi, how many negative samples were used for the baselines in the paper? E4SRec uses a cross-entropy loss in which the negatives are the entire candidate set; did the baselines also use the entire candidate set as negatives?
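For context, here is a minimal, hedged sketch of what "cross-entropy over the entire candidate set" means in practice: every item other than the target acts as a negative, so no explicit negative sampling is needed. Names and shapes are illustrative and not taken from the repo.

import torch
import torch.nn.functional as F

def full_softmax_ce(user_repr: torch.Tensor, item_emb: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # user_repr: (B, H) user/sequence representations
    # item_emb:  (num_items, H) embeddings of every candidate item
    # target:    (B,) index of the ground-truth next item
    logits = user_repr @ item_emb.T          # score every candidate item
    return F.cross_entropy(logits, target)   # softmax over the full catalogue

# Illustrative usage with random tensors:
loss = full_softmax_ce(torch.randn(32, 64), torch.randn(10000, 64), torch.randint(0, 10000, (32,)))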
