Comments (3)
@reddiamond1234 We don't support the Windows platform at the moment.
from llama-adapter.
This problem isn't limited to Windows. On Ubuntu w/ a CUDA 12 driver:
(llama_adapter) dev@desktop:~/projects/LLaMA-Adapter$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.2 LTS
Release: 22.04
Codename: jammy
(llama_adapter) dev@desktop:~/projects/LLaMA-Adapter$ torchrun --nproc_per_node 1 example.py \
--ckpt_dir $TARGET_FOLDER/model_size \
--tokenizer_path $TARGET_FOLDER/tokenizer.model \
--adapter_path $ADAPTER_PATH
Traceback (most recent call last):
File "example.py", line 114, in <module>
fire.Fire(main)
File "/home/dev/miniconda/envs/llama_adapter/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/dev/miniconda/envs/llama_adapter/lib/python3.8/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/dev/miniconda/envs/llama_adapter/lib/python3.8/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "example.py", line 88, in main
local_rank, world_size = setup_model_parallel()
File "example.py", line 35, in setup_model_parallel
torch.distributed.init_process_group("nccl")
File "/home/dev/miniconda/envs/llama_adapter/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 907, in init_process_group
default_pg = _new_process_group_helper(
File "/home/dev/miniconda/envs/llama_adapter/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1013, in _new_process_group_helper
raise RuntimeError("Distributed package doesn't have NCCL " "built in")
RuntimeError: Distributed package doesn't have NCCL built in
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2214009) of binary: /home/dev/miniconda/envs/llama_adapter/bin/python
Traceback (most recent call last):
File "/home/dev/miniconda/envs/llama_adapter/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==2.0.1', 'console_scripts', 'torchrun')())
File "/home/dev/miniconda/envs/llama_adapter/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/dev/miniconda/envs/llama_adapter/lib/python3.8/site-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/home/dev/miniconda/envs/llama_adapter/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/home/dev/miniconda/envs/llama_adapter/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/dev/miniconda/envs/llama_adapter/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
example.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-05-12_11:56:32
host : desktop
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 2214009)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
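For anyone hitting the same `RuntimeError: Distributed package doesn't have NCCL built in`: it usually means the installed torch wheel was built without NCCL support (typically a CPU-only build pulled in by requirements.txt). A quick way to check, before reinstalling anything (this snippet is my own diagnostic, not part of the repo):

```python
import torch
import torch.distributed as dist

# If this prints False, the wheel lacks NCCL and torchrun with the
# "nccl" backend will raise the error above; reinstall a CUDA build.
print("NCCL available:", dist.is_nccl_available())
print("CUDA available:", torch.cuda.is_available())
print("torch version :", torch.__version__)
```

If NCCL is unavailable but you only need single-GPU inference, switching `init_process_group("nccl")` in example.py to the `"gloo"` backend may also get you past the crash, though the proper fix is installing the matching CUDA build of torch.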
In case it helps others, I worked around this problem by:
- recreating the llama_adapter conda env
- installing the appropriate torch build for my machine first (install command picked from here)
- removing the torch entry from requirements.txt
- running pip install -r requirements.txt
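The steps above, sketched as shell commands. The env name, Python version, and CUDA index URL are assumptions for illustration; pick the exact install command for your driver from pytorch.org:

```shell
# recreate the conda env from scratch
conda create -n llama_adapter python=3.8 -y
conda activate llama_adapter

# install the torch build matching your CUDA driver FIRST
# (cu121 is just an example; use the command pytorch.org gives you)
pip install torch --index-url https://download.pytorch.org/whl/cu121

# drop the pinned torch line so pip doesn't overwrite the CUDA build
sed -i '/^torch/d' requirements.txt

# install the remaining dependencies
pip install -r requirements.txt
```

Installing torch before the rest matters: if requirements.txt pins a torch version, pip would otherwise replace your CUDA build with a wheel that may lack NCCL.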