Comments (4)
Sorry, but with ipex-llm==2.5.0b20240421 and bigdl-core-xe-21==2.5.0b20240421 I can't reproduce this issue on either Arc or MTL (at least load_low_bit works fine):
(sgwhat-llm) D:\jinqiao\ipex-llm\python\llm\example\GPU\HF-Transformers-AutoModels\Model\qwen>python generate.py
bin C:\Users\arda\miniconda3\envs\sgwhat-llm\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so
C:\Users\arda\miniconda3\envs\sgwhat-llm\lib\site-packages\bitsandbytes\cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
function 'cadam32bit_grad_fp32' not found
C:\Users\arda\miniconda3\envs\sgwhat-llm\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: 'Could not find module 'C:\Users\arda\miniconda3\envs\sgwhat-llm\Lib\site-packages\torchvision\image.pyd' (or one of its dependencies). Try using the full path with constructor syntax.'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
2024-04-22 13:48:15,416 - INFO - intel_extension_for_pytorch auto imported
Loading checkpoint shards: 100%|████████████████| 8/8 [00:00<00:00, 9.34it/s]
2024-04-22 13:48:16,649 - INFO - Converting the current model to sym_int4 format......
-------------------- Prompt --------------------
<|im_start|>system
You are a helpful assistant.
<|im_end|>
<|im_start|>user
AI是什么?
<|im_end|>
<|im_start|>assistant
-------------------- Output --------------------
system
You are a helpful assistant.
user
AI是什么?
assistant
AI(人工智能)是指由计算机系统模拟、延伸和扩展人类智能的一门技术。它的目标是使计算机系统具有学习能力、推理能力和决策
Passing _fast_init=False, the model can be loaded normally:
model = AutoModelForCausalLM.load_low_bit(model_path, _fast_init=False, trust_remote_code=True, optimize_model=True).eval()
Shall we update our save/load example to explicitly add this parameter?
Yes, and I think we can do this inside our load_low_bit API itself (so that users will not notice the slight change and no examples will need updating). Since all saved low-bit weights are already processed and need no initialization, this will not affect current functionality.
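A minimal sketch of what setting the default inside the API could look like. This is a hypothetical wrapper for illustration only, not the actual ipex-llm implementation; the function name and return value are stand-ins:

```python
def load_low_bit(model_path, **kwargs):
    # Hypothetical sketch: default _fast_init to False inside the API,
    # since saved low-bit weights are already fully processed and need
    # no random re-initialization on load. setdefault leaves any value
    # an explicit caller passes untouched.
    kwargs.setdefault("_fast_init", False)
    # ... the real implementation would forward model_path and kwargs
    # to the underlying transformers loading logic here ...
    return model_path, kwargs

# Callers keep their existing code unchanged:
path, kw = load_low_bit("qwen-low-bit", trust_remote_code=True, optimize_model=True)
```

With this default in place, existing examples work as-is, and a caller can still opt back in by passing _fast_init=True explicitly.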