Comments (9)
Hi, I have encountered this issue before. Could you install oneAPI 2024.0 and try again? Hope that helps.
from bigdl.
The main issue is the error above.
Thank you for your response. I have already installed 2024.0.
Hi @K-Alex13, IPEX-LLM currently only supports oneAPI 2024.0. You can activate oneAPI 2024.0 and try again.
The problem still exists. Could this be because I forgot to uninstall oneAPI 2024.1 and just installed oneAPI 2024.0 alongside it?
Hi, I would suggest you source the 2024.0 environment. Try cd-ing into /opt/intel and check which versions are installed.
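To make the suggestion above concrete, here is a minimal shell sketch for checking which oneAPI versions are present and activating 2024.0. It assumes the default /opt/intel/oneapi install prefix; with the unified layout introduced in 2024.0, each version also ships its own activation script.

```shell
# List the installed oneAPI components and version directories
# (default install prefix; adjust if you installed elsewhere).
ls /opt/intel/oneapi

# If both 2024.0 and 2024.1 are installed, source the 2024.0
# environment explicitly rather than the generic setvars.sh,
# which may pick up the newest version:
source /opt/intel/oneapi/2024.0/oneapi-vars.sh

# Sanity check: the compiler/runtime paths should now point at 2024.0.
echo "$CMPLR_ROOT"
```

This only affects the current shell, so run your Python workload from the same terminal afterwards.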
Hi @K-Alex13, have you tried sourcing the 2024.0 version of oneAPI? If that does not work, would you mind sending the output of ldd /path/to/libtorch_cpu.so (copy the path from the last line of the error message you pasted)?
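The point of the ldd request is that ldd marks any dependency the loader cannot resolve with "not found"; an unsourced oneAPI environment typically shows up as missing MKL/compiler runtime libraries. A small sketch for filtering those lines out of the output (the sample output below is hypothetical, not taken from this issue):

```python
import subprocess

def missing_deps(ldd_output: str) -> list[str]:
    """Return names of shared libraries that ldd reported as 'not found'."""
    missing = []
    for line in ldd_output.splitlines():
        if "not found" in line:
            # Unresolved entries look like: "\tlibfoo.so => not found"
            missing.append(line.split("=>")[0].strip())
    return missing

def run_ldd(path: str) -> list[str]:
    """Run ldd on the given library and return its missing dependencies."""
    out = subprocess.run(["ldd", path], capture_output=True, text=True)
    return missing_deps(out.stdout)

# Hypothetical ldd output for libtorch_cpu.so with oneAPI not sourced:
sample = (
    "\tlinux-vdso.so.1 (0x00007ffd8a9f2000)\n"
    "\tlibmkl_intel_lp64.so.2 => not found\n"
    "\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1a2c000000)\n"
)
print(missing_deps(sample))  # -> ['libmkl_intel_lp64.so.2']
```

If run_ldd on your libtorch_cpu.so returns an empty list after sourcing oneAPI 2024.0, the loader issue is resolved.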
This is the new error; I am not sure what happened. I am using oneAPI 2024.0 with the following packages:
pip install torch-2.1.0a0+cxx11.abi-cp39-cp39-linux_x86_64.whl
pip install torchvision-0.16.0a0+cxx11.abi-cp39-cp39-linux_x86_64.whl
pip install intel_extension_for_pytorch-2.1.10+xpu-cp39-cp39-linux_x86_64.whl
pip install --pre --upgrade bigdl-llm[xpu]
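After installing wheels like these, a quick sanity check is to activate oneAPI in the same shell and confirm that torch and IPEX import cleanly and report the expected versions. A sketch, assuming the default /opt/intel/oneapi install location:

```shell
# Activate oneAPI 2024.0 in this shell first.
source /opt/intel/oneapi/setvars.sh

# Verify the Python stack loads and print the installed versions;
# an import error here usually means a library/runtime mismatch.
python -c "import torch; import intel_extension_for_pytorch as ipex; \
print(torch.__version__, ipex.__version__)"
```

For the wheels listed above this should print torch 2.1.0a0 and IPEX 2.1.10+xpu; anything else suggests a mixed environment.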