leon-sander / local_multimodal_ai_chat
License: GNU General Public License v3.0
After installing everything according to the repo instructions, I ran into the following error when trying to load a PDF file for analysis.
The following commands fixed the problem; I did not need to change the langchain version.
pip uninstall sentence-transformers
pip install sentence-transformers==2.2.2
Then I hit another error, which I raised in a separate issue.
I am getting this error. Any idea what might be the issue?
llm_chains.py", line 43, in load_vectordb
persistent_client = chromadb.PersistentClient(chromadb)
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'chromadb' has no attribute 'PersistentClient'
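Two things may be going on in that line. First, the call site passes the chromadb *module* itself as the argument, while PersistentClient expects a filesystem path string. Second, if the attribute is missing entirely, the installed chromadb predates 0.4, where PersistentClient was introduced (pip install --upgrade chromadb). A minimal sketch of the intended call, with a hypothetical persistence directory:

```python
def chroma_persist_path(base_dir: str = "chroma_db") -> str:
    # Hypothetical on-disk location for the Chroma collection; adjust to
    # whatever directory the project's config uses.
    return base_dir

# Corrected call (requires chromadb >= 0.4, which introduced PersistentClient):
# import chromadb
# persistent_client = chromadb.PersistentClient(path=chroma_persist_path())
```

The key change is `path=...` receiving a string, not the module.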
Great, really great project! Can I suggest you consider Ollama, to allow alternative models to be tested out? It would also help folks less familiar with downloading models from HF, etc.
I get this error and could not find a solution for it. Can anyone help me, please?
$ python3 test.py
Traceback (most recent call last):
File "/home/bakil/demo_bitirme/local_multimodal_ai_chat/test.py", line 4, in <module>
vector_db = load_vectordb(create_embeddings())
^^^^^^^^^^^^^^^^^^^
File "/home/bakil/demo_bitirme/local_multimodal_ai_chat/llm_chains.py", line 25, in create_embeddings
return HuggingFaceInstructEmbeddings(model_name=embeddings_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bakil/demo_bitirme/local_multimodal_ai_chat/chat_venv/lib/python3.11/site-packages/langchain_community/embeddings/huggingface.py", line 158, in __init__
self.client = INSTRUCTOR(
^^^^^^^^^^^
File "/home/bakil/demo_bitirme/local_multimodal_ai_chat/chat_venv/lib/python3.11/site-packages/sentence_transformers/SentenceTransformer.py", line 194, in __init__
modules = self._load_sbert_model(
^^^^^^^^^^^^^^^^^^^^^^^
TypeError: INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'
Hi!
I have updated the code from the YouTube video to the new version published here on GitHub, and I am having this problem with CUDA. My GPU is an Intel Iris Xe Graphics. With the code from the video it works correctly and, although it takes 10 minutes to respond, it does eventually respond correctly. Now the chatbot renders fine when I run streamlit run app.py, but as soon as I type something to get a response, I get this error and the browser shows Streamlit as disconnected. Any advice?
I can't get it to respond well anymore and I don't know what other solutions to try.
I post screenshots of the error below.
Thank you in advance!
Hey Leon! Thank you so much for taking the time to put all this together! I had exactly the same project in mind and am really glad I came across your YouTube video!
When trying to run this on Windows, I had errors creating a new chat session because the filename contained the character ':'. I resolved this with a very basic fix: importing datetime into app.py and changing the save_chat_history() function to the below:
def save_chat_history():
    if st.session_state.history != []:
        if st.session_state.session_key == "new_session":
            now = datetime.now()  # current date and time
            st.session_state.new_session_key = now.strftime("%m%d%Y%H%M%S") + ".json"
            save_chat_history_json(st.session_state.history, config["chat_history_path"] + st.session_state.new_session_key)
        else:
            save_chat_history_json(st.session_state.history, config["chat_history_path"] + st.session_state.session_key)
Thought I'd create an issue just in case someone else has the same problem. Thank you again for the time and effort you put into this.
Hi, thanks for the awesome video. I ran the code and got the following error when importing a PDF file:
TypeError: INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'
Traceback:
File "...\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "...\app.py", line 149, in <module>
main()
File "...\app.py", line 91, in main
add_documents_to_db(uploaded_pdf)
File "...\pdf_handler.py", line 31, in add_documents_to_db
vector_db = load_vectordb(create_embeddings())
File "...\llm_chains.py", line 25, in create_embeddings
return HuggingFaceInstructEmbeddings(model_name=embeddings_path)
File "...\lib\site-packages\langchain_community\embeddings\huggingface.py", line 153, in __init__
self.client = INSTRUCTOR(
File "...\lib\site-packages\sentence_transformers\SentenceTransformer.py", line 194, in __init__
modules = self._load_sbert_model(
Importing an image works very well, though. Please help with this error.
I am working on Windows, and here the chat session JSON files are saved in a different format: the message dictionaries do not have the key "type" with the values "human" or "ai".
Example
[
  {"content": "hi", "additional_kwargs": {}, "example": false},
  {"content": "hello I am ai Chatbot", "additional_kwargs": {}, "example": false}
]
The load_chat_history_json() function in utils.py is affected by this.
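One way to tolerate both formats is to fall back on message order when "type" is absent (even index = human, odd = AI). The sketch below is hedged: the real utils.py presumably builds langchain message objects, while this version returns plain (role, content) tuples, and the exact schema is inferred from the example above.

```python
import json
from typing import List, Tuple

def load_chat_history_json(path: str) -> List[Tuple[str, str]]:
    """Load a chat history file, tolerating entries without a "type" field."""
    with open(path, "r", encoding="utf-8") as f:
        raw = json.load(f)
    history = []
    for i, msg in enumerate(raw):
        # When "type" is missing, guess the role from alternating order.
        role = msg.get("type") or ("human" if i % 2 == 0 else "ai")
        history.append((role, msg["content"]))
    return history
```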
Error when loading a PDF (via drag and drop or through browse for file).
"load INSTRUCTOR_Transformer
max_seq_length 512........"
OSError: [WinError -529697949] Windows Error 0xe06d7363
Traceback:
File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\chat_venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\app.py", line 118, in <module>
main()
File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\app.py", line 94, in main
llm_answer = handle_image(uploaded_image.getvalue(), st.session_state.user_question)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\image_handler.py", line 11, in handle_image
chat_handler = Llava15ChatHandler(clip_model_path="./models/llava/ggml-model-q5_k.gguf")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\chat_venv\Lib\site-packages\llama_cpp\llama_chat_format.py", line 1235, in __init__
self.clip_ctx = self._llava_cpp.clip_model_load(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\chat_venv\Lib\site-packages\llama_cpp\llava_cpp.py", line 174, in clip_model_load
return _libllava.clip_model_load(fname, verbosity)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All file paths and code are as stated in the video; I tried searching on Stack Overflow but found no proper solution.
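A hedged guess at the cause: in llama-cpp-python, Llava15ChatHandler's clip_model_path expects the CLIP projector weights (conventionally named mmproj-*.gguf), while ggml-model-q5_k.gguf is the LLaVA language model itself, and loading the wrong file can crash inside clip_model_load with an opaque Windows error. A small sanity check on the file name, plus the intended wiring (the exact file names are assumptions, so match them to your downloads):

```python
from pathlib import Path

def looks_like_mmproj(path: str) -> bool:
    # CLIP projector files for LLaVA are conventionally named mmproj-*.gguf
    name = Path(path).name
    return name.startswith("mmproj") and name.endswith(".gguf")

# Intended wiring (assumption -- adjust paths to your model downloads):
# chat_handler = Llava15ChatHandler(clip_model_path="./models/llava/mmproj-model-f16.gguf")
# llm = Llama(model_path="./models/llava/ggml-model-q5_k.gguf",
#             chat_handler=chat_handler)
```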
Hello,
Is this code written to run only on the CPU? I don't think the GPU is being used, and the response time is very slow.
If it is CPU-only for now, can you suggest the changes (device, gpu_layers) I would need to make to run it on the GPU?
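A sketch under the assumption that the project loads its LLM through ctransformers (the config keys in llm_chains.py suggest so). ctransformers accepts a gpu_layers option that offloads that many transformer layers to the GPU; 0, the default, keeps everything on the CPU. The other settings and the model path below are placeholders:

```python
# Assumed shape of the model_config the project passes through to ctransformers.
model_config = {
    "max_new_tokens": 256,   # placeholder for existing settings
    "context_length": 2048,
    "gpu_layers": 32,        # new: number of layers to offload to the GPU
}

# Assumed usage (requires a CUDA/Metal build of ctransformers):
# from ctransformers import AutoModelForCausalLM
# llm = AutoModelForCausalLM.from_pretrained(
#     "models/mistral-7b-instruct.gguf", model_type="mistral", **model_config)
```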
TypeError: INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'
Traceback:
File "C:\Users\mihya\Desktop\local_multimodal_ai_chat-main\chat_venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "C:\Users\mihya\Desktop\local_multimodal_ai_chat-main\app.py", line 133, in <module>
main()
File "C:\Users\mihya\Desktop\local_multimodal_ai_chat-main\app.py", line 76, in main
add_documents_to_db(uploaded_pdf)
File "C:\Users\mihya\Desktop\local_multimodal_ai_chat-main\pdf_handler.py", line 29, in add_documents_to_db
vector_db = load_vectordb(create_embeddings())
^^^^^^^^^^^^^^^^^^^
File "C:\Users\mihya\Desktop\local_multimodal_ai_chat-main\llm_chains.py", line 25, in create_embeddings
return HuggingFaceInstructEmbeddings(model_name=embeddings_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mihya\Desktop\local_multimodal_ai_chat-main\chat_venv\Lib\site-packages\langchain_community\embeddings\huggingface.py", line 153, in __init__
self.client = INSTRUCTOR(
^^^^^^^^^^^
File "C:\Users\mihya\Desktop\local_multimodal_ai_chat-main\chat_venv\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 197, in __init__
modules = self._load_sbert_model(
^^^^^^^^^^^^^^^^^^^^^^^
Hi. Is it possible to download the embeddings model locally, and how do I tweak the code to use it? I am currently on a work laptop with network restrictions, so I have to download the embeddings model and use it offline. Thank you.
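A sketch, assuming the project uses the hkunlp/instructor-large embedding model (the actual repo id in the project's config may differ). Download the model once on a machine with network access, copy the folder over, then point the embeddings path at the local directory:

```python
from pathlib import Path

def resolve_embeddings_path(local_dir: str = "models/instructor-large") -> str:
    # Prefer a local copy when it exists (offline / restricted machines);
    # otherwise fall back to the Hugging Face repo id (assumed model).
    return local_dir if Path(local_dir).exists() else "hkunlp/instructor-large"

# On the machine with network access (assumes huggingface_hub is installed):
# from huggingface_hub import snapshot_download
# snapshot_download(repo_id="hkunlp/instructor-large",
#                   local_dir="models/instructor-large")
```

HuggingFaceInstructEmbeddings(model_name=...) accepts a local directory path just as it accepts a repo id, so only the path needs to change.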
I get this error when I try to upload files. Any idea how to fix it? Thanks
File "C:\Projects\AIChat\chat_venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "C:\Projects\AIChat\app.py", line 149, in <module>
main()
File "C:\Projects\AIChat\app.py", line 91, in main
add_documents_to_db(uploaded_pdf)
File "C:\Projects\AIChat\pdf_handler.py", line 31, in add_documents_to_db
vector_db = load_vectordb(create_embeddings())
^^^^^^^^^^^^^^^^^^^
File "C:\Projects\AIChat\llm_chains.py", line 25, in create_embeddings
return HuggingFaceInstructEmbeddings(model_name=embeddings_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\AIChat\chat_venv\Lib\site-packages\langchain_community\embeddings\huggingface.py", line 153, in __init__
self.client = INSTRUCTOR(
^^^^^^^^^^^
File "C:\Projects\AIChat\chat_venv\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 190, in __init__
modules = self._load_sbert_model(
^^^^^^^^^^^^^^^^^^^^^^^
TypeError: INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'
Hi Leon!
PS C:\Users\Admin> & C:/Users/Admin/AppData/Local/Programs/Python/Python311/python.exe c:/Users/Admin/Desktop/local_multimodal_ai_chat-main/local_multimodal_ai_chat-main/llm_chains.py
Traceback (most recent call last):
File "c:\Users\Admin\Desktop\local_multimodal_ai_chat-main\local_multimodal_ai_chat-main\llm_chains.py", line 20, in <module>
def create_llm(model_path = config["ctransformers"]["model_path"]["large"], model_type = config["transformers"]["model_type"], model_config = config["ctransformers"]["model_config"]):
~~~~~~^^^^^^^^^^^^^^^^^
KeyError: 'ctransformers'
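The KeyError means config.yaml has no top-level ctransformers key on that machine. Note also that the def line above mixes config["ctransformers"] and config["transformers"]; whichever spelling the code uses must exist in the YAML. A hypothetical shape that the signature above would accept (key names inferred from the traceback, values are placeholders):

```yaml
# hypothetical config.yaml fragment -- adjust paths and values to your setup
ctransformers:
  model_path:
    large: "models/mistral-7b-instruct-v0.1.Q5_K_M.gguf"
  model_type: "mistral"
  model_config:
    max_new_tokens: 256
    context_length: 2048
```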
Any specific changes required for CPU-only inference? I don't have a GPU...
I have a problem when using the PDF chat:
TypeError: load_retrieval_chain() missing 1 required positional argument: 'vector_db'
Traceback:
File "C:\Users\hcy53\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "C:\Users\hcy53\local_multimodal_ai_chat-main\app.py", line 151, in <module>
main()
File "C:\Users\hcy53\local_multimodal_ai_chat-main\app.py", line 131, in main
llm_chain = load_chain()
File "C:\Users\hcy53\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 212, in wrapper
return cached_func(*args, **kwargs)
File "C:\Users\hcy53\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 241, in __call__
return self._get_or_create_cached_value(args, kwargs)
File "C:\Users\hcy53\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 268, in _get_or_create_cached_value
return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
File "C:\Users\hcy53\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 324, in _handle_cache_miss
computed_value = self._info.func(*func_args, **func_kwargs)
File "C:\Users\hcy53\local_multimodal_ai_chat-main\app.py", line 35, in load_chain
return load_pdf_chat_chain()
File "C:\Users\hcy53\local_multimodal_ai_chat-main\llm_chains.py", line 45, in load_pdf_chat_chain
return pdfChatChain()
File "C:\Users\hcy53\local_multimodal_ai_chat-main\llm_chains.py", line 55, in __init__
self.llm_chain = load_retrieval_chain(llm, vector_db)
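The error says load_retrieval_chain was called with only one of its two required arguments, so vector_db was never built inside pdfChatChain. A hedged sketch of the likely fix, with tiny stand-ins for the real helpers in llm_chains.py so it runs on its own (the actual class layout is an assumption):

```python
# Stand-ins for the real helpers in llm_chains.py, so this sketch executes:
def create_embeddings(): return "embeddings"
def load_vectordb(embeddings): return f"vectordb({embeddings})"
def create_llm(): return "llm"
def load_retrieval_chain(llm, vector_db): return (llm, vector_db)

class pdfChatChain:
    def __init__(self):
        # Build the vector DB first -- the traceback shows it was missing
        # at the load_retrieval_chain call site.
        vector_db = load_vectordb(create_embeddings())
        llm = create_llm()
        self.llm_chain = load_retrieval_chain(llm, vector_db)
```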
ValueError: '2024-03-26 11:28:16' is not in list
Traceback:
File "C:\Users\Chirag\OneDrive\Desktop\chatbot2\venv\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "C:\Users\Chirag\OneDrive\Desktop\chatbot2\local_multimodal_ai_chat\app.py", line 149, in <module>
main()
File "C:\Users\Chirag\OneDrive\Desktop\chatbot2\local_multimodal_ai_chat\app.py", line 70, in main
index = chat_sessions.index(st.session_state.session_index_tracker)
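The ValueError means session_index_tracker holds a value (here a timestamp) that is not among the saved sessions, e.g. after a history file was renamed or deleted. A hedged sketch of a guarded lookup that falls back instead of raising (the "new_session" sentinel mirrors the code shown elsewhere in this thread):

```python
def safe_session_index(chat_sessions, tracker, default="new_session"):
    """Return the index of tracker in chat_sessions, falling back gracefully."""
    if tracker in chat_sessions:
        return chat_sessions.index(tracker)
    # Fall back to the "new_session" entry, or the first item if absent.
    return chat_sessions.index(default) if default in chat_sessions else 0
```

In app.py this would replace the bare chat_sessions.index(...) call.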
Excellent tutorial and thanks a lot for sharing the knowledge.
if send_button or st.session_state.send_input:
    if uploaded_image:
        with st.spinner("Processing image..."):
            user_message = "Describe this image in detail please."
            if st.session_state.user_question != "":
                user_message = st.session_state.user_question
                st.session_state.user_question = ""
            llm_answer = handle_image(uploaded_image.getvalue(), st.session_state.user_question)  # ---------->>>>
            chat_history.add_user_message(user_message)
            chat_history.add_ai_message(llm_answer)
Referring to the line marked in the code snippet above, user_message should be passed as input to the handle_image function.
Currently, st.session_state.user_question is passed, and it carries an empty string in both cases (it has either just been cleared or was never set). The model accepts the empty string and does not throw an error; it simply ignores the message entered in the prompt.
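The fix described above, as a small self-contained sketch: capture the question in user_message before the session-state field is cleared, then pass user_message to handle_image. The describe_image wrapper and its parameters are hypothetical; only the argument swap reflects the suggested change.

```python
def describe_image(image_bytes, user_question, handle_image):
    # Default prompt when the user typed nothing, mirroring app.py
    user_message = "Describe this image in detail please."
    if user_question != "":
        user_message = user_question
        # (app.py clears st.session_state.user_question at this point)
    # Pass user_message, not the now-cleared session-state question
    return handle_image(image_bytes, user_message)
```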
I am working on Windows and encountered this TypeError: INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'. I guess this error is due to sentence-transformers version issues. I am currently using sentence-transformers 2.3.1; previously I used 2.2.2, which caused an ImportError stating that the dependencies of InstructorEmbedding were not found. I suspect the root issue is the sentence-transformers version.
I was following along with the video but started to have issues with the chat history section, specifically with the JSON handling. I decided to just copy the files and run them, as I will admit I am new to all of this.
Anyhow, when I run it, I get the error below.
File "C:\xxxxx\database_operations.py", line 96, in get_all_chat_history_ids
cursor.execute(query)
sqlite3.OperationalError: no such table: messages
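This error means the SQLite database file exists but was never initialized with a messages table. A hedged sketch of an init step that creates it idempotently; the column layout below is an assumption, not the project's actual schema in database_operations.py:

```python
import sqlite3

def init_db(db_path: str = "chat_sessions.db") -> None:
    """Create the messages table if the database was never initialized."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS messages (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   chat_history_id TEXT,
                   sender TEXT,
                   content TEXT
               )"""
        )
```

Calling init_db() once at app startup (before any query touches messages) would avoid the OperationalError.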
When I try to load a book in PDF format (around 6.5 MB), it still shows "processing" after more than 90 minutes.
Can you please help me speed up PDF loading?
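One hedged idea for the stall: embedding every chunk of a large book in a single call gives no feedback and can thrash memory. Splitting the chunks into small batches and adding them to the vector DB incrementally at least shows progress and bounds memory. The batching helper below is plain Python; the Chroma wiring in the comments is an assumption about the project's API.

```python
def batched(items, batch_size=50):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Assumed wiring into the project's PDF handler:
# for n, batch in enumerate(batched(document_chunks), start=1):
#     vector_db.add_documents(batch)   # assumed langchain Chroma API
#     print(f"indexed batch {n} ({len(batch)} chunks)")
```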
Following the tutorial, I have implemented the code along with additional features. However, I am encountering difficulties understanding the changes made in the project's main branch with respect to the tutorial.
While I would prefer not to clone the repository due to my existing code modifications, I am eager to integrate the improvements from the main branch, while understanding them as well.
Could you please suggest a suitable approach to achieve this integration without losing my local changes?
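One common approach is to shelve the local edits, merge the updated upstream branch, then re-apply them. The sketch below demonstrates the flow end-to-end in a scratch directory; in a real checkout you would only run the three commands marked [*], and resolve any merge conflicts git reports.

```shell
set -e
scratch=$(mktemp -d) && cd "$scratch"

# Stand-in for the upstream GitHub repo
git init -q upstream && cd upstream
git config user.email you@example.com && git config user.name you
echo "v1" > app.py && git add app.py && git commit -qm "initial"
cd "$scratch" && git clone -q upstream work

# Upstream moves ahead while you edit locally
cd "$scratch/upstream" && echo "v2" >> app.py && git commit -qam "upstream update"
cd "$scratch/work"
git config user.email you@example.com && git config user.name you
echo "my tweak" > notes.txt && git add notes.txt

git stash -q      # [*] shelve local changes
git pull -q       # [*] fetch and merge the updated upstream branch
git stash pop -q  # [*] re-apply local changes; resolve conflicts if any
```

An alternative that preserves more history is committing the local work to a branch and merging origin/main into it, which makes the differences reviewable commit by commit.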