
tomasonjo / blogs

840 stars · 30 watchers · 258 forks · 23.18 MB

Jupyter notebooks that support my graph data science blog posts at https://bratanic-tomaz.medium.com/

Languages: Jupyter Notebook 99.96%, Python 0.04%
Topics: graph-algorithms, graph, data-science, neo4j

blogs's People

Contributors: mfyuce, mneedham, tomasonjo


blogs's Issues

Final Query Throws Error - https://github.com/tomasonjo/blogs/blob/master/llm/neo4jvector_langchain_deepdive.ipynb?ref=blog.langchain.dev

When working through your notebook, the final query:

```python
existing_index_return.similarity_search("What do you know about LangChain?", k=1)
```

throws an error:

```
ClientError: {code: Neo.ClientError.Procedure.ProcedureCallFailed} {message: Failed to invoke procedure `db.index.vector.queryNodes`: Caused by: java.lang.IllegalArgumentException: 'numberOfNearestNeighbours' must be positive}
```

I'm not sure if this is due to a change in Neo4j's implementation. The full stack trace is below:

```
---------------------------------------------------------------------------
ClientError                               Traceback (most recent call last)
Cell In[29], line 1
----> 1 existing_index_return.similarity_search("What do you know about LangChain?", k=1)

File ~/anaconda3/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:530, in Neo4jVector.similarity_search(self, query, k, **kwargs)
    520 """Run similarity search with Neo4jVector.
    521 
    522 Args:
   (...)
    527     List of Documents most similar to the query.
    528 """
    529 embedding = self.embedding.embed_query(text=query)
--> 530 return self.similarity_search_by_vector(
    531     embedding=embedding,
    532     k=k,
    533     query=query,
    534 )

File ~/anaconda3/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:625, in Neo4jVector.similarity_search_by_vector(self, embedding, k, **kwargs)
    610 def similarity_search_by_vector(
    611     self,
    612     embedding: List[float],
    613     k: int = 4,
    614     **kwargs: Any,
    615 ) -> List[Document]:
    616     """Return docs most similar to embedding vector.
    617 
    618     Args:
   (...)
    623         List of Documents most similar to the query vector.
    624     """
--> 625     docs_and_scores = self.similarity_search_with_score_by_vector(
    626         embedding=embedding, k=k, **kwargs
    627     )
    628     return [doc for doc, _ in docs_and_scores]

File ~/anaconda3/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:594, in Neo4jVector.similarity_search_with_score_by_vector(self, embedding, k, **kwargs)
    585 read_query = _get_search_index_query(self.search_type) + retrieval_query
    586 parameters = {
    587     "index": self.index_name,
    588     "k": k,
   (...)
    591     "query": kwargs["query"],
    592 }
--> 594 results = self.query(read_query, params=parameters)
    596 docs = [
    597     (
    598         Document(
   (...)
    606     for result in results
    607 ]
    608 return docs

File ~/anaconda3/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:241, in Neo4jVector.query(self, query, params)
    239 try:
    240     data = session.run(query, params)
--> 241     return [r.data() for r in data]
    242 except CypherSyntaxError as e:
    243     raise ValueError(f"Cypher Statement is not valid\n{e}")

File ~/anaconda3/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:241, in <listcomp>(.0)
    239 try:
    240     data = session.run(query, params)
--> 241     return [r.data() for r in data]
    242 except CypherSyntaxError as e:
    243     raise ValueError(f"Cypher Statement is not valid\n{e}")

File ~/anaconda3/lib/python3.11/site-packages/neo4j/_sync/work/result.py:266, in Result.__iter__(self)
    264     yield self._record_buffer.popleft()
    265 elif self._streaming:
--> 266     self._connection.fetch_message()
    267 elif self._discarding:
    268     self._discard()

File ~/anaconda3/lib/python3.11/site-packages/neo4j/_sync/io/_common.py:180, in ConnectionErrorHandler.__getattr__.<locals>.outer.<locals>.inner(*args, **kwargs)
    178 def inner(*args, **kwargs):
    179     try:
--> 180         func(*args, **kwargs)
    181     except (Neo4jError, ServiceUnavailable, SessionExpired) as exc:
    182         assert not asyncio.iscoroutinefunction(self.__on_error)

File ~/anaconda3/lib/python3.11/site-packages/neo4j/_sync/io/_bolt.py:851, in Bolt.fetch_message(self)
    847 # Receive exactly one message
    848 tag, fields = self.inbox.pop(
    849     hydration_hooks=self.responses[0].hydration_hooks
    850 )
--> 851 res = self._process_message(tag, fields)
    852 self.idle_since = perf_counter()
    853 return res

File ~/anaconda3/lib/python3.11/site-packages/neo4j/_sync/io/_bolt5.py:376, in Bolt5x0._process_message(self, tag, fields)
    374 self._server_state_manager.state = self.bolt_states.FAILED
    375 try:
--> 376     response.on_failure(summary_metadata or {})
    377 except (ServiceUnavailable, DatabaseUnavailable):
    378     if self.pool:

File ~/anaconda3/lib/python3.11/site-packages/neo4j/_sync/io/_common.py:247, in Response.on_failure(self, metadata)
    245 handler = self.handlers.get("on_summary")
    246 Util.callback(handler)
--> 247 raise Neo4jError.hydrate(**metadata)

ClientError: {code: Neo.ClientError.Procedure.ProcedureCallFailed} {message: Failed to invoke procedure `db.index.vector.queryNodes`: Caused by: java.lang.IllegalArgumentException: 'numberOfNearestNeighbours' must be positive}
```
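The server-side message says `'numberOfNearestNeighbours' must be positive`, which suggests the `k` that actually reaches `db.index.vector.queryNodes` is zero, negative, or of an unexpected type by the time it is forwarded. As a quick client-side sanity check before digging into the driver stack, you could validate the parameter yourself (this `validated_k` helper is hypothetical, not part of LangChain):

```python
def validated_k(k) -> int:
    """Coerce k to an int and reject non-positive values before the
    value would be forwarded to db.index.vector.queryNodes."""
    k = int(k)
    if k <= 0:
        raise ValueError(f"k must be a positive integer, got {k}")
    return k

print(validated_k(1))  # a valid top-1 request
```

If `k=1` passes this check locally, the problem is more likely in how the installed LangChain version builds the procedure call than in the notebook itself.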

Error at # Define random walk query

Following along. I have worked within Neo4j Desktop and the data appears correct.
I am getting an error trying to implement your code for word2vec.

```python
# Define random walk query
random_walks_query = """
MATCH (node)
CALL gds.alpha.randomWalk.stream('all', {
  start: id(node),
  steps: 15,
  walks: 5
})
YIELD nodeIds
// Return the names or the titles
RETURN [id in nodeIds |
  coalesce(gds.util.asNode(id).name,
           gds.util.asNode(id).title)] as walks
"""

# Fetch data from Neo4j
with driver.session() as session:
    walks = session.run(random_walks_query)

# Train the word2vec model
clean_walks = [row['walks'] for row in walks]
model = Word2Vec(clean_walks, sg=1, window=5, size=100)

# Inspect results
model.most_similar('olive oil')
```

I am getting:

```
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'

TypeError                                 Traceback (most recent call last)
in
     20 # Train the word2vec model
     21 clean_walks = [row['walks'] for row in walks]
---> 22 model = Word2Vec(clean_walks, sg=1, window=5, size=100)
     23 # Inspect results
     24 model.most_similar('olive oil')
```
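One hedged reading of the `NoneType + str` error: `coalesce()` in the query above yields null (Python `None`) for any node that has neither a `name` nor a `title`, and `None` tokens inside a walk can trip gensim's internal string handling. A minimal cleaning step before training (the `clean_rows` helper is a sketch, not from the notebook):

```python
def clean_rows(rows):
    """Drop None tokens from each walk; rows are assumed to look like the
    query output above, i.e. dicts with a 'walks' list."""
    cleaned = []
    for row in rows:
        walk = [token for token in row["walks"] if token is not None]
        if walk:  # skip walks that end up empty after filtering
            cleaned.append(walk)
    return cleaned

rows = [{"walks": ["olive oil", None, "salt"]}, {"walks": [None]}]
print(clean_rows(rows))  # [['olive oil', 'salt']]
```

Two other things worth checking: gensim 4 renamed `Word2Vec`'s `size` parameter to `vector_size`, and the driver result may need to be consumed inside the `with driver.session()` block (i.e. move the list comprehension inside it) since records are fetched lazily.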

Movie_recommendations model.encoder() error

I've been following along with your notebook and came across an error, and was wondering if you might have any insight into how to resolve it. I am fairly new to PyTorch and PyG, so I wasn't sure how to fix it. Any advice would be greatly appreciated!

I have replaced the data with my own, but my data is very similar to the movie example you used. I have not had any other errors aside from the following cell (and the training loop).

Cell:

```python
# Due to lazy initialization, we need to run one model step so the number
# of parameters can be inferred:
with torch.no_grad():
    model.encoder(train_data.x_dict, train_data.edge_index_dict)

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
```

Error:

```
ValueError: `MessagePassing.propagate` only supports `torch.LongTensor` of shape `[2, num_messages]` or `torch_sparse.SparseTensor` for argument `edge_index`.
```

P.S.-Your notebook has been really awesome and all the notes are very helpful!

Could not use apoc procedure

I used a URL that worked with GraphDatabase, but it throws the above error. I changed some settings in neo4j.conf. It seems it now also requires an apoc.conf, which I put in the same directory as neo4j.conf; it contains apoc.export.file.enabled=true. When I run SHOW PROCEDURES, apoc.meta.data does not show up. Is the config wrong, or is there another way to use GraphDatabase (which works)? I am trying to use the vector database.

Thanks
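If `apoc.meta.data` does not appear under `SHOW PROCEDURES`, the APOC plugin itself may not be loaded or may be blocked by the procedure allowlist, independently of apoc.conf. A minimal configuration sketch (assumed for a Neo4j 5.x install with the APOC plugin already in the plugins directory; adjust names and paths for your setup):

```properties
# neo4j.conf -- allow APOC procedures to load and run
dbms.security.procedures.unrestricted=apoc.*
dbms.security.procedures.allowlist=apoc.*

# apoc.conf -- placed in the same directory as neo4j.conf
apoc.export.file.enabled=true
apoc.import.file.enabled=true
```

After editing either file, the database must be restarted before `SHOW PROCEDURES` reflects the change.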

JavaNullPointer - Neo.ClientError.Procedure.ProcedureCallFailed - when running cosine similarity for embedding

https://github.com/tomasonjo/blogs/blob/master/llm/Neo4jOpenAIApoc.ipynb
I ran into an error when running the retrieve_context() function:

Neo.ClientError.Procedure.ProcedureCallFailed

After looking into it for a while, I found that it happens with the following code:

```
// retrieve the embedding of the question
CALL apoc.ml.openai.embedding([$question], $apiKey) YIELD embedding
// match relevant movies
MATCH (m:Movie)
WITH m, gds.similarity.cosine(embedding, m.embedding) AS score
ORDER BY score DESC
// limit the number of relevant documents
LIMIT toInteger($k)
```

and then fixed it as below (adding a WHERE clause):

```
// retrieve the embedding of the question
CALL apoc.ml.openai.embedding([$question], $apiKey) YIELD embedding
// match relevant movies
MATCH (m:Movie)
WHERE m.embedding IS NOT NULL AND size(m.embedding) = 1536
WITH m, gds.similarity.cosine(embedding, m.embedding) AS score
ORDER BY score DESC
// limit the number of relevant documents
LIMIT toInteger($k)
```

Thank you very much for the helpful notebook!

AttributeError: 'NoneType' object has no attribute 'nodes'

There are some problems when using process_response and convert_to_graph_documents:

AttributeError: 'NoneType' object has no attribute 'nodes'

in

```python
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0125")  # gpt-4-0125-preview occasionally has issues
llm_transformer = LLMGraphTransformer(llm=llm)
document = Document(page_content="Elon Musk is suing OpenAI")
print(document)
graph_document = llm_transformer.process_response(document)
```

and

```python
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0125")  # gpt-4-0125-preview occasionally has issues
llm_transformer = LLMGraphTransformer(llm=llm)
document = Document(page_content="Elon Musk is suing OpenAI")
print(document)
graph_documents = llm_transformer.convert_to_graph_documents([document])
graph.add_graph_documents(
    graph_documents,
    baseEntityLabel=True,
    include_source=True
)
```

Who can help me?
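For what it's worth, graph extraction can come back with `nodes=None` when the model's structured output fails to parse, which is what triggers the AttributeError downstream. A minimal retry-and-filter guard around the calls above (this `safe_convert` helper is hypothetical, not part of LangChain):

```python
def safe_convert(convert_fn, documents, retries=2):
    """Retry an extraction function and drop results whose .nodes is
    missing or None; convert_fn is assumed to behave like
    LLMGraphTransformer.convert_to_graph_documents."""
    for attempt in range(retries + 1):
        results = convert_fn(documents)
        valid = [g for g in results if getattr(g, "nodes", None)]
        if valid:
            return valid
    raise ValueError("extraction produced no usable graph documents")
```

With the objects from the snippet above, this would be called as `safe_convert(llm_transformer.convert_to_graph_documents, [document])` before handing the results to `graph.add_graph_documents`.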

DevOps RAG

Hey,

it seems this code is not working: it does not generate embeddings in the database. Is there a special configuration I need to do?
It does generate a vector index.

```python
vector_index = Neo4jVector.from_existing_graph(
    OpenAIEmbeddings(),
    url=url,
    username=username,
    password=password,
    database='sss',
    index_name='tasks',
    node_label="Task",
    text_node_properties=['name', 'description', 'status'],
    embedding_node_property='embedding',
)
```
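One way to narrow this down is to check whether the `embedding` property was actually written to the `Task` nodes at all, independently of the index. A diagnostic sketch (the query string and the `missing_embeddings` helper below are illustrative assumptions; run the query with your existing driver/session):

```python
# Counts Task nodes with and without a stored embedding property;
# count(t.embedding) only counts rows where the property is non-null.
CHECK_EMBEDDINGS_QUERY = """
MATCH (t:Task)
RETURN count(t) AS total,
       count(t.embedding) AS with_embedding
"""

def missing_embeddings(record) -> int:
    """record is assumed to be the single result row as a dict,
    e.g. {'total': 10, 'with_embedding': 7} means 3 nodes lack embeddings."""
    return record["total"] - record["with_embedding"]
```

If the count of missing embeddings is the full node count, the write step never ran (e.g. wrong database name or credentials); if it is zero, the problem is on the index/query side instead.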

How to go with HuggingFace* instead of ChatOpenAI?

Hi @tomasonjo , thank you very much for sharing this very informative material.

In this notebook, how could I change

llm = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0)

to

from langchain.llms import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": TEMPERATURE, "max_length": MAX_TOKENS}
)

or any other HuggingFacePipeline, and still make the tutorial work?

Of course, cypher_chain's LLMs would also have to be changed to other pipelines, but I have not gotten there yet.

The error I get is:

File ~/Projects/blogs/openaifunction_constructing_graph.py:277, in extract_and_store_graph(document, nodes, rels)
    274 
    275 extract_chain = get_extraction_chain(nodes, rels)
--> 277 data = extract_chain.run(document.page_content)
    278

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/base.py:507, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    505     if len(args) != 1:
    506         raise ValueError("`run` supports only one positional argument.")
--> 507     return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    508         _output_key
    509     ]
    511 if kwargs and not args:
    512     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    513         _output_key
    514     ]

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)
--> 312     raise e
    313 run_manager.on_chain_end(outputs)
    314 final_outputs: Dict[str, Any] = self.prep_outputs(
    315     inputs, outputs, return_only_outputs
    316 )

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    299 run_manager = callback_manager.on_chain_start(
    300     dumpd(self),
    301     inputs,
    302     name=run_name,
    303 )
    304 try:
    305     outputs = (
--> 306         self._call(inputs, run_manager=run_manager)
    307         if new_arg_supported
    308         else self._call(inputs)
    309     )
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/llm.py:104, in LLMChain._call(self, inputs, run_manager)
     98 def _call(
     99     self,
    100     inputs: Dict[str, Any],
    101     run_manager: Optional[CallbackManagerForChainRun] = None,
    102 ) -> Dict[str, str]:
    103     response = self.generate([inputs], run_manager=run_manager)
--> 104     return self.create_outputs(response)[0]

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/llm.py:258, in LLMChain.create_outputs(self, llm_result)
    256 def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
    257     """Create outputs from response."""
--> 258     result = [
    259         # Get the text of the top generated string.
    260         {
    261             self.output_key: self.output_parser.parse_result(generation),
    262             "full_generation": generation,
    263         }
    264         for generation in llm_result.generations
    265     ]
    266     if self.return_final_only:
    267         result = [{self.output_key: r[self.output_key]} for r in result]

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/llm.py:261, in <listcomp>(.0)
    256 def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
    257     """Create outputs from response."""
    258     result = [
    259         # Get the text of the top generated string.
    260         {
--> 261             self.output_key: self.output_parser.parse_result(generation),
    262             "full_generation": generation,
    263         }
    264         for generation in llm_result.generations
    265     ]
    266     if self.return_final_only:
    267         result = [{self.output_key: r[self.output_key]} for r in result]

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/output_parsers/openai_functions.py:174, in PydanticAttrOutputFunctionsParser.parse_result(self, result, partial)
    173 def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any:
--> 174     result = super().parse_result(result)
    175     return getattr(result, self.attr_name)

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/output_parsers/openai_functions.py:157, in PydanticOutputFunctionsParser.parse_result(self, result, partial)
    156 def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any:
--> 157     _result = super().parse_result(result)
    158     if self.args_only:
    159         pydantic_args = self.pydantic_schema.parse_raw(_result)  # type: ignore

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/output_parsers/openai_functions.py:26, in OutputFunctionsParser.parse_result(self, result, partial)
     24 generation = result[0]
     25 if not isinstance(generation, ChatGeneration):
---> 26     raise OutputParserException(
     27         "This output parser can only be used with a chat generation."
     28     )
     29 message = generation.message
     30 try:

OutputParserException: This output parser can only be used with a chat generation.

Error while running "wd = uc.Chrome(options=options)"

Hi, when I run the following code:


```python
import undetected_chromedriver.v2 as uc
from pyvirtualdisplay import Display

display = Display(visible=0, size=(800, 600))
display.start()

options = uc.ChromeOptions()
options.add_argument("--no-sandbox")
wd = uc.Chrome(options=options)
```


I encounter this error:

```
WebDriverException                        Traceback (most recent call last)
in <cell line: 9>()
      7 options = uc.ChromeOptions()
      8 options.add_argument("--no-sandbox")
----> 9 wd = uc.Chrome(options=options)

4 frames
/usr/local/lib/python3.10/dist-packages/selenium/webdriver/common/service.py in assert_process_still_running(self)
    117     return_code = self.process.poll()
    118     if return_code:
--> 119         raise WebDriverException(f"Service {self.path} unexpectedly exited. Status code was: {return_code}")
    120 
    121 def is_connectable(self) -> bool:

WebDriverException: Message: Service /root/.local/share/undetected_chromedriver/1b2da929686a5f20_chromedriver unexpectedly exited. Status code was: 1
```


I would be thankful if anyone could help me.

error built-in class

Hi

I tried running the code, but I get the following error:

<class '__main__.RebelComponent'> is a built-in class

Would really appreciate your help in sorting it out. Thanks

Embedding similarity search

I created a vector index with a hugging face embedding. I see the embeddings in the graph. The vector_index.similarity_search always returns an empty response. Embeddings.embed_query does give a vector. Am I missing something?
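One common cause worth ruling out (an assumption, since the exact setup isn't specified): a Neo4j vector index only covers vectors whose length matches the dimension the index was created with, so embeddings of a different size are skipped at indexing time and every query comes back empty even though the embeddings are visible on the nodes. A minimal illustration of the mismatch:

```python
def indexable(vectors, index_dim):
    """Return the vectors a (hypothetical) index of dimension index_dim
    would actually cover; mismatched lengths are never indexed."""
    return [v for v in vectors if len(v) == index_dim]

# e.g. a 384-dim sentence-transformers vector vs. a 1536-dim OpenAI vector
stored = [[0.1] * 384, [0.2] * 1536]
print(len(indexable(stored, 1536)))  # 1 -- only the 1536-dim vector is covered
```

Comparing `len(Embeddings.embed_query("test"))` against the dimension the index was created with should confirm or rule this out quickly.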
