
Comments (5)

dosubot commented on June 1, 2024

🤖

Hello @aminolroayaXgen! I'm Dosu, a friendly bot here to help with your questions and issues while we wait for a human maintainer. I'm ready and eager to assist you with your LlamaIndex queries. Let's navigate through this together!

To increase the number of returned nodes in response.source_nodes with LlamaIndex version 0.8.62, you can adjust the max_top_k parameter in the VectorIndexAutoRetriever class. This parameter sets the maximum number of top results that can be returned.

Here is how you can do it:

retriever = VectorIndexAutoRetriever(
    index=my_index,
    vector_store_info=my_vector_store_info,
    max_top_k=50,  # Increase this value as per your requirement
    similarity_top_k=20,  # This is your current setting
)

The max_top_k parameter is set to 10 by default. If you want to increase the number of returned nodes, you need to set this parameter to a higher value.

Please note that the similarity_top_k parameter is also involved in determining the number of returned nodes. However, as you mentioned, changing this parameter didn't affect the number of returned nodes. This is because the actual number of returned nodes is the minimum of similarity_top_k, max_top_k, and spec.top_k (if specified). Therefore, increasing max_top_k should solve your issue.

This is supported by the following code snippet from the context:

# if query_spec.top_k is specified, then use it
# as long as below max_top_k and similarity_top_k
if spec.top_k is not None:
    similarity_top_k = min(spec.top_k, self._max_top_k, similarity_top_k)

You can find more details in the VectorIndexAutoRetriever class in the LlamaIndex repository.
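To see why raising similarity_top_k alone had no effect, the clamping logic quoted above can be replayed in isolation. This is a minimal stand-in for the retriever internals, not the library code; effective_top_k is a hypothetical helper that mirrors the quoted snippet:

```python
def effective_top_k(similarity_top_k, max_top_k, spec_top_k=None):
    """Mirror the clamping in VectorIndexAutoRetriever: when the
    LLM-inferred query spec supplies top_k, it is capped by both
    max_top_k and the configured similarity_top_k."""
    if spec_top_k is not None:
        return min(spec_top_k, max_top_k, similarity_top_k)
    return similarity_top_k

# With the default max_top_k=10, even a large spec top_k and
# similarity_top_k=20 are capped at 10:
print(effective_top_k(similarity_top_k=20, max_top_k=10, spec_top_k=50))  # 10
# Raising max_top_k lifts the cap:
print(effective_top_k(similarity_top_k=20, max_top_k=50, spec_top_k=50))  # 20
```

This is why increasing similarity_top_k alone cannot push the result count past max_top_k when the auto-retriever's query spec provides a top_k.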

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.


dosubot commented on June 1, 2024

Hi, @aminolroayaXgen,

I'm helping the LlamaIndex team manage their backlog and am marking this issue as stale. From what I understand, you were experiencing a limitation with the number of nodes returned in response.source_nodes when using llama-index 0.8.62. After adjusting the max_top_k parameter in the VectorIndexAutoRetriever class, you were able to increase the number of returned nodes, but this resulted in a significant increase in response time. You are now seeking a way to directly retrieve document fields instead of nodes and exploring potential workarounds.

Could you please confirm if this issue is still relevant to the latest version of the LlamaIndex repository? If it is, please let the LlamaIndex team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LlamaIndex project. If you have any further questions or need assistance, feel free to reach out.


ravi03071991 commented on June 1, 2024

@aminolroayaXgen it's working with the following code; you can see in the screenshot that it returns 15 nodes. For debugging, it would help if you shared your code.

from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.text_splitter import SentenceSplitter
from llama_index.ingestion import IngestionPipeline

# Load documents
documents = SimpleDirectoryReader(input_files=['ChatQA_Nvidia.pdf']).load_data()
# create the pipeline with transformations
pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=2000, chunk_overlap=100),
    ]
)
# run the pipeline
nodes = pipeline.run(documents=documents)
# create index
index = VectorStoreIndex(nodes)

query_engine = index.as_chat_engine(chat_mode="react", similarity_top_k=15)
response = query_engine.chat("Who are the authors of paper?")
print(response)
print(len(response.source_nodes))

PS: I am on the latest llama-index version.
[Screenshot (2024-01-20): the query returns 15 source nodes]


aminolroayaXgen commented on June 1, 2024

Thanks @ravi03071991. Now it retrieves the nodes, but the response time has increased into the tens of seconds: it used to be around 4 s and is now 20 s. Maybe that's because I am dumping JSON files as documents when indexing, and I don't want to chunk them. Is there a way to directly retrieve document fields instead of nodes? By the way, the nodes currently have no metadata.


ravi03071991 commented on June 1, 2024

@aminolroayaXgen https://docs.llamaindex.ai/en/latest/examples/query_engine/json_query_engine.html# - did you check this? It might help. Another workaround is to create nodes by reading the JSON files one after another and to use the refine response synthesizer mode, so that each retrieval actually returns the top-k JSON files.
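The second workaround (one node per JSON file, so that top-k retrieval returns whole files) could be sketched roughly as follows. This is a minimal sketch, not the library's API: flatten_json and file_to_text are hypothetical helpers, and the commented-out wrapping assumes TextNode from llama_index.schema as in the 0.8.x/0.9.x releases:

```python
import json
from pathlib import Path

def flatten_json(obj, prefix=""):
    """Recursively flatten a JSON value into 'key.path: value' lines."""
    lines = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            lines.extend(flatten_json(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            lines.extend(flatten_json(value, f"{prefix}{i}."))
    else:
        lines.append(f"{prefix.rstrip('.')}: {obj}")
    return lines

def file_to_text(path):
    """Turn one JSON file into a single text blob (one future node)."""
    data = json.loads(Path(path).read_text())
    return "\n".join(flatten_json(data))

# Each file becomes exactly one node, so similarity_top_k = k
# retrieves k whole JSON documents without chunking:
# from llama_index.schema import TextNode
# nodes = [TextNode(text=file_to_text(p), metadata={"file": p.name})
#          for p in Path("data").glob("*.json")]
# index = VectorStoreIndex(nodes)
```

Because each file maps to exactly one node, the retrieved source nodes correspond directly to whole JSON documents, which also gives each node file-level metadata.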

