Comments (5)
Use the environment variable OPENAI_API_KEY to provide your API key.
e.g. you could set this in Python with: import os; os.environ['OPENAI_API_KEY'] = '<your API key>'
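A minimal sketch of setting the key from Python before anything touches the OpenAI API (the 'sk-...' value is a placeholder, not a real key):

```python
import os

# Set the key before any call that reaches the OpenAI API.
# The value below is a placeholder -- substitute your own key.
os.environ['OPENAI_API_KEY'] = 'sk-...'

print('OPENAI_API_KEY' in os.environ)
```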
from llama_index.
yeah @kebanks2 you should set an api key. you can register an api key here: https://beta.openai.com/account/api-keys
Got it, but I'm still having issues. I updated my code to pass an explicit path to the directory, since I was getting errors that 'data' was not found.
import os
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# Set the API key before constructing the index (building it calls OpenAI).
os.environ['OPENAI_API_KEY'] = "XXXXX"

# Construct the path to the gpt_index/examples/paul_graham_essay directory.
current_dir = os.getcwd()
data_dir = os.path.join(current_dir, 'gpt_index', 'examples', 'paul_graham_essay')

documents = SimpleDirectoryReader(data_dir).load_data()
index = GPTSimpleVectorIndex(documents)
response = index.query("Summarize the first paragraph")
print(response)
C:\Users\10000162\PycharmProjects\pythonProject\venv\Scripts\python.exe C:\Users\10000162\PycharmProjects\pythonProject\main.py
Traceback (most recent call last):
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\urllib3\connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\urllib3\connection.py", line 414, in connect
self.sock = ssl_wrap_socket(
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\urllib3\util\ssl_.py", line 449, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\urllib3\util\ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Users\10000162\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 501, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\10000162\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 1041, in _create
self.do_handshake()
File "C:\Users\10000162\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 1310, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\requests\adapters.py", line 489, in send
resp = conn.urlopen(
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\10000162\PycharmProjects\pythonProject\main.py", line 13, in <module>
index = GPTSimpleVectorIndex(documents)
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\gpt_index\indices\vector_store\simple.py", line 48, in __init__
super().__init__(
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\gpt_index\indices\vector_store\base.py", line 45, in __init__
super().__init__(
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\gpt_index\indices\base.py", line 73, in __init__
self._prompt_helper = prompt_helper or PromptHelper.from_llm_predictor(
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\gpt_index\indices\prompt_helper.py", line 73, in from_llm_predictor
return self(
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\gpt_index\indices\prompt_helper.py", line 51, in __init__
self._tokenizer = tokenizer or globals_helper.tokenizer
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\gpt_index\utils.py", line 37, in tokenizer
enc = tiktoken.get_encoding("gpt2")
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\tiktoken\registry.py", line 63, in get_encoding
enc = Encoding(**constructor())
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\tiktoken_ext\openai_public.py", line 11, in gpt2
mergeable_ranks = data_gym_to_mergeable_bpe_ranks(
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\tiktoken\load.py", line 67, in data_gym_to_mergeable_bpe_ranks
vocab_bpe_contents = read_file_cached(vocab_bpe_file).decode()
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\tiktoken\load.py", line 40, in read_file_cached
contents = read_file(blobpath)
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\tiktoken\load.py", line 18, in read_file
return requests.get(blobpath).content
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\requests\api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\requests\sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\requests\sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "C:\Users\10000162\PycharmProjects\pythonProject\venv\lib\site-packages\requests\adapters.py", line 563, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)')))
Process finished with exit code 1
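For reference, the bottom frames of the traceback show tiktoken downloading the GPT-2 vocabulary at index-build time. A stdlib-only sketch to check whether the TLS failure is specific to gpt_index is to fetch the same URL (taken from the traceback) directly:

```python
import ssl
import urllib.request

url = "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe"
try:
    # The same request tiktoken makes when caching the GPT-2 encoding.
    with urllib.request.urlopen(url, timeout=30) as resp:
        print("OK", resp.status)
except ssl.SSLCertVerificationError as exc:
    # Same certificate-verification failure as in the traceback above.
    print("SSL error:", exc)
except OSError as exc:
    # URLError (which wraps SSL failures) and plain network errors land here.
    print("network error:", exc)
```

If this fails with the same CERTIFICATE_VERIFY_FAILED message, the problem is in the machine's trust store (commonly a TLS-intercepting corporate proxy), not in gpt_index itself.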
Any clue why I might be getting this error?
Fixed w/ pip install python-certifi-win32
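As I understand it, python-certifi-win32 patches certifi to trust certificates from the Windows certificate store, which is why it helps behind a TLS-intercepting proxy. A stdlib-only alternative sketch: inspect which CA locations Python consults, then point requests (which tiktoken uses) at a PEM bundle containing your proxy's root certificate via REQUESTS_CA_BUNDLE. The bundle path below is hypothetical:

```python
import os
import ssl

# Show the default CA file/paths this Python build verifies against.
paths = ssl.get_default_verify_paths()
print(paths.cafile, paths.capath)

# requests honors REQUESTS_CA_BUNDLE; the path below is a placeholder
# for a PEM file that includes your proxy's root CA certificate.
os.environ['REQUESTS_CA_BUNDLE'] = r'C:\certs\corporate-root-ca.pem'
```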
oh thanks for posting the fix! i have a TODO to add this to the docs