therealexpertai / nlapi-python
Stars: 48 · Forks: 48 · Issues: 11 · Size: 277 KB

Python Client for the expert.ai Natural Language API

Home Page: https://developer.expert.ai

License: Apache License 2.0

Language: Python (100.00%)
Topics: expertai, nlp-api, nlu-engine, python

nlapi-python's People

Contributors

acapitani, andreabelli-eai, avarone-github, camato-es, davidbakereffendi, marcobellei-eai, nluninja, sourabhvarshney111, zlatev

nlapi-python's Issues

Cannot send two requests without restarting the kernel

Hello, I'm trying to replicate the API request described in the play_with_expertai_nlapi_v2 notebook.ipynb.
I can successfully run the first request to client.specific_resource_analysis, but if I try to run it again I receive an "Incorrect padding" error.
If I restart the Python kernel, I can correctly invoke the API again (just once).
Can you please help me?
If it helps, I'm using the free account for testing purposes.
Thank you in advance,
Francesca
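
For anyone hitting the same wall, one unconfirmed workaround (an assumption, not a fix: it only helps if the padding error comes from a stale cached token) is to construct a fresh client for each request:

    from expertai.nlapi.cloud.client import ExpertAiClient

    texts = ["first document ...", "second document ..."]
    for text in texts:
        client = ExpertAiClient()  # fresh client, fresh authentication
        document = client.specific_resource_analysis(
            body={"document": {"text": text}},
            params={"language": "en", "resource": "relevants"},
        )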

Issues with importing expertai on Ubuntu 18.04

Hi there,

I've been trying to extract topics from articles using the expert.ai API. I've been doing this successfully on my local Windows 10 PC; however, after taking the script over to my Ubuntu 18.04 VPS I've been getting ModuleNotFoundError: No module named 'expertai'.

I've followed all the instructions given: cloned the repo and installed all dependencies with pip3, and I'm running the script with python3.

Script used:

import os

os.chdir(r"/videogenerators/processing/nlapi-python") # path to cloned directory
from expertai.client import ExpertAiClient

os.environ["EAI_USERNAME"] = "..."
os.environ["EAI_PASSWORD"] = "..."
eai = ExpertAiClient()
language = 'en'
text = "some article..."
response = eai.full_analysis(body={"document": {"text":text}}, params={'language': language})
data = response.json["data"]

for label in data["topics"]:
    if label["winner"]:
        print(label["label"])

I know the path is correct for sure. What else could be the issue?

Cheers,
Selin
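
For what it's worth, a likely cause: os.chdir() changes the process working directory, but Python resolves imports through sys.path, so the interpreter never looks inside the cloned directory. A minimal sketch of two alternatives (the pip package name expertai-nlapi is taken from the issues below):

    import sys

    # Put the cloned repo on the import path instead of chdir-ing into it:
    sys.path.insert(0, "/videogenerators/processing/nlapi-python")
    from expertai.client import ExpertAiClient

    # Or skip the clone entirely and install the published package:
    #   pip3 install expertai-nlapi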

expertai.nlapi.common.errors.ExpertAiRequestError: Response status code: 500 - Python SDK

Successfully installed expertai-nlapi-2.1.3, so I don't think this is the same problem as a previously closed issue.

I have an export from a product (Roam Research) that returns results in a pseudo-Markdown format. I extracted the text and then ran the following code to clean it up a little:

eaiString = eaiString + s.get('string').replace('\n', ' ').replace('\r', ' ').replace('{', ' ').replace('}', ' ') + "\n"

I then called the SDK with the following code:

document = client.specific_resource_analysis(
        body={"document": {"text":text}}, 
        params={'language':'en', 'resource': 'relevants'})

The text is about 6500 lines and 658k characters.

I had submitted a much smaller excerpt earlier without any problems, so I don't think it's an authentication issue.
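
If the failure is a payload-size limit rather than authentication, a hedged workaround is to split the text into smaller chunks and analyze each separately (the chunk size below is an arbitrary guess, not a documented limit):

    CHUNK_SIZE = 9000  # arbitrary; the real server-side limit is not documented here

    def chunks(text, size=CHUNK_SIZE):
        for start in range(0, len(text), size):
            yield text[start:start + size]

    documents = [
        client.specific_resource_analysis(
            body={"document": {"text": chunk}},
            params={'language': 'en', 'resource': 'relevants'},
        )
        for chunk in chunks(text)
    ]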

Failed to fetch the Bearer Token. Error: 500-Internal Server Error

Hi,

I hope I'm not creating more work for you... I tried using version 2.1.1 after applying that patch (see the other issue), and I am now getting an error response of

Failed to fetch the Bearer Token. Error: 500-Internal Server Error

and looking at my account stats, nothing has changed (e.g. the used-characters count). But half an hour ago my testing worked with no auth problems.

So I uninstalled 2.1.1, reinstalled 2.1.0 (and fixed the two lines), and everything works OK; I did nothing else. Those two lines in topic.py surely don't affect communication with the server, so I have no idea what is going on here. The change log only shows topic.py being updated for 2.1.1, so this is a bit strange.

Doug

No topics for one document

Probably not a bug: out of 1600 documents, only this one (after a full analysis) returned an empty set of topics. Other outputs such as mainSyncons and sentences had associated data. A note in the documentation that the output may contain zero topics would be useful.

"I recently stayed at the Hard Rock Hotel in Chicago, Il. From the start, the experience was bad. The room was filthy, there were no towels, and the front desk did nothing to rectify the situation. I will never stay there again. I could not have been more dissatisfied."

I did a quick scan of the list of topics on the developer website, and it does look like none of them would have been triggered by this document. It is an interesting list, and oddly specific for some categories. Are there any plans to expand it, and do you know of Empath? (https://github.com/Ejhfast/empath-client)
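
Until the documentation mentions it, a defensive check costs little (a sketch; the topics and winner attribute names are inferred from the data model shown in the next issue):

    # A document can legitimately come back with zero topics:
    for topic in (document.topics or []):
        if topic.winner:
            print(topic.label)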

Topics with an id of 0

Hi,

I am using version 2.1.0 and Python 3.8.

I was trying out your service and cannot get anything to work using either full_analysis() or specific_resource_analysis().

The problem seems to be topics with an id of 0. When making an API call, the data does reach the server and is processed (I can see my character usage go up), but the response is not being parsed properly. Here is the relevant section of the stack trace:

File "/Volumes/Phil/projects/research/holistic/src/holistic/features/featset/expertai.py", line 157, in make_features
results = self.client.specific_resource_analysis(
File "/Volumes/Phil/projects/research/holistic/venv/lib/python3.8/site-packages/expertai/nlapi/cloud/client.py", line 101, in specific_resource_analysis
return self.process_response(response)
File "/Volumes/Phil/projects/research/holistic/venv/lib/python3.8/site-packages/expertai/nlapi/cloud/client.py", line 79, in process_response
return ObjectMapper().read_json(response.json)
File "/Volumes/Phil/projects/research/holistic/venv/lib/python3.8/site-packages/expertai/nlapi/cloud/object_mapper.py", line 106, in read_json
dm = DataModel(**data)
File "/Volumes/Phil/projects/research/holistic/venv/lib/python3.8/site-packages/expertai/nlapi/common/model/data_model.py", line 88, in init
self._topics = [Topic(**t) for t in topics]
File "/Volumes/Phil/projects/research/holistic/venv/lib/python3.8/site-packages/expertai/nlapi/common/model/data_model.py", line 88, in
self._topics = [Topic(**t) for t in topics]
File "/Volumes/Phil/projects/research/holistic/venv/lib/python3.8/site-packages/expertai/nlapi/common/model/topic.py", line 37, in init
raise MissingArgumentError("Missing required argument: id")
expertai.nlapi.common.errors.MissingArgumentError: Missing required argument: id

I patched client.py to print out the raw response from the server; here is the relevant section (all topics do have id numbers):

"topics": [
  {
    "id": 114,
    "label": "construction industry",
    "score": 4.09,
    "winner": true
  },
  {
    "id": 0,
    "label": "clothing",
    "score": 0.6,
    "winner": false
  },

Looking at topic.py, the problem is obviously this check:

    if not (id or id_):
        raise MissingArgumentError("Missing required argument: id")

Testing "if not (0 or None)", the condition succeeds. So to get things working I have changed it to

    if id is None and id_ is None:
        raise MissingArgumentError("Missing required argument: id")

The line assigning a value to self._id also needs to be fixed, of course. I don't know whether ids can actually be 0 (you tell me; probably not, but they are being returned), but to get further down the road I will assume so for the time being.
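
A standalone sketch of the falsy-zero pitfall and the corrected check (resolve_id is a hypothetical helper, not part of the SDK):

    def resolve_id(id=None, id_=None):
        # Test for None explicitly so that a legitimate id of 0 is accepted
        if id is None and id_ is None:
            raise ValueError("Missing required argument: id")
        return id if id is not None else id_

    assert not (0 or None)        # the original truthiness guard misfires on 0
    assert resolve_id(id=0) == 0  # the explicit None check lets id=0 through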

Otherwise, the return data I see from the test looks OK, but I need to check the details. Thanks for the free account; I'm doing some MSc research.

P.S. I doubt I'll hit 10 million characters in a month, but if I do, any chance of a credit/reset on the count? This is a pretty basic bug I wouldn't have expected to run into, so I am really wondering what happened. Email is [email protected]

Doug

Endpoint for sending multiple requests

Hello,
I was a participant in expert.ai's NLP hackathon. I was using the API to find the sentiment of around 700 tweets. When I wrap that call in tqdm, the run takes approximately 11 minutes and I see a speed of about one iteration per second. Previously I was trying to find the sentiment of 10k tweets, which was taking a very long time to compute. Is there another endpoint that can return sentiments in batches, i.e. for multiple tweets at once? My code was attached as a screenshot; a client-side sketch follows.
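
No batch endpoint is apparent from this SDK, but the calls can be parallelized client-side with a small thread pool. A sketch, with two assumptions: 'sentiment' is the resource name for sentiment analysis, and the client is safe to share across threads (if not, create one per worker); keep the worker count small to avoid the 429 rate-limit issue further down:

    from concurrent.futures import ThreadPoolExecutor

    from expertai.nlapi.cloud.client import ExpertAiClient

    client = ExpertAiClient()

    def analyze(tweet):
        return client.specific_resource_analysis(
            body={"document": {"text": tweet}},
            params={"language": "en", "resource": "sentiment"},
        )

    tweets = ["tweet one ...", "tweet two ..."]  # the ~700 tweets go here
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(analyze, tweets))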

Documentation wrong for linguistic analysis function

Looking at https://docs.expert.ai/nlapi/latest/reference/output/linguistic-analysis/, in the 'phrases', 'sentences' and 'paragraphs' sections there is mention of arrays of index numbers for the smaller parts that make up each object, e.g. the tokens that make up a phrase.

Looking at the raw JSON response, I do see raw index numbers. But the actual object returned from the full_analysis() call instead holds an array of Token objects, not index numbers. Since the library essentially prevents you from working with the raw JSON (I had to hack things up to get it saved to a file; nothing I see lets you access the raw JSON through an 'approved' method), this disconnect between what the documentation leads you to believe and what is actually possible ought to be fixed. To wit, the classes the library creates out of the JSON ought to be documented, along with how to traverse the resulting object (maybe I overlooked a help page...). I had to read the source code to understand what was going on.

Why do I care? I am saving all the returned data into a database and am not working with the data on the fly, for various reasons. Saving an index number would be nicer than deriving the index number of a token. I could do it by processing all the token objects first and then looking up the index number on demand, but that seems like useless extra work given that the index numbers were available at some point.

Overall, I would like to be able to link 'things' (token, sentence, phrase...) in the database as easily as possible. So I'm not sure whether the library needs to change, but a heads-up about this in the documentation would be nice. A sketch of one workaround follows.
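
One workaround sketch for recovering index numbers from the object model (the tokens and phrases attribute names are assumptions drawn from the linguistic-analysis docs, not verified against the SDK source):

    document = client.full_analysis(
        body={"document": {"text": text}}, params={"language": "en"}
    )

    # Rebuild token index numbers by position, then resolve each phrase's
    # Token objects back to indices before writing them to the database:
    token_index = {id(tok): i for i, tok in enumerate(document.tokens)}
    for phrase in document.phrases:
        indices = [token_index[id(tok)] for tok in phrase.tokens]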

HTTP Status Code 500

Hi,

I'm trying to perform some simple full-analysis requests, but I'm consistently getting a 500 error code with a specific text sample. I have isolated the failing portion of the text as much as I can. Could you please take a look?

Steps to reproduce (assuming EAI_USERNAME and EAI_PASSWORD are correctly set up in the environment):

correct_text = 'Las bicicletas son para el verano'
fail_text =  'momento y va a pasar", no, no, es que al momento entraba en otro contexto de otra familia que,'

from expertai.nlapi.cloud.client import ExpertAiClient
exp_client = ExpertAiClient()

exp_client.full_analysis(body={"document": {"text": correct_text}}, params={"language": "es"})
# <expertai.nlapi.common.model.data_model.DataModel at 0x7fd6cb62bfa0>

exp_client.full_analysis(body={"document": {"text": fail_text}}, params={"language": "es"})

Response status code: 500
---------------------------------------------------------------------------
ExpertAiRequestError                      Traceback (most recent call last)
Cell In[57], line 1
----> 1 exp_client.full_analysis(body={"document": {"text": 'momento y va a pasar", no, no, es que al momento entraba en otro contexto de otra familia que,'}}, params={"language": "es"})

File ~/anaconda3/envs/testenv/lib/python3.8/site-packages/expertai/nlapi/cloud/client.py:98, in ExpertAiClient.full_analysis(self, params, body)
     92 request = self.create_request(
     93     endpoint_path=constants.FULL_ANALYSIS_PATH,
     94     params=params,
     95     body=body,
     96 )
     97 response = self.response_class(response=request.send())
---> 98 return self.process_response(response)

File ~/anaconda3/envs/testenv/lib/python3.8/site-packages/expertai/nlapi/cloud/client.py:80, in ExpertAiClient.process_response(self, response)
     78 self._response = response
     79 if not response.successful:
---> 80     raise ExpertAiRequestError(
     81         "Response status code: {}".format(response.status_code)
     82     )
     83 elif response.bad_request:
     84     return ExpertAiRequestError(
     85         response.bad_request_message(response.json)
     86     )

ExpertAiRequestError: Response status code: 500

Environment info:

Linux OS
Python 3.8.16
expertai-nlapi==2.5.0

Best,
Guillermo.
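
While the server-side 500 is unresolved, a small wrapper keeps one bad sample from aborting a batch run (a sketch; the exception class path matches the traceback above):

    from expertai.nlapi.common.errors import ExpertAiRequestError

    def safe_full_analysis(client, text, language="es"):
        try:
            return client.full_analysis(
                body={"document": {"text": text}}, params={"language": language}
            )
        except ExpertAiRequestError as exc:
            print(f"Skipping sample ({exc}): {text[:60]!r}")
            return None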

binascii.Error: Incorrect padding

Hi, when I try to analyze abstract text data from PubMed I get an "Incorrect padding" error. When I analyze smaller texts there is no error. The error comes from the line below. Could you help me please?

output = client.full_analysis(body={"document": {"text": query_data[key]['abstract']}}, params={'language': language, 'resource': 'relevants'})

No option to specify proxies in the GET request

We are trying to access the API from behind a corporate proxy and are receiving timeout errors. We'd like to set the proxy and the certificate for the request, but cannot see the syntax to do that. We can do the same from curl in a terminal, and that is successful. A sketch of one possible approach follows.
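
One approach worth trying, assuming the SDK issues its HTTP calls through the requests library and does not disable its trust environment: requests honors proxy and CA-bundle settings from environment variables, so no SDK-level syntax is needed. The proxy URL and certificate path below are hypothetical:

    import os

    os.environ["HTTPS_PROXY"] = "http://proxy.example.corp:8080"     # hypothetical proxy
    os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/corp-ca.pem"  # hypothetical CA bundle

    from expertai.nlapi.cloud.client import ExpertAiClient
    client = ExpertAiClient()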

Error calling Detector API

document = client.detect(
    body={"document": {"text": text}},
    params={'language': language, 'detector': 'pii'})
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-24-329b1907077d> in <module>
      1 # cloud API
----> 2 pii_response = client.detect(
      3                 body={"document": {"text": text}},
      4 		params={'language': language,'detector':'pii'})

AttributeError: 'ExpertAiClient' object has no attribute 'detect'
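
The AttributeError suggests a version mismatch: detect() may simply not exist in the installed release (an assumption; the detector API appears newer than some SDK versions mentioned in these issues). A defensive sketch:

    if hasattr(client, "detect"):
        document = client.detect(
            body={"document": {"text": text}},
            params={"language": language, "detector": "pii"},
        )
    else:
        raise RuntimeError(
            "Installed expertai-nlapi predates detect(); try "
            "pip install --upgrade expertai-nlapi"
        )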

HTTP Error 429

Hi, I'm trying to use threading, but when I send requests for 20 papers I get a Too Many Requests error. I'm not using the free version of the API. Is there any solution for that?
Thank you 🙏
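
A client-side sketch that retries with exponential backoff when the service answers 429 (parsing the status code out of the exception text is an assumption based on the error format shown in these issues):

    import time

    from expertai.nlapi.common.errors import ExpertAiRequestError

    def analyze_with_backoff(client, text, language="en", max_tries=5):
        delay = 1.0
        for attempt in range(max_tries):
            try:
                return client.full_analysis(
                    body={"document": {"text": text}},
                    params={"language": language},
                )
            except ExpertAiRequestError as exc:
                if "429" not in str(exc) or attempt == max_tries - 1:
                    raise
                time.sleep(delay)  # back off before retrying
                delay *= 2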
