
spice's People

Contributors

77wertfuzzy77, dependabot[bot], eshrh, getrektbyme, utagai

spice's Issues

Library should not store credentials internally

API credentials are usually stored at the top level of an application and passed to every API call. It is especially confusing that the library returns the credentials to the user and then also silently stores a copy internally.
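
To make the current shape of the API concrete, here is the usage pattern implied by the tracebacks elsewhere on this page (init_auth is assumed and may be named differently in the actual library):

import spice_api as spice

# Credentials are created once and passed explicitly to every call.
# init_auth is an assumption; the exact helper may differ.
creds = spice.init_auth('my_username', 'my_password')

# Every call already takes the credentials explicitly...
results = spice.search('FLIP FLAPPERS', spice.get_medium('anime'), creds)

# ...yet the library also silently keeps its own internal copy of creds,
# which is the double bookkeeping this issue argues against.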

to_json() on anime object error

This is due to a change in tag names (start and end dates) in the official MAL API. I'll add more details soon and also check whether it affects the manga object.

Make README clearer on `.medium_list`

Issue #29 makes me think, upon looking at the README, that we are wrong/unclear about how to use a MediumList. We write the following in the README:

...
mlist.anime/manga_list, dictionary containing 5 keys whose values contain 5 lists according to the key:
...

But this is pretty misleading, since what we actually want is:

...
mlist.medium_list, dictionary containing 5 keys whose values contain 5 lists according to the key:
...

This is also partly my fault for not updating the source doc strings, found here. The change to using medium_list was done because there's no need to differentiate between medium lists.

In either case, it's also worth considering just making MediumList objects iterable and saving everyone the hassle. Lots of this code was written when I was an infant Python developer, and turning MediumList into a true list wrapper is probably the best play.
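
For illustration, a rough sketch of what a list-wrapper MediumList could look like (purely a sketch; the real class holds more state, and none of this is the library's current code):

class MediumList:
    """Sketch of MediumList as a thin wrapper around a plain list."""

    def __init__(self, entries):
        # entries: the flat list of anime/manga objects parsed from MAL
        self._entries = list(entries)

    def __iter__(self):
        # Iterating a MediumList yields the entries directly, so callers
        # no longer need to reach into a status-keyed dict.
        return iter(self._entries)

    def __len__(self):
        return len(self._entries)

    def __getitem__(self, index):
        return self._entries[index]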

Uhhh... Other versions?

I have no idea how Ruby works, but I agree that the official MAL API is hard to use.
I use the API in web development (specifically Chrome extensions), but I can't understand many parts of the documentation because of the lack of descriptions, as well as how little it lets you do.

If there could be an online version of this API (or whatever it takes to make it available online) so that people can access it via POST requests to a URL, that would be awesome. Then I could access it from my web app as well.
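
For concreteness, a hypothetical sketch of what a thin HTTP wrapper around spice could look like (Flask, the /search endpoint, the form fields, the init_auth helper, and the .title attribute are all assumptions, not anything the project currently provides):

from flask import Flask, jsonify, request
import spice_api as spice

app = Flask(__name__)
creds = spice.init_auth('my_username', 'my_password')  # assumed helper

@app.route('/search', methods=['POST'])
def search():
    # A web app (e.g. a Chrome extension) would POST a query here
    # instead of talking to MAL directly.
    query = request.form['query']
    medium = spice.get_medium(request.form.get('medium', 'anime'))
    results = spice.search(query, medium, creds)
    # .title is a guess at the result objects' attribute; a real
    # serialization would likely return more fields.
    return jsonify([r.title for r in results])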

Issue with MediumList.extremes()

hey!

this should be a quick fix: MediumList.extremes()'s low value will practically always be 0, since it considers all anime in the list, including plan-to-watch anime, which by default have a score of 0.

My suggestion for this is to simply make extremes() consider only values in the completed list.
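
Something along these lines, as a rough sketch (the 'completed' key and the .score attribute are guesses at the real names):

def extremes_completed_only(medium_list):
    # Only look at the completed bucket so plan-to-watch entries,
    # which default to a score of 0, can't drag the minimum down.
    completed = medium_list.medium_list['completed']
    scores = [int(entry.score) for entry in completed]
    if not scores:
        return None, None
    return min(scores), max(scores)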

Thanks!

bs4.FeatureNotFound exception after installation

Hello!

It appears that immediately after installation on a fresh Python 3 install (or in a fresh virtualenv, as in this case), not all of the requirements are installed via pip:

Traceback (most recent call last):
  File "./spice-test.py", line 7, in <module>
    results = spice.search("FLIP FLAPPERS", spice.get_medium('anime'), creds)
  File "/home/kuroshi/spice-test/lib/python3.5/site-packages/spice_api/spice.py", line 165, in search
    query_soup = BeautifulSoup(search_resp.text, 'lxml')
  File "/home/kuroshi/spice-test/lib/python3.5/site-packages/bs4/__init__.py", line 165, in __init__
    % ",".join(features))
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?

Running `pip install lxml` solves the problem, of course, but lxml might be something you want to add to the install_requires list in setup.py.
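
For reference, the change would look roughly like this (everything here except the lxml line is a placeholder, not the project's actual setup.py):

from setuptools import setup, find_packages

setup(
    name='spice_api',
    packages=find_packages(),
    install_requires=[
        'beautifulsoup4',
        'requests',
        'lxml',  # needed by BeautifulSoup(search_resp.text, 'lxml')
    ],
)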

Anime searches silently freeze if the user forgot to load credentials

Anime searches silently fail if the user doesn't load credentials beforehand, and the user has no way of knowing that credentials are needed for an anime search. The library could fail early, before attempting to connect, if no credentials are loaded, or fail when the response parsed into query_soup is exactly "Invalid credentials".
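
A minimal sketch of the fail-early option (the check and the exception are illustrative, not the library's actual code):

def search(query, medium, credentials):
    # Validate before any network call so the failure is loud and early.
    if not credentials:
        raise ValueError('No credentials loaded: anime/manga searches '
                         'require MAL credentials.')
    # ... proceed with the HTTP request and BeautifulSoup parsing ...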

Is it medium_list instead of anime/manga_list?

According to the README, I should call mlist.anime_list or mlist.manga_list to get an anime or manga list, but that doesn't work; I have to use mlist.medium_list instead.

I get an AttributeError when I call anime_list or manga_list.

Is this intentional, or did the README just not get updated?
Sorry if this is my mistake.

Can't use search() if you want to find "zankyou no teror"

C:\Users\Endrik\Google Drive\Python\MAL scraper>python getids.py "['darker than black','zankyou no teror','hellsing','berserk','death note','code geass','monster']"

Found id for 'darker than black': 2025. MAL title: 'Darker than Black: Kuro no Keiyakusha'

Traceback (most recent call last):
  File "getids.py", line 16, in <module>
    results = spice.search(i,spice.get_medium("anime"), creds)
  File "C:\Users\Endrik\AppData\Local\Programs\Python\Python35-32\lib\site-packages\spice_api\spice.py", line 169, in search
    return helpers.reschedule(search, constants.DEFAULT_WAIT, query, medium)
  File "C:\Users\Endrik\AppData\Local\Programs\Python\Python35-32\lib\site-packages\spice_api\helpers.py", line 106, in reschedule
    return func(*args)
TypeError: search() missing 1 required positional argument: 'credentials'

Found id for 'hellsing': 270. MAL title: 'Hellsing'
Found id for 'berserk': 33. MAL title: 'Berserk'
Found id for 'death note': 1535. MAL title: 'Death Note'
Found id for 'code geass': 1575. MAL title: 'Code Geass: Hangyaku no Lelouch'
Found id for 'monster': 19. MAL title: 'Monster'
['2025', '270', '33', '1535', '1575', '19']

As you can see, the code finds one ID, then claims it is missing credentials, but then goes on using those same credentials to find five more anime IDs.
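
Based on the traceback, the retry path in helpers.reschedule forwards only the arguments it was given, and the call site in spice.search omits credentials. A sketch of what that implies (reconstructed from the traceback, not the actual source):

import time

DEFAULT_WAIT = 5  # placeholder for constants.DEFAULT_WAIT


def reschedule(func, wait, *args):
    # helpers.reschedule, as implied by the traceback: wait, then call
    # func again with exactly the arguments it was given -- nothing more.
    time.sleep(wait)
    return func(*args)

# The failing call site in spice.search (line 169 in the traceback) is:
#     helpers.reschedule(search, constants.DEFAULT_WAIT, query, medium)
# which drops `credentials`, so the retried search() raises the TypeError
# above. The likely fix is simply to forward it:
#     helpers.reschedule(search, constants.DEFAULT_WAIT, query, medium,
#                        credentials)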

macOS: bs4.FeatureNotFound

Hey!

First off, I love this. Like, I LOVE THIS. So thanks!

Anyway, running Python 3.6 on macOS High Sierra, BeautifulSoup throws FeatureNotFound; full error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/spice_api/spice.py", line 165, in search
    query_soup = BeautifulSoup(search_resp.text, 'lxml')
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/bs4/__init__.py", line 165, in __init__
    % ",".join(features))
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?

Evidently, lxml is missing, and everything works once lxml is installed with pip3 install lxml. It would be nice if lxml were included as a dependency when installing the package. I see this was once an issue and seems to have been fixed in #19, but for whatever reason (possibly the platform?) it doesn't work. EDIT: somehow, setup.py doesn't include lxml?

I totally get it if you don't have the time to do this (or are just feeling lazy), so I'm trying to figure it out at the moment, with a possible PR coming!

Thanks!
eshanmind
