
piejanssens / premiumizer


Download manager for premiumize.me cloud downloads

License: MIT License

Python 63.78% HTML 24.56% CSS 2.37% JavaScript 8.09% Smarty 0.86% Dockerfile 0.21% Shell 0.14%
cloud-downloads cloud-torrent premiumize-api torrent usenet

premiumizer's People

Contributors

chemsorly, daniel15, danielfiniki, dependabot[bot], digiltd, drmikecrowe, gersilex, h4r0, jan-auer, jupiter, kungfoolfighting, leolobato, mooneye14, neox387, piejanssens, silberistgold, spoil001, stphnrdmr



premiumizer's Issues

Support for .magnet

Requested this & it's going to be available in Sonarr soon, like it already is in CouchPotato:
If the indexer doesn't return a .torrent file (RARBG does this; normally it would be converted through Torcache, but that's also down, along with KAT),
then the torrent magnet gets saved in a .magnet file, which we can also add directly to premiumizer.

So I'm going to add a watchdir option to also look for .magnet files.
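A minimal sketch of what such a watchdir scan could look like (the extension list and helper name are illustrative, not Premiumizer's actual code):

```python
import os

# Extensions the watchdir would pick up once .magnet support is added
WATCH_EXTENSIONS = ('.torrent', '.magnet', '.nzb')

def scan_watchdir(watchdir):
    """Return the watched files in a directory (illustrative helper)."""
    matches = []
    for name in sorted(os.listdir(watchdir)):
        if name.lower().endswith(WATCH_EXTENSIONS):
            matches.append(os.path.join(watchdir, name))
    return matches
```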

Can't start premiumizer after PR #98

After updating premiumizer to PR #98 I could no longer access it from the browser. My premiumizer.log was filling up with the following errors:

01-02 11:12:49: INFO : Running at /home/openhab/premiumizer
01-02 11:12:49: ERROR : Uncaught exception
Traceback (most recent call last):
  File "/home/openhab/premiumizer/premiumizer.py", line 334, in <module>
    cfg = PremConfig()
  File "/home/openhab/premiumizer/premiumizer.py", line 185, in __init__
    self.check_config()
  File "/home/openhab/premiumizer/premiumizer.py", line 213, in check_config
    self.aria2_enabled = prem_config.getboolean('downloads', 'aria2_enabled')
  File "/usr/lib/python2.7/ConfigParser.py", line 368, in getboolean
    v = self.get(section, option)
  File "/usr/lib/python2.7/ConfigParser.py", line 340, in get
    raise NoOptionError(option, section)
NoOptionError: No option 'aria2_enabled' in section: 'downloads'

To resolve the problem I had to add the following four lines to my settings.cfg:

aria2_enabled = 0
aria2_host = localhost
aria2_port = 6800
aria2_secret = premiumizer

My settings.cfg is now out-of-sync with settings.cfg.tpl. My config_version is 1.7 while the template is 1.8 and my req_version is 6.0 while the template is 6.3.

Sorry about the log display but I can't seem to get the code fence to work today.
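The underlying problem is that check_config reads options an older settings.cfg never got. A defensive pattern, sketched here with Python 3's configparser and defaults copied from the workaround above (not Premiumizer's actual code), is to backfill missing options before reading them:

```python
import configparser

# Defaults for options introduced by a newer settings.cfg.tpl
ARIA2_DEFAULTS = {
    'aria2_enabled': '0',
    'aria2_host': 'localhost',
    'aria2_port': '6800',
    'aria2_secret': 'premiumizer',
}

cfg = configparser.ConfigParser()
cfg.read_string('[downloads]\ndownload_enabled = 1\n')  # stale config

# Backfill any option the old config file is missing
for option, default in ARIA2_DEFAULTS.items():
    if not cfg.has_option('downloads', option):
        cfg.set('downloads', option, default)

print(cfg.getboolean('downloads', 'aria2_enabled'))  # False
```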

Bandwidth limitation for the internal downloader

First of all: thank you for this great software!

Would it be possible to add an option to limit the bandwidth used by the internal downloader? I share the bandwidth (home office) and a limitation would be very useful.

Can't boot due to auto update

After seeing there was an update available I turned on debug mode and rebooted premiumizer for a fresh start at updating from within the program. The reboot seems to have launched an auto update, and this put premiumizer in an endless loop of a failing(?) update. The premiumizer web page is now unresponsive until I shut down the process manually.

Performing a git pull and a restart of the premiumizer process gives me access to the home page, but it is now displaying a notice "Download speed is limited to 50 kB/s".

01-03 20:08:39       root                                     : INFO     : Watchdir is enabled at: blackhole
01-03 20:08:39       root                                     : DEBUG    : Initializing config complete
01-03 20:08:39       root                                     : DEBUG    : Initializing Flask
01-03 20:08:39       engineio                                 : INFO     : Server initialized for gevent.
01-03 20:08:39       root                                     : DEBUG    : Initializing Flask complete
01-03 20:08:39       root                                     : DEBUG    : Initializing Database
01-03 20:08:39       root                                     : DEBUG    : Database cleared
01-03 20:08:39       root                                     : DEBUG    : Initializing Database complete
01-03 20:08:39       root                                     : INFO     : Starting server on 192.168.75.20:5000
01-03 20:08:39       root                                     : DEBUG    : def load_tasks started
01-03 20:08:39       apscheduler.scheduler                    : INFO     : Scheduler started
01-03 20:08:39       apscheduler.scheduler                    : INFO     : Added job "update" to job store "default"
01-03 20:08:39       apscheduler.scheduler                    : INFO     : Added job "check_update" to job store "default"
01-03 20:08:39       apscheduler.scheduler                    : DEBUG    : Looking for jobs to run
01-03 20:08:39       apscheduler.scheduler                    : DEBUG    : Next wakeup is due at 2018-01-03 20:08:40.626359-05:00 (in 0.995657 seconds)
01-03 20:08:40       apscheduler.scheduler                    : DEBUG    : Looking for jobs to run
01-03 20:08:40       apscheduler.scheduler                    : DEBUG    : Next wakeup is due at 2018-01-03 20:08:41.626359-05:00 (in 0.998953 seconds)
01-03 20:08:40       apscheduler.executors.default            : INFO     : Running job "check_update (trigger: interval[0:00:01], next run at: 2018-01-03 20:08:41 EST)" (scheduled at 2018-01-03 20:08:40.626359-05:00)
01-03 20:08:40       root                                     : DEBUG    : def check_update started
01-03 20:08:40       root                                     : DEBUG    : def update_self started
01-03 20:08:40       root                                     : INFO     : Update - will restart
01-03 20:08:40       apscheduler.scheduler                    : INFO     : Scheduler has been shut down
01-03 20:08:40       apscheduler.scheduler                    : DEBUG    : Looking for jobs to run
01-03 20:08:40       apscheduler.scheduler                    : DEBUG    : No jobs; waiting until a job is added
01-03 20:08:46       root                                     : DEBUG    : DEBUG Logfile Initialized
01-03 20:08:46       root                                     : INFO     : Running at /home/openhab/premiumizer
01-03 20:08:46       root                                     : DEBUG    : Initializing config
01-03 20:08:46       urllib3.connectionpool                   : DEBUG    : Starting new HTTPS connection (1): api.jdownloader.org
01-03 20:08:46       urllib3.connectionpool                   : DEBUG    : https://api.jdownloader.org:443 "GET /my/connect?email=bjsjr%40hotmail.com&appkey=https%3A//git.io/vaDti&rid=1515028126115&signature=cc68b5c23c8b1f1a9c1ba042b1f40358b94dea33d49521cf3f294225e1aaae01 HTTP/1.1" 200 None
01-03 20:08:46       urllib3.connectionpool                   : DEBUG    : Starting new HTTPS connection (1): api.jdownloader.org
01-03 20:08:47       urllib3.connectionpool                   : DEBUG    : https://api.jdownloader.org:443 "GET /my/listdevices?sessiontoken=eb66c51b41a225678f36870394c92e4f182bfaf8&rid=1515028126&signature=71aa8161db7ce1ec0945caf3c2ce7a5492680ba9ea855c97ec7339135c13e1dd HTTP/1.1" 200 None
01-03 20:08:47       urllib3.connectionpool                   : DEBUG    : Starting new HTTPS connection (1): api.jdownloader.org
01-03 20:08:48       urllib3.connectionpool                   : DEBUG    : https://api.jdownloader.org:443 "POST /t_eb66c51b41a225678f36870394c92e4f182bfaf8_56a34b8b71d2b71f6de6c0b3625adac5/device/getDirectConnectionInfos HTTP/1.1" 200 None
01-03 20:08:48       urllib3.connectionpool                   : DEBUG    : Starting new HTTP connection (1): 172.17.0.10
01-03 20:08:51       urllib3.connectionpool                   : DEBUG    : Starting new HTTP connection (1): 127.0.0.1
01-03 20:08:51       urllib3.connectionpool                   : DEBUG    : Starting new HTTPS connection (1): api.jdownloader.org
01-03 20:08:52       urllib3.connectionpool                   : DEBUG    : https://api.jdownloader.org:443 "POST /t_eb66c51b41a225678f36870394c92e4f182bfaf8_56a34b8b71d2b71f6de6c0b3625adac5/toolbar/getStatus HTTP/1.1" 200 None
01-03 20:08:52       root                                     : INFO     : Watchdir is enabled at: blackhole
01-03 20:08:52       root                                     : DEBUG    : Initializing config complete
01-03 20:08:52       root                                     : DEBUG    : Initializing Flask
01-03 20:08:52       engineio                                 : INFO     : Server initialized for gevent.
01-03 20:08:52       root                                     : DEBUG    : Initializing Flask complete
01-03 20:08:52       root                                     : DEBUG    : Initializing Database
01-03 20:08:52       root                                     : DEBUG    : Database cleared
01-03 20:08:52       root                                     : DEBUG    : Initializing Database complete
01-03 20:08:52       root                                     : INFO     : Starting server on 192.168.75.20:5000
01-03 20:08:52       root                                     : DEBUG    : def load_tasks started
01-03 20:08:52       apscheduler.scheduler                    : INFO     : Scheduler started
01-03 20:08:52       apscheduler.scheduler                    : INFO     : Added job "update" to job store "default"
01-03 20:08:52       apscheduler.scheduler                    : INFO     : Added job "check_update" to job store "default"
01-03 20:08:52       apscheduler.scheduler                    : DEBUG    : Looking for jobs to run
01-03 20:08:52       apscheduler.scheduler                    : DEBUG    : Next wakeup is due at 2018-01-03 20:08:53.383311-05:00 (in 0.998279 seconds)
01-03 20:08:53       apscheduler.scheduler                    : DEBUG    : Looking for jobs to run
01-03 20:08:53       apscheduler.scheduler                    : DEBUG    : Next wakeup is due at 2018-01-03 20:08:54.383311-05:00 (in 0.998467 seconds)
01-03 20:08:53       apscheduler.executors.default            : INFO     : Running job "check_update (trigger: interval[0:00:01], next run at: 2018-01-03 20:08:54 EST)" (scheduled at 2018-01-03 20:08:53.383311-05:00)
01-03 20:08:53       root                                     : DEBUG    : def check_update started
01-03 20:08:53       root                                     : DEBUG    : def update_self started
01-03 20:08:53       root                                     : INFO     : Update - will restart
01-03 20:08:53       apscheduler.scheduler                    : INFO     : Scheduler has been shut down
01-03 20:08:53       apscheduler.scheduler                    : DEBUG    : Looking for jobs to run
01-03 20:08:53       apscheduler.scheduler                    : DEBUG    : No jobs; waiting until a job is added
01-03 20:08:58       root                                     : DEBUG    : DEBUG Logfile Initialized
01-03 20:08:58       root                                     : INFO     : Running at /home/openhab/premiumizer
01-03 20:08:58       root                                     : DEBUG    : Initializing config
01-03 20:08:58       urllib3.connectionpool                   : DEBUG    : Starting new HTTPS connection (1): api.jdownloader.org
01-03 20:08:59       urllib3.connectionpool                   : DEBUG    : https://api.jdownloader.org:443 "GET /my/connect?email=bjsjr%40hotmail.com&appkey=https%3A//git.io/vaDti&rid=1515028138859&signature=e7480b87342dc6fb27159d5ca83b8a40f4c2e91e8c5095e8fe82ac0fa80f41b4 HTTP/1.1" 200 None
01-03 20:08:59       urllib3.connectionpool                   : DEBUG    : Starting new HTTPS connection (1): api.jdownloader.org
01-03 20:08:59       urllib3.connectionpool                   : DEBUG    : https://api.jdownloader.org:443 "GET /my/listdevices?sessiontoken=373b8eb27e44f73f24e24f928b14243957ba2b0d&rid=1515028139&signature=8f9450d381ef36720301d9781ec96bdb2bda0b79c756c5013a8fbf3a4e7b07d4 HTTP/1.1" 200 None
01-03 20:08:59       urllib3.connectionpool                   : DEBUG    : Starting new HTTPS connection (1): api.jdownloader.org
01-03 20:09:01       urllib3.connectionpool                   : DEBUG    : https://api.jdownloader.org:443 "POST /t_373b8eb27e44f73f24e24f928b14243957ba2b0d_56a34b8b71d2b71f6de6c0b3625adac5/device/getDirectConnectionInfos HTTP/1.1" 200 None
01-03 20:09:01       urllib3.connectionpool                   : DEBUG    : Starting new HTTP connection (1): 172.17.0.10
01-03 20:09:04       urllib3.connectionpool                   : DEBUG    : Starting new HTTP connection (1): 127.0.0.1
01-03 20:09:04       urllib3.connectionpool                   : DEBUG    : Starting new HTTPS connection (1): api.jdownloader.org
01-03 20:09:05       urllib3.connectionpool                   : DEBUG    : https://api.jdownloader.org:443 "POST /t_373b8eb27e44f73f24e24f928b14243957ba2b0d_56a34b8b71d2b71f6de6c0b3625adac5/toolbar/getStatus HTTP/1.1" 200 None
01-03 20:09:05       root                                     : INFO     : Watchdir is enabled at: blackhole

Premiumizer closing with error message "Segmentation fault: 11"

Hi there,
It started a few days ago that premiumizer shuts down. Either it reports a "Segmentation fault: 11":

10-21 14:55:23: INFO : Added: The Office Season 9 720p BluRay x265 HEVC- FrogPerson -- Category: -- Type:
Segmentation fault: 11
(virtualenv) macmini-server:premiumizer admin$ ./premiumizer.py

or I am now getting the below error:

10-21 14:59:08: INFO : Running at /Users/admin/premiumizer
10-21 14:59:09: INFO : Watchdir is enabled at: /plex1/torrents/
10-21 14:59:09: INFO : Starting server on 192.168.1.2:5000
10-21 14:59:09: ERROR : Uncaught exception
Traceback (most recent call last):
  File "./premiumizer.py", line 2177, in <module>
    load_tasks()
  File "./premiumizer.py", line 1672, in load_tasks
    task = db[id.encode("utf-8")]
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shelve.py", line 121, in __getitem__
    f = StringIO(self.dict[key])
KeyError: 'wnGEkdBPFVIBothDNrfYMA'

I have already restarted the Mac mini multiple times; shall I try to reinstall premiumizer from scratch? Thank you,
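For what it's worth, the KeyError means load_tasks looks up a task id that is no longer in the shelve file. A defensive lookup, sketched below with a throwaway shelve database (not Premiumizer's), would skip stale ids instead of crashing:

```python
import os
import shelve
import tempfile

# Throwaway shelve database for illustration
path = os.path.join(tempfile.mkdtemp(), 'premiumizer.db')

db = shelve.open(path)
db['known-id'] = {'name': 'example task'}

# .get() returns None for a stale id instead of raising KeyError
task = db.get('wnGEkdBPFVIBothDNrfYMA')
print(task)  # None
db.close()
```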

Aria2 event hooks

Any chance you could add support for any of the event hooks available for aria2? Specifically, I would love support for --on-download-complete.

Perhaps it could be an option on the settings page. If ticked, it will add:

--on-download-complete on-download-complete.sh

to the command line. If not, you send the command line you're currently sending.
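A sketch of how the settings-page toggle could translate into the aria2c invocation. The helper function and its wiring are hypothetical; --enable-rpc, --rpc-secret, --rpc-listen-port, and --on-download-complete are real aria2c options:

```python
def build_aria2_command(secret, on_complete_script=None):
    """Build an aria2c argument list; the hook is appended only when set."""
    args = [
        'aria2c',
        '--enable-rpc',
        '--rpc-listen-port=6800',
        '--rpc-secret=%s' % secret,
    ]
    if on_complete_script:
        # aria2 runs this script after each completed download
        args += ['--on-download-complete', on_complete_script]
    return args

print(build_aria2_command('premiumizer', 'on-download-complete.sh'))
```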

Internal Downloader

pySmartDL issues:

  1. errors on things like pause/resume, etc.
  2. downloading in chunks saves the chunks separately to the hard disk & then has to combine everything at the end
  3. pySmartDL uses urllib2 instead of requests?

I have not found another Python downloader with the same functions; maybe pyload once 0.5 is released.

Possible to download from "My Files"?

I like to fully automate RSS downloading:

  • RSS from RARBG => Torrent download to premiumize
  • As soon as finished they go to my files (/Feed Downloads/feed name/torrent name/)
  • Premiumizer should also download and delete them

Is there already an option to do so (which I didn't find...)? If not, please add one, it would be awesome!

Running it at startup on FreeBSD (probably works on Linux too)

On FreeBSD, you'll have a timezone problem. Fix:

vi /etc/login.conf

Under 'default:' add the following line:

:setenv=TZ=/America/Los_Angeles:\ (you can set it to whatever timezone you want)

Create a new file here:
vi /usr/local/etc/rc.d/premiumizer

Paste this:

#!/bin/sh
#
# $FreeBSD$
#

# PROVIDE: premiumizer
# REQUIRE: NETWORKING
# KEYWORD: shutdown

#
# Add the following lines to /etc/rc.conf to enable premiumizer:
#
# premiumizer_enable="YES"
#
# You can define flags for premiumizer running.
#

. /etc/rc.subr

name=premiumizer
rcvar=premiumizer_enable

command=/premiumizer/premiumizer.py

# read configuration and set defaults
load_rc_config $name
premiumizer_enable=${premiumizer_enable:-"NO"}

command_args=""

run_rc_command "$1"

Then, go to /etc/rc.conf and add this line:
premiumizer_enable="YES"

If it doesn't work due to Python env stuff, do this:

  1. which python
  2. Copy the output of that.
  3. Edit the premiumizer.py file and put that in the shebang in the very first line.

Download with JD always shows failed

I'm downloading from premiumizer with JD. The files get downloaded after assigning a category, but at the end I always get the following error in the log:
ERROR : JD did not return package status for: "some download"

In the history tab all downloads therefore have the status "failed". Is this a bug, or could it be a missing setting on my side?

Really appreciate your help and work!

Order of download list rearranges a few seconds after initial list display

Hello!
I have the following issue:
Whenever I go to the home screen I am presented with a list of items that corresponds to the order of the list in the downloader view on premiumize.me. After a few seconds, however, two older downloads are moved to the top. This has already led to several accidental deletions of the wrong files when the reordering happens just before I click to delete something.
If there is any information that I can supply to help you debug this, let me know!

Service not starting since commit 05e8865

I'll keep this brief because I couldn't find any useful log information:
Ever since commit 05e8865 ("code cleanup") the service simply can't start anymore and throws a "cannot start" error if I try to start it manually via services.msc. Neither a manual nor an in-browser upgrade works.

Operating System: Windows 10 Pro 64-bit
Tried on two machines; the service is logged in to an administrator account.

The newest cloned repository works just fine when I replace premiumizer.py with version 0fb481e or older.
Let me know if you need additional troubleshooting information.

Docker container image

Is there a way to use this with Docker? Docker doesn't need an introduction, but it streamlines the whole setup process and lets all services run in a sandboxed environment, which is super cool.

Docker container - update problem: git fetch failed

Today I stumbled upon your nice little project.

I have installed the docker container on my Synology NAS.
I had to manually set the TZ environment variable, but then the container started and works properly.

The only problem I found is the update mechanism.
I get the following messages during startup of the process and when trying to update:

10-29 18:56:55: ERROR : Update failed: could not git fetch: /premiumizer
10-29 19:04:03: INFO : Settings saved, reloading configuration
10-29 19:04:03: ERROR : Update failed: could not git fetch: /premiumizer
10-29 19:05:22: INFO : Settings saved, reloading configuration
10-29 19:05:22: ERROR : Update failed: could not git fetch: /premiumizer

When I had a closer look at the container, it looks like git isn't installed, which would explain the error.

Is that correct or did I do something wrong during the installation of the container?

Shutdown/restart problem

I tried to use scheduler.shutdown(wait=False) & then sys.exit(), but the greenlets don't seem to shut down :(

It then hangs until they are done, so it can't restart while a greenlet is doing something like a download task (you shouldn't restart while downloading anyway, but..)

Or, on shutdown, the main process is done so the webpage is unresponsive, but the greenlet is still going -.-

Bug with FreeNAS (FreeBSD) and apscheduler

When running Premiumizer on FreeNAS 9.10 it crashes with the following stack trace:

Traceback (most recent call last):                                                                                                  
  File "./Premiumizer.py", line 1411, in <module>                                                                                   
    seconds=active_interval, replace_existing=True, max_instances=1, coalesce=True)                                                 
  File "/virtualenv/lib/python2.7/site-packages/apscheduler/schedulers/base.py", line 366, in add_job                               
    'trigger': self._create_trigger(trigger, trigger_args),                                                                         
  File "/virtualenv/lib/python2.7/site-packages/apscheduler/schedulers/base.py", line 848, in _create_trigger                       
    return self._create_plugin_instance('trigger', trigger, trigger_args)                                                           
  File "/virtualenv/lib/python2.7/site-packages/apscheduler/schedulers/base.py", line 833, in _create_plugin_instance               
    return plugin_cls(**constructor_kwargs)                                                                                         
  File "/virtualenv/lib/python2.7/site-packages/apscheduler/triggers/interval.py", line 37, in __init__                             
    self.timezone = astimezone(timezone)                                                                                            
  File "/virtualenv/lib/python2.7/site-packages/apscheduler/util.py", line 77, in astimezone                                        
    'Unable to determine the name of the local timezone -- you must explicitly '                                                    
ValueError: Unable to determine the name of the local timezone -- you must explicitly specify the name of the local timezone. Please refrain from using timezones like EST to prevent problems with daylight saving time. Instead, use a locale based timezone name (such as Europe/Helsinki).

This looks like an apscheduler bug, but I am not sure.

Update:
OK, I found the reason: it's a problem with tzlocal. It's unable to read the correct time zone on FreeBSD systems.
There is a pull request here, but it has not been accepted yet.
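Until the tzlocal fix lands, the usual workaround is to hand the scheduler an explicit locale-based timezone name instead of letting it auto-detect one. The resolution step looks like this (a Python 3 sketch using zoneinfo; the project itself runs on Python 2, where pytz plays the same role):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# An explicit IANA name sidesteps tzlocal's detection entirely;
# apscheduler accepts such a name via its timezone argument.
tz = ZoneInfo('Europe/Helsinki')
print(datetime.now(tz).tzinfo.key)  # Europe/Helsinki
```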

Unable to download seeding torrents

Downloading works fine with both torrents and nzbs.

Issue: when I enable the option 'Seed private tracker torrents', I get the error 'check js console' when the download is supposed to start.

Shelve DB

I'm seeing some stuff in the db while there are no downloads & db.keys() is empty; it seems sometimes not everything gets deleted.

Synology package

Hey,

Just wondering if you could make a Synology package :D

Would be nice if this would work from sonarr :)

Enhancement: request

I would like to see Real-Debrid integration. It's a far cheaper alternative and I can't find any tool similar to this for it.
Any chance of this in the future?

Watchdog gets unhandled exception

If a file somehow gets created and deleted while the MyHandler.process() function is running, an IOError exception gets raised and is not handled inside the watchdog event handler.
(In my case, SickRage tries to download a magnet and then looks for the torrent on torrentproject. Apparently it opens a file but fails to download it. The file gets removed immediately, leading to the IOError.)

handler.dispatch(event)
  File "/opt/premiumizer/virtualenv/lib/python2.7/site-packages/watchdog/events.py", line 454, in dispatch
    _method_map[event_type](event)
  File "/opt/premiumizer/premiumizer.py", line 1166, in on_created
  File "/opt/premiumizer/premiumizer.py", line 1136, in process
    hash, name = torrent_metainfo(watchdir_file)
  File "/opt/premiumizer/premiumizer.py", line 1191, in torrent_metainfo
    info = metainfo['info']
IOError: [Errno 2] No such file or directory: 'yyy/xxx.torrent'
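A minimal sketch of the kind of guard the event handler could use (the helper name is made up), so a file deleted mid-event turns into a skip rather than an unhandled IOError:

```python
import errno

def read_watched_file(path):
    """Read a watched file, returning None if it vanished (hypothetical helper)."""
    try:
        with open(path, 'rb') as f:
            return f.read()
    except (IOError, OSError) as exc:
        if exc.errno == errno.ENOENT:
            return None  # deleted between the event firing and the read
        raise

print(read_watched_file('yyy/xxx.torrent'))  # None
```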

Libs: Gevent - Flask-socketio

Would be nice to update to the latest version, v2?
https://flask-socketio.readthedocs.org/en/latest/

But dependencies etc. have changed:
python-socketio/engineio instead of gevent-socketio.

I've tried this, but it doesn't work.

Also, to remove gevent, this needs to change, I think:
from apscheduler.schedulers.gevent import GeventScheduler

GeventScheduler requires the gevent package to be installed.

Error on Download

I get the following error and files are never downloaded:

03-18 22:45:13 apscheduler.executors.download : ERROR : Job "XXX (trigger: date[2016-03-18 22:45:13 CET], next run at: 2016-03-18 22:45:13 CET)" raised an exception
Traceback (most recent call last):
  File "/home/patrick/.local/lib/python2.7/site-packages/apscheduler/executors/base.py", line 112, in run_job
    retval = job.func(*job.args, **job.kwargs)
  File "Premiumizer.py", line 400, in download_task
    failed = download_file(download_list)
  File "Premiumizer.py", line 336, in download_file
    downloader = SmartDL(download['url'], download['path'], progress_bar=False, logger=logger, threads_count=1)
TypeError: __init__() got an unexpected keyword argument 'threads_count'
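The traceback suggests the installed pySmartDL no longer accepts a threads_count keyword (current releases document the parameter as threads). One generic way to survive such renames is to filter kwargs against the callable's actual signature; the downloader below is a stand-in, not pySmartDL itself:

```python
import inspect

def compat_kwargs(func, **requested):
    """Drop keyword arguments that func does not accept."""
    accepted = inspect.signature(func).parameters
    return {k: v for k, v in requested.items() if k in accepted}

def fake_smartdl(url, dest, progress_bar=True, threads=5):
    """Stand-in for a downloader using the newer 'threads' keyword."""
    return threads

# threads_count is silently dropped; threads passes through
kwargs = compat_kwargs(fake_smartdl, progress_bar=False,
                       threads=1, threads_count=1)
print(sorted(kwargs))  # ['progress_bar', 'threads']
```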

Can't Download with JDownloader AND Aria2

I can't identify what the problem could be with JDownloader. No errors, nothing. It just doesn't put the links into JDownloader.

With Aria2 it's simple.
The following errors are in the log:

09-16 20:39:59: INFO : Running at /opt/premiumizer
09-16 20:39:59: ERROR : Could not connect to Aria2 RPC: http://localhost:6800/rpc --- message: global name 'cfg' is not defined
09-16 20:39:59: INFO : Starting server on 0.0.0.0:5000
09-16 20:40:12: ERROR : Error for magnet:?xt=urn:(download): Nothing to download .. Filtered out or bad torrent/nzb ?
09-16 20:40:12: WARNING : Retrying failed download in 10 minutes for: magnet:?xt=urn:(download)
09-16 20:49:44: INFO : Task: (download) -- Category set to: default
09-16 20:50:13: ERROR : Error for magnet:?xt=urn:(download )Nothing to download .. Filtered out or bad torrent/nzb ?
09-16 20:50:13: ERROR : Download failed for: magnet:?xt=urn:(download)
09-16 20:50:13: ERROR : Error for (download ): Nothing to download .. Filtered out or bad torrent/nzb ?
09-16 20:50:13: WARNING : Retrying failed download in 10 minutes for: (download)
09-16 20:50:33: INFO : Settings saved, reloading configuration
09-16 20:50:39: INFO : Settings saved, reloading configuration
09-16 20:50:39: ERROR : Could not connect to Aria2 RPC: http://localhost:6800/rpc --- message: unsupported operand type(s) for +: 'int' and 'str'

Aria2 started with: aria2c --enable-rpc --rpc-allow-origin-all --rpc-listen-all --rpc-listen-port=6800 --rpc-secret=mypassword

Startup not possible anymore - DB_PAGE_NOTFOUND

Have the following issue: premiumizer doesn't start anymore (I don't know what changed or what happened before exactly):

05-16 12:28:53: INFO : Running at C:\Premiumizer\Premiumizer
05-16 12:28:54: ERROR : Uncaught exception
Traceback (most recent call last):
  File "C:\Premiumizer\Premiumizer\Premiumizer.py", line 500, in <module>
    if not db.keys():
  File "C:\Premiumizer\Python\lib\shelve.py", line 101, in keys
    return self.dict.keys()
  File "C:\Premiumizer\Python\lib\bsddb\__init__.py", line 303, in keys
    return _DeadlockWrap(self.db.keys)
  File "C:\Premiumizer\Python\lib\bsddb\dbutils.py", line 68, in DeadlockWrap
    return function(*_args, **_kwargs)
DBPageNotFoundError: (-30986, 'DB_PAGE_NOTFOUND: Requested page not found')

Premiumizer cannot handle failed torrent downloads

Hello everyone,
I updated premiumizer today and after the update it was unable to get the list of files in the premiumize.me cloud storage.
I kept getting this message in the logs:
08-20 14:21:26: ERROR : premiumize.me connection error
The UI was telling me that I should check my premiumize.me login details.
So I turned on debugging and looked at the code to see what was happening.
As it turns out, the code looks at the message it (successfully) retrieves from premiumize.me in line 907 of premiumizer.py and checks whether it contains the string '"status":"error"'; if so, it assumes that the connection failed.
As it happens, this string occurred in the message for every failed download in my cloud storage.
So whenever a torrent download failed in premiumize.me, I could not look at the list of available cloud downloads.
I assume that this is a bug.
I am now working around the problem by removing all failed downloads from the cloud downloader.

Best regards!
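A sketch of the distinction (the response shape here is simplified and invented, not the exact premiumize.me payload): substring-matching the raw body conflates one failed transfer with a failed API call, while checking only the top-level status field does not:

```python
import json

raw = '''{
  "status": "success",
  "transfers": [
    {"name": "ok.torrent",  "status": "finished"},
    {"name": "bad.torrent", "status": "error"}
  ]
}'''

# Buggy heuristic: trips over any failed transfer inside the list
substring_says_failed = '"status":"error"' in raw.replace(' ', '')

# Safer: parse the response and inspect only the top-level field
response = json.loads(raw)
api_call_failed = response.get('status') == 'error'

print(substring_says_failed, api_call_failed)  # True False
```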

Download list not showing

Hello,
I am currently having a problem where I only see an infinitely spinning wheel when going to the 'Home' page.
The debug log shows a bunch of:

12-27 16:33:26       apscheduler.scheduler                    : DEBUG    : Looking for jobs to run
12-27 16:33:26       apscheduler.scheduler                    : DEBUG    : Next wakeup is due at 2017-12-27 16:33:27.626472+01:00 (in 0.996570 seconds)
12-27 16:33:26       apscheduler.executors.default            : INFO     : Running job "update (trigger: interval[0:00:01], next run at: 2017-12-27 16:33:27 CET)" (scheduled at 2017-12-27 16:33:26.626472+01:00)
12-27 16:33:26       root                                     : DEBUG    : def update started
12-27 16:33:26       root                                     : DEBUG    : def prem_connection started
12-27 16:33:26       urllib3.connectionpool                   : DEBUG    : https://www.premiumize.me:443 "POST /api/transfer/list HTTP/1.1" 200 None
12-27 16:33:26       root                                     : DEBUG    : def parse_task started
12-27 16:33:26       apscheduler.executors.default            : ERROR    : Job "update (trigger: interval[0:00:01], next run at: 2017-12-27 16:33:27 CET)" raised an exception
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/apscheduler/executors/base.py", line 125, in run_job
    retval = job.func(*job.args, **job.kwargs)
  File "premiumizer.py", line 1065, in update
    idle = parse_tasks(transfers)
  File "premiumizer.py", line 1091, in parse_tasks
    task = get_task(transfer['hash'].encode("utf-8"))
KeyError: 'hash'

Which also repeats infinitely.

Any idea what is going on here?
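A hedged guess at the cause of the traceback above: the /api/transfer/list response apparently contains entries without a 'hash' key (perhaps plain file uploads rather than torrents), and parse_tasks indexes `transfer['hash']` unconditionally. A minimal sketch of the defensive pattern — `safe_hashes` is a hypothetical helper, not premiumizer's actual code:

```python
def safe_hashes(transfers):
    """Return the hashes of the transfers that actually carry one.

    Sketch only: skip entries where the API omitted 'hash' instead of
    letting a KeyError abort the whole scheduled update job.
    """
    hashes = []
    for transfer in transfers:
        transfer_hash = transfer.get('hash')  # None when the key is missing
        if transfer_hash:
            hashes.append(transfer_hash)
    return hashes
```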

Create premiumizer.db file if not present

I tested this on OSX according to the how-to:

12-18 16:23:11: ERROR : Uncaught exception
Traceback (most recent call last):
  File "./premiumizer.py", line 369, in <module>
    os.remove(os.path.join(runningdir, 'premiumizer.db'))

"touch premiumizer.db" fixed it

Can't download

Premiumizer has been working great for me for some time, but now it has suddenly stopped downloading. It still sends the magnet file to Premiumize.me, which downloads it into my cloud, but Premiumizer never downloads it to my local system. The Premiumizer home page loads but just displays "Loading Download Tasks" with the spinning gear.

After enabling Debug mode I found the following error over and over in the log:

11-20 17:52:24       apscheduler.scheduler                    : DEBUG    : Looking for jobs to run
11-20 17:52:24       apscheduler.scheduler                    : DEBUG    : Next wakeup is due at 2018-11-20 17:52:25.534319-05:00 (in 0.998979 seconds)
11-20 17:52:24       apscheduler.executors.default            : INFO     : Running job "update (trigger: interval[0:00:01], next run at: 2018-11-20 17:52:25 EST)" (scheduled at 2018-11-20 17:52:24.534319-05:00)
11-20 17:52:24       root                                     : DEBUG    : def update started
11-20 17:52:24       root                                     : DEBUG    : def prem_connection started
11-20 17:52:24       urllib3.connectionpool                   : DEBUG    : https://www.premiumize.me:443 "POST /api/transfer/list HTTP/1.1" 200 None
11-20 17:52:24       root                                     : DEBUG    : def parse_task started
11-20 17:52:24       root                                     : DEBUG    : def get_task started
11-20 17:52:24       apscheduler.executors.default            : ERROR    : Job "update (trigger: interval[0:00:01], next run at: 2018-11-20 17:52:25 EST)" raised an exception
Traceback (most recent call last):
  File "/home/openhab/premiumizer/env/local/lib/python2.7/site-packages/apscheduler/executors/base.py", line 125, in run_job
    retval = job.func(*job.args, **job.kwargs)
  File "/home/openhab/premiumizer/premiumizer.py", line 1227, in update
    idle = parse_tasks(transfers)
  File "/home/openhab/premiumizer/premiumizer.py", line 1310, in parse_tasks
    speed=speed + ' --- ', eta=eta, folder_id=folder_id, file_id=file_id)
UnboundLocalError: local variable 'eta' referenced before assignment
11-20 17:52:25       apscheduler.scheduler                    : DEBUG    : Looking for jobs to run

Any suggestions as to what might be happening? It was working fine until last night, at least that's when I noticed it.

Thanks!
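One hedged reading of the traceback above: `eta` (and possibly `speed`) are only assigned inside a conditional branch of parse_tasks, so an API response without progress details leaves them unbound when the task is updated. A sketch of the defensive pattern — the function and field names are assumptions, not premiumizer's actual code:

```python
def progress_fields(transfer):
    """Initialise eta/speed with defaults before any conditional
    assignment, so the later task update never hits
    UnboundLocalError when the API omits progress details."""
    eta = ''          # default when no ETA is reported
    speed = '0 B/s'   # default when no speed is reported
    if transfer.get('eta'):
        eta = str(transfer['eta'])
    if transfer.get('speed'):
        speed = str(transfer['speed'])
    return speed, eta
```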

Premiumize API authentication not working

For the last couple of days premiumizer has not been able to authenticate with the premiumize.me API. It looks like they made some undocumented change that broke authentication when sending a POST with the authentication parameters in the body. I can reproduce this by sending a simple POST request to one of the API endpoints from the command line; everything works fine when using a GET with the parameters in the URL.
Anybody else experiencing the same?
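For reference, a minimal sketch of the workaround described above: build a GET request with the credentials in the query string instead of a POST body. The parameter names follow the old premiumize.me API and may since have changed:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_list_request(customer_id, pin):
    """Sketch: credentials as GET query parameters, which reportedly
    still authenticates, unlike POST with parameters in the body."""
    query = urlencode({'customer_id': customer_id, 'pin': pin})
    url = 'https://www.premiumize.me/api/transfer/list?' + query
    return Request(url, method='GET')
```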

Activate Seeding for Private Trackers

This is more of a question (a wiki would be nice!)

Does Premiumizer submit torrents with the Activate Seeding for Private Trackers function? Or can this be an option?

Add an option to download files without cat assignment

Currently it's only possible to download files from the cloud that have a category assigned. It would be great if every finished download could be downloaded and deleted automatically. Additionally, it would be nice to have a button in the web interface labeled "Download all".

Downloads are "hanging"

Thanks for this software, which is amazing when it works. Unfortunately, something does not always work for me, and that is downloading files from the cloud using the built-in downloader. I don't know if this is an appropriate "issue" for this place, but I'll just try it.

I run premiumizer on Linux, using a virtual environment. All the requirements are met, as pip tells me. I use premiumize only for torrents, to supplement my main nzb setup, which is of the sabnzbd/sickrage/couchpotato variety. So I don't have many files for which I use premiumizer, and I've only installed it yesterday too.

Everything else besides downloading works fine, I can access the web interface, change settings etc.; having couchpotato and sickrage add torrents to the cloud via premiumizer also works fine.

I've seen premiumizer download torrents from the cloud as expected, run nzbToMedia with good results, and autodelete the torrent from the cloud. In short, everything working as it should. But for several torrents now, I see downloads kind of stalling. Which also means nzbToMedia is not called, and processing does not go forward.

In these cases, the download begins, and the torrent is shown as being downloaded to the computer, but the download process is not completed. The download progress, shown in %, may remain at 0 % or may stall at 11,5 %, 99,5 %, 99,6 % or 99,9 %. Files are being downloaded, but sometimes not all of them. Let's say there is a mkv, a txt and a nfo file in the torrent; I've had the txt and nfo downloaded, but not the mkv, download progress was shown to be 0 %. Or, a torrent with 10 mkv files. 2 mkv files are downloaded to the correct directory on my hard disk (complete files, not partial ones), but the rest are not. Download progress stalls at 10,8 %.

Another example: Torrent with main mkv, sample mkv and txt. Main mkv and txt are downloaded, sample mkv is not (not downloading samples is what I have in the category settings, so it is expected behavior). Download progress is shown to be 0 bytes of 1.42 GB. And one last example: Torrent has main avi, sample avi, one nfo and one txt file. nfo and txt are downloaded, the avis are not downloaded. Download progress is shown as 0 bytes of 1.37 GB.

Waiting does not help, the stalled downloads will not go forward. The only option is to stop the downloads.

Any idea what the problem is here? I've looked at the logs and debug logs, but I see nothing there that could explain this behavior. I could supply some if you think they're helpful.

Replace "+" with spaces

When I search for a movie with radarr, it looks for the original name in the download directory
Report sent to premiumizer. Guardians of the Galaxy (2014)
When premiumizer downloads the movie it replaces all spaces with "+"
Downloading: Guardians+of+the+Galaxy+(2014)
Radarr looks for the original filename it sent to premiumizer and not the one with the plus signs.
If I rename the folder to the right name, the movie appears in the activity tab from radarr and gets imported.

Could you add an option to replace "+" with spaces?

Thanks for the awesome software.
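If the "+" really comes from URL-encoding the torrent's display name, the requested conversion is nearly a one-liner via `urllib.parse.unquote_plus`. A sketch (the helper name is hypothetical; note that `unquote_plus` also decodes `%xx` escapes, so a plain `str.replace('+', ' ')` is the blunter alternative):

```python
from urllib.parse import unquote_plus

def clean_folder_name(name):
    """Decode a '+'-escaped download folder name back to the title
    Radarr originally sent."""
    return unquote_plus(name)
```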

Initial settings file incompatible

When I start a completely new instance of premiumizer by simply downloading the git repo and starting it, the program stops with this message:

01-03 16:33:08: INFO : Running at /premiumizer
01-03 16:33:08: ERROR : Uncaught exception
Traceback (most recent call last):
  File "premiumizer.py", line 334, in <module>
    cfg = PremConfig()
  File "premiumizer.py", line 185, in __init__
    self.check_config()
  File "premiumizer.py", line 298, in check_config
    cat_dir = os.path.join(self.download_location, y)
AttributeError: PremConfig instance has no attribute 'download_location'

The settings entry in question looks like this on my machine:

download_location =

This can obviously be prevented by using a settings file where the download directory is set, but it should work out of the box for an initial configuration.
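One way the config code could tolerate the missing entry is configparser's `fallback` argument, which covers both a missing option and a missing section. A sketch, under the assumption that an empty default is acceptable on first start (Python 3 `configparser`; the helper name is hypothetical):

```python
from configparser import ConfigParser

def read_download_location(cfg_file):
    """Return the configured download location, or '' when the
    option (or the whole file/section) is absent, so a fresh
    settings.cfg does not crash check_config."""
    parser = ConfigParser()
    parser.read(cfg_file)  # silently ignores a missing file
    return parser.get('downloads', 'download_location', fallback='')
```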

Upload torrent control.js

Some torrents do not have the MIME type set to application/x-bittorrent, so the upload control doesn't detect them as torrents. The check would need to change to just look at the file extension:

$('#torrent-file-upload').on('change', function (e) {
    e.preventDefault();
    var files = $(this).prop('files');
    if (files.length > 0) {
        var file = files[0];
        if (file.type == 'application/x-bittorrent') {
            uploadTorrent(file);
        } else {
            alert('Nope, not a torrent file...');
        }
    }
    $('#torrent-file-upload[type="file"]').val(null);
})

I tried with stuff from:
http://stackoverflow.com/questions/190852/how-can-i-get-file-extensions-with-javascript

but can't get it to work.

Issue with Blackhole

Premiumizer stops working after around 12-24 hours. I have to restart it so it rescans the folder.
What do you need for debugging?

Updating to latest version failed

Hi, I ran Update from the web interface, but it never came back and the terminal presented me with the below:

10-13 15:06:46: INFO : Starting server on 192.168.1.2:5000
Updating 55b9ba8..bf5673f
error: Your local changes to the following files would be overwritten by merge:
premiumizer.py
requirements.txt
settings.cfg.tpl
templates/index.html
Please commit your changes or stash them before you merge.
Aborting

thank you

Feature request

It would be good if we could auto-download items that were downloaded via the premiumize.me RSS feed.

i.e. P.me auto-downloads via its RSS feed, and Premiumizer then watches the folder and downloads the items locally.
Then it ideally deletes the file a day later to stop P.me from re-downloading it.

Yay? or Nay?

Startup fails because of missing setting

After the latest update (#119) my Premiumizer doesn't start any more. This is the log output:

06-25 08:29:19: ERROR : Uncaught exception
Traceback (most recent call last):
  File "C:\Diverses\Premiumizer\Premiumizer\Premiumizer.py", line 347, in <module>
    cfg = PremConfig()
  File "C:\Diverses\Premiumizer\Premiumizer\Premiumizer.py", line 186, in __init__
    self.check_config()
  File "C:\Diverses\Premiumizer\Premiumizer\Premiumizer.py", line 208, in check_config
    self.remove_cloud_delay = prem_config.getint('downloads', 'remove_cloud_delay')
  File "C:\Diverses\Premiumizer\Python\lib\ConfigParser.py", line 359, in getint
    return self._get(section, int, option)
  File "C:\Diverses\Premiumizer\Python\lib\ConfigParser.py", line 356, in _get
    return conv(self.get(section, option))
  File "C:\Diverses\Premiumizer\Python\lib\ConfigParser.py", line 340, in get
    raise NoOptionError(option, section)
NoOptionError: No option 'remove_cloud_delay' in section: 'downloads'

After I added "remove_cloud_delay" to the settings file (copied from the template settings file), the same error happened for "download_rss". After I added both, everything seems to be working again.
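The manual fix above (copying missing options over from the template) could also be automated at startup. A sketch, with file handling simplified and the function name hypothetical:

```python
from configparser import ConfigParser

def merge_missing_options(settings_file, template_file):
    """Copy every option that exists in settings.cfg.tpl but not in
    settings.cfg, so settings added by an update don't crash
    check_config with NoOptionError."""
    tpl = ConfigParser()
    tpl.read(template_file)
    cfg = ConfigParser()
    cfg.read(settings_file)
    for section in tpl.sections():
        if not cfg.has_section(section):
            cfg.add_section(section)
        for option, value in tpl.items(section):
            if not cfg.has_option(section, option):
                cfg.set(section, option, value)
    with open(settings_file, 'w') as fh:
        cfg.write(fh)
```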

License?

Are you planning on releasing this software under a free license like MIT or GPL?
