

redbeat's Issues

example is needed

Hi, I think this is a good project and I want to use it in my scheduled-job system.
However, I ran into some problems while using it. Could you offer a small but complete example project? I know how to set the config and the schedule info and so on, but I still can't run it successfully. Thank you!
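
For reference, a minimal self-contained sketch of such a project (assuming Celery 4 setting names and a local Redis; the module and task names are illustrative):

tasks.py

# A minimal sketch, assuming a local Redis on the default port.
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')
app.conf.update(
    redbeat_redis_url='redis://localhost:6379/1',
    beat_scheduler='redbeat.RedBeatScheduler',
)

@app.task
def say_hello():
    print('hello')

# Run with:
#   celery -A tasks beat -l info
#   celery -A tasks worker -l info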

RedBeatScheduler does not receive lock_timeout from app config

After updating exampleconf.py to include:

REDBEAT_LOCK_TIMEOUT = 15

Running PYTHONPATH=. celery beat --config exampleconf still shows the default lock_timeout:

(.venv) Arics-MacBook-Pro:redbeat aric$ PYTHONPATH=. celery beat --config exampleconf
celery beat v4.0.2 (latentcall) is starting.
__    -    ... __   -        _
LocalTime -> 2017-06-16 12:04:19
Configuration ->
    . broker -> redis://localhost:6379//
    . loader -> celery.loaders.default.Loader
    . scheduler -> redbeat.schedulers.RedBeatScheduler
       . redis -> redis://
       . lock -> `redbeat::lock` 25.00 minutes (1500s)
    . logfile -> [stderr]@%WARNING
    . maxinterval -> 5.00 seconds (5s)

Checking the redbeat::lock key in Redis, it shows a TTL of 1500s rather than the configured 15s.
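
A quick way to confirm what RedBeat actually applied is to read the TTL directly with redis-py (a sketch, assuming a local Redis):

import redis

r = redis.Redis.from_url('redis://localhost:6379/0')
print(r.ttl('redbeat::lock'))  # expected 15 after the config change; observed 1500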

RedBeat produces tasks normally, but the worker doesn't consume them

Hi, I'm using Celery 4.0 and RedBeat.

celeryconfig.py

from datetime import timedelta
from kombu import Queue

from main.const import ScheduleInterval

BROKER_URL = 'redis://192.168.1.166/0'

CELERY_RESULT_BACKEND = 'redis://192.168.1.166/0'

redbeat_redis_url = "redis://192.168.1.166/0"

CELERYBEAT_SCHEDULER = 'redbeat.RedBeatScheduler'

redbeat_lock_timeout = 300

CELERYBEAT_SCHEDULE = {
    'publish': {
        'task': 'main.tasks.cycle_publish',  # notice that the complete name is needed
        # 'task': 'main.tasks.add',  # notice that the complete name is needed
        'schedule': timedelta(seconds=ScheduleInterval.PUBLISH),
    },
    'free_worker': {
        'task': 'main.tasks.cycle_worker',
        'schedule': timedelta(seconds=ScheduleInterval.SCAN_TASK)
    }
}

CELERY_QUEUES = (
    Queue('default', routing_key='task.#'),
    Queue('beater', routing_key='beater.#'),
    Queue('exe_free', routing_key='free.#'),
    Queue('exe_point', routing_key='point.#'),
    Queue('exe_auto', routing_key='auto.#'),
    Queue('publish', routing_key='auto.#'),
)

I started beat with the command below:

celery -A main beat -S redbeat.RedBeatScheduler --config main.celeryconfig --loglevel=info

and everything looks normal:

[2017-10-25 17:27:27,201: INFO/MainProcess] Scheduler: Sending due task publish (main.tasks.cycle_publish)

Then I tried to start the worker to consume tasks with the command below:

celery worker -A main -l info

It looks like the worker started normally, but it never consumes tasks:

[tasks]
. main.tasks.auto_rotate
. main.tasks.cycle_publish
. main.tasks.cycle_worker
. main.tasks.end_auto_worker
. main.tasks.free_worker
. main.tasks.start_auto_worker

[2017-10-25 17:08:12,568: INFO/MainProcess] Connected to redis://192.168.1.166:6379/0
[2017-10-25 17:08:12,584: INFO/MainProcess] mingle: searching for neighbors
[2017-10-25 17:08:13,637: INFO/MainProcess] mingle: all alone
[2017-10-25 17:08:13,750: INFO/MainProcess] [email protected] ready.

rrule: first run delayed

@concreted: When I use a recurrence rule, the first run doesn't happen at the expected time.

Example (as you can see, an extra first run fires at 21:19:58, just before the expected 21:20:03, without a clear reason):

test.py

from celery import Celery

app = Celery()

@app.task
def test():
    print('test')

run.py

from redbeat.schedules import rrule
from redbeat import RedBeatSchedulerEntry
from test import test, app

if __name__ == '__main__':
    schedule = rrule('MINUTELY', interval=3, count=4)
    print(schedule) # <rrule: freq: 5, dtstart: 2017-12-09 20:17:03.882+00:00, interval: 3, count: 4, ...>
    entry = RedBeatSchedulerEntry('test', test.name, schedule, app=app)
    entry.save()

out.log

[2017-12-09 21:19:58,320: INFO/MainProcess] Received task: test.test[7298979f-333e-4aae-b83d-f09cd7e2e7c0]
[2017-12-09 21:20:03,028: INFO/MainProcess] Received task: test.test[832d9876-8171-4c58-9835-ad5013e4b95d]
[2017-12-09 21:23:03,016: INFO/MainProcess] Received task: test.test[b7de89c3-c010-47de-a8c2-49350ff58050]
[2017-12-09 21:26:03,018: INFO/MainProcess] Received task: test.test[1050295c-0ea8-4824-89b2-ac71efa2c1ba]
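
A possible workaround, not confirmed by this issue: pin the rule's start time explicitly so the first occurrence is unambiguous. A sketch:

from datetime import datetime, timezone
from redbeat.schedules import rrule

# Assumption: an explicit tz-aware dtstart avoids the extra early run.
start = datetime(2017, 12, 9, 21, 20, 3, tzinfo=timezone.utc)
schedule = rrule('MINUTELY', dtstart=start, interval=3, count=4)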

how to add tasks?

py.test:

(app_env) RMBP➜ redbeat git:(master) ✗ >py.test tests
======================================== test session starts ========================================
platform darwin -- Python 2.7.10, pytest-3.0.3, py-1.4.31, pluggy-0.4.0
rootdir: /Users/michael/Development/redbeat, inifile:
plugins: catchlog-1.2.2, cov-2.3.1
collected 20 items

tests/test_entry.py ..........
tests/test_json.py ......
tests/test_scheduler.py ...
tests/test_utils.py .

celery beat:

[2016-10-10 01:15:23,716: INFO/MainProcess] Loading 0 tasks
[2016-10-10 01:15:28,711: INFO/MainProcess] Loading 0 tasks
[2016-10-10 01:15:33,707: INFO/MainProcess] Loading 0 tasks
[2016-10-10 01:15:38,698: INFO/MainProcess] Loading 0 tasks
[2016-10-10 01:15:43,691: INFO/MainProcess] Loading 0 tasks
[2016-10-10 01:15:48,684: INFO/MainProcess] Loading 0 tasks
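
Tasks are added by saving entries; a passing test suite does not create any. A minimal sketch, assuming a configured app (module and task names are illustrative):

from celery.schedules import schedule
from redbeat import RedBeatSchedulerEntry
from tasks import app  # hypothetical module holding the configured Celery app

interval = schedule(run_every=10)  # seconds
entry = RedBeatSchedulerEntry('say-hello', 'tasks.say_hello', interval, app=app)
entry.save()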

Duplicate tasks

https://github.com/virusdefender/redbeat_test

app.conf.beat_schedule = {
    'task': {
        'task': 'app.tasks.task_run',
        'schedule': crontab(hour=8, minute=0)
    },
}

Set your computer's clock to 07:59:00 (UTC timezone)

Then start the worker and redbeat process

celery -A redbeat_test worker -l debug

celery beat -S redbeat.RedBeatScheduler -A redbeat_test -l debug


You will see a task submitted as soon as redbeat starts, at 2018-02-01 07:59:41,550.

PS:

  • Maybe you need to flush redis between tests
  • It seems that a fresh task with last_run_at == None entering the maybe_due logic is what causes the problem

Crontab Jobs being run twice at expected time

I am trying out redbeat with a test project, since running crontab jobs at specific times via Celery is a core requirement of a project I am working on.

I liked the concept of storing these jobs in real time in Redis, as it offers a resilient way to ensure jobs stay in the scheduler and are picked up as configuration changes in near real time.

I am using the example configuration file from your project.

If I enter a job using the redbeat scheduler, it fires twice at the expected time instead of once:

interval = celery.schedules.crontab(minute=2, hour=14)
entry = RedBeatSchedulerEntry('test task 4', 'tasksalt.example', interval, args=['testarg1', 'testarg2'], app=app)
entry.save()

Beat shows:
[2018-05-16 10:01:55,672: INFO/MainProcess] Scheduler: Sending due task test task 4 (tasksalt.example)
[2018-05-16 10:01:55,705: DEBUG/MainProcess] tasksalt.example sent. id->2d404bc8-24d6-49c9-8312-62be48377e08
[2018-05-16 10:01:55,705: DEBUG/MainProcess] beat: Waking up in 4.33 seconds.
[2018-05-16 10:02:00,037: DEBUG/MainProcess] beat: Extending lock...
[2018-05-16 10:02:00,038: DEBUG/MainProcess] Selecting tasks
[2018-05-16 10:02:00,039: INFO/MainProcess] Loading 1 tasks
[2018-05-16 10:02:00,041: INFO/MainProcess] Scheduler: Sending due task test task 4 (tasksalt.example)
[2018-05-16 10:02:00,043: DEBUG/MainProcess] tasksalt.example sent. id->fe15ba6e-c29a-4756-82c6-d0cf96d032d2
[2018-05-16 10:02:00,043: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[2018-05-16 10:02:05,049: DEBUG/MainProcess] beat: Extending lock...
[2018-05-16 10:02:05,050: DEBUG/MainProcess] Selecting tasks

Celery shows the job being executed twice. I've tried making the polling interval both longer and shorter, and it makes no difference.

I also tried an hourly crontab job at a particular minute; it fired twice, then settled into once per hour after that. Is some sort of initial evaluation / state being missed? The job appears to be fine after its first run.

Provide a way to reset a task when it failed

For some reason we have tasks in redbeat that are not running.

While debugging the reason why these tasks are not running, we would like to restart a task without waiting out the interval between runs (in some of our use cases it's 48 hours).

Looking at ZRANGE redbeat::schedule, we figured that the sorted set would be consumed in timestamp order.

We experimented with ZINCRBY with a negative number, which let us force a task to run earlier than it was supposed to.

Then we wrote a small script to generate some ZADD XX commands, e.g. ZADD redbeat::schedule XX 1546968532 name-of-task. This approach doesn't seem to work. Any idea what we are doing wrong? Is there a better way of doing this?
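
One assumption worth checking: the members of the redbeat::schedule sorted set are full entry keys (e.g. redbeat:name-of-task), not bare task names, so a ZADD against the bare name would silently add a new member instead of rescheduling the existing one. A sketch using redis-py 3.x:

import time
import redis

r = redis.Redis.from_url('redis://localhost:6379/0')
# Assumption: the member must match the entry key, default prefix 'redbeat:'.
r.zadd('redbeat::schedule', {'redbeat:name-of-task': time.time()})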

Support for sentinel

Hi, we're using celery-redbeat with a redis server that will soon have a sentinel cluster around it, and we would love to have redbeat support sentinel failover mechanisms (switching the redis connection to the new master when the existing master goes down).

For now, other parts of our celery setup just add https://github.com/dealertrack/celery-redis-sentinel which works well.

We also have some redis clients that directly use the sentinel support in redis-py: https://github.com/andymccurdy/redis-py#sentinel-support
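
For reference, a sketch of what following a failover looks like with plain redis-py (host names and the 'mymaster' service name are illustrative):

from redis.sentinel import Sentinel

sentinel = Sentinel([('sentinel-host-1', 26379), ('sentinel-host-2', 26379)])
master = sentinel.master_for('mymaster')  # re-resolves the master after failover
master.ping()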

Redbeat doesn't work with Celery `4.2.0rc1`


During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/circleci/.local/share/virtualenvs/repo-eQF46Ow3/lib/python3.6/site-packages/_pytest/config.py", line 371, in _importconftest
    mod = conftestpath.pyimport()
  File "/home/circleci/.local/share/virtualenvs/repo-eQF46Ow3/lib/python3.6/site-packages/py/_path/local.py", line 668, in pyimport
    __import__(modname)
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
  File "/home/circleci/.local/share/virtualenvs/repo-eQF46Ow3/lib/python3.6/site-packages/_pytest/assertion/rewrite.py", line 213, in load_module
    py.builtin.exec_(co, mod.__dict__)
  File "/home/circleci/repo/tests/conftest.py", line 7, in <module>
    from myapp.app import create_app
  File "/home/circleci/repo/myapp/app.py", line 19, in <module>
    from myapp.settings import ProdConfig
  File "/home/circleci/repo/myapp/settings.py", line 7, in <module>
    from redbeat.decoder import RedBeatJSONDecoder
  File "/home/circleci/.local/share/virtualenvs/repo-eQF46Ow3/lib/python3.6/site-packages/redbeat/__init__.py", line 3, in <module>
    from .schedulers import RedBeatScheduler, RedBeatSchedulerEntry  # noqa
  File "/home/circleci/.local/share/virtualenvs/repo-eQF46Ow3/lib/python3.6/site-packages/redbeat/schedulers.py", line 39, in <module>
    CELERY_4_OR_GREATER = StrictVersion(celery_version) >= StrictVersion('4.0')
  File "/usr/local/lib/python3.6/distutils/version.py", line 40, in __init__
    self.parse(vstring)
  File "/usr/local/lib/python3.6/distutils/version.py", line 137, in parse
    raise ValueError("invalid version number '%s'" % vstring)
ValueError: invalid version number '4.2.0rc1'

A similar problem from another project:

numpy/numpy#7697

So I guess distutils.version.StrictVersion is not an ideal tool for parsing the Celery version.
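
A sketch of a more tolerant check using pkg_resources instead (parse_version accepts pre-release suffixes like rc1):

from pkg_resources import parse_version
from celery import __version__ as celery_version

CELERY_4_OR_GREATER = parse_version(celery_version) >= parse_version('4.0')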

lock cannot be disabled

Cannot disable lock.

Setting

redbeat_lock_key = None

is not working.

It uses either_or with a default value, so setting it to None has no effect.
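
A sketch of the pattern being described (illustrative, not redbeat's exact code): falsy values fall through to the default, so None can never win.

def either_or(config, name, default):
    # A falsy configured value (None, '', 0) is replaced by the default,
    # which is why redbeat_lock_key = None cannot disable the lock.
    return getattr(config, name, None) or default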

Redis Connection Retry

If Redis is temporarily unavailable, celery beat with RedBeatScheduler fails with exceptions.
It would be better to implement celery beat's default behaviour: beat should retry connecting to Redis with an increasing interval.
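
A sketch of the suggested behaviour, wrapping the scheduler's Redis calls in a retry helper (illustrative, not redbeat's actual code):

import time
import redis

def call_with_retries(fn, max_retries=5, base_delay=1.0):
    for attempt in range(max_retries):
        try:
            return fn()
        except redis.exceptions.ConnectionError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # increasing backoff interval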

RedBeatSchedulerEntry doesn't work without specifying the app

Hi,

I'm playing with redbeat, but I ran into an error with code similar to the tutorial code. Here is the minimal code to reproduce it:

tasks.py

from celery import Celery

app = Celery()
app.config_from_object('redbeatconf')

@app.task
def test(arg):
    print(arg)

redbeatconf.py

redbeat_redis_url = "redis://localhost:6379/1"
beat_scheduler = "redbeat.RedBeatScheduler"

test.py

from celery.schedules import schedule, crontab
from redbeat import RedBeatSchedulerEntry

if __name__ == '__main__':
    entry = RedBeatSchedulerEntry('say-hello', 'tasks.test',
                                  schedule(run_every=10), args=['hello world !'])
    entry.save()

I get this error:

$ python3 test.py
Traceback (most recent call last):
  File "test.py", line 10, in <module>
    entry.save()
  File "/usr/local/lib/python3.5/dist-packages/redbeat/schedulers.py", line 209, in save
    pipe.hset(self.key, 'definition', json.dumps(definition, cls=RedBeatJSONEncoder))
  File "/usr/local/lib/python3.5/dist-packages/redbeat/schedulers.py", line 188, in key
    return self.app.redbeat_conf.key_prefix + self.name
AttributeError: 'NoneType' object has no attribute 'redbeat_conf'

To avoid this error, I need (in test.py):

  1. to create or import a valid app
  2. to add app=app to the RedBeatSchedulerEntry arguments.

Is this normal?

Thank you
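
For reference, a corrected test.py applying the workaround described above:

from celery.schedules import schedule
from redbeat import RedBeatSchedulerEntry
from tasks import app  # import the configured app

if __name__ == '__main__':
    entry = RedBeatSchedulerEntry('say-hello', 'tasks.test',
                                  schedule(run_every=10),
                                  args=['hello world !'], app=app)
    entry.save()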

Support for iCal recurrence rules (RRULE)

I would like to add support for schedules defined with RRules based on the iCal RFC (https://icalendar.org/iCalendar-RFC-5545/3-8-5-3-recurrence-rule.html). I think the scope of work involved would be:

  1. Implement an RRuleSchedule class that inherits from celery.schedules.BaseSchedule and implements the remaining_estimate and is_due methods (using dateutil.rrule)
  2. Update RedBeatJSONDecoder and RedBeatJSONEncoder to handle the new class
  3. To handle RRules that end after a certain date or a certain number of occurrences, add a way to mark schedules as 'inactive' after their last occurrence, so Redbeat will ignore them. Not sure exactly what this looks like yet

@sibson How does this sound to you? Does the scope of work look accurate or is there anything else that would be needed? Interested in your thoughts on how to best implement point 3.

I'm actively working on a project using Redbeat, and having RRULE support directly in Redbeat would make things easier for my use case (avoid having to convert RRULEs to cron timings, being able to let Redbeat take care of schedules that end after some time instead of adding logic externally). If you think this is worth doing in mainline Redbeat I can have a WIP PR later this week for you to look at.
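
A rough skeleton of point 1 (a sketch only, assuming Celery 4's BaseSchedule and dateutil; error handling and timezone details omitted):

from celery.schedules import BaseSchedule, schedstate

class RRuleSchedule(BaseSchedule):
    def __init__(self, rule, **kwargs):
        self.rule = rule  # a dateutil.rrule.rrule instance
        super().__init__(**kwargs)

    def remaining_estimate(self, last_run_at):
        next_occurrence = self.rule.after(last_run_at)
        if next_occurrence is None:
            return None  # rule exhausted: point 3, mark the schedule inactive
        return next_occurrence - self.now()

    def is_due(self, last_run_at):
        rem = self.remaining_estimate(last_run_at)
        if rem is None:
            return schedstate(is_due=False, next=None)
        if rem.total_seconds() <= 0:
            following = self.remaining_estimate(self.now())
            next_secs = following.total_seconds() if following else None
            return schedstate(is_due=True, next=next_secs)
        return schedstate(is_due=False, next=rem.total_seconds())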

AttributeError: 'PersistentScheduler' object has no attribute 'lock_key'

Hi,
A newbie here... I'm seeing the following error when trying to insert a task directly into redis using zadd. Here is what I have:

app = Flask(__name__)
app.config['broker_url'] = 'redis://localhost:6379/0'
app.config['result_backend'] = 'redis://localhost:6379/0'
app.config['redbeat_redis_url'] = "redis://localhost:6379/1"
app.config['CELERY_BEAT_SCHEDULER '] = 'RedBeatScheduler'
app.config['beat_max_loop_interval'] = 50
app.config['result_expires'] = 60
celery = Celery(app.name, broker=app.config['broker_url'])
celery.conf.update(app.config)
redis_url = 'redis://localhost:6379/0'
r = redis.Redis.from_url( redis_url )
score = utc_time.timestamp + 450
r.zadd('RedBeatScheduler', 'copdAlerts',score)

%celery worker -A test_v1.celery --loglevel=debug -E -B -S RedBeatScheduler

[2018-03-26 22:24:26,039: ERROR/Beat] Signal handler <function acquire_distributed_beat_lock at 0x7f80b0dd3c08> raised: AttributeError("'PersistentScheduler' object has no attribute 'lock_key'",)
Traceback (most recent call last):
File "/home/kvb/.virtualenvs/Notifications/local/lib/python2.7/site-packages/celery/utils/dispatch/signal.py", line 233, in send
response = receiver(signal=self, sender=sender, **named)
File "/home/kvb/.virtualenvs/Notifications/local/lib/python2.7/site-packages/redbeat/schedulers.py", line 422, in acquire_distributed_beat_lock
AttributeError: 'PersistentScheduler' object has no attribute 'lock_key'

Not sure what else to debug or where to look... I checked and the object doesn't have lock_key. Any suggestions?
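
An assumption based on the traceback: beat fell back to PersistentScheduler, so the scheduler should be given by its full dotted path rather than the bare class name, e.g.

celery.conf.beat_scheduler = 'redbeat.RedBeatScheduler'
# and on the command line: -S redbeat.RedBeatScheduler instead of -S RedBeatScheduler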

Exception raised when calling task.revoke()

We are getting exceptions in the celery-redbeat logs when using task.revoke():

[2016-08-11 15:10:55,512: INFO/MainProcess] beat: Starting...
[2016-08-11 15:10:55,537: CRITICAL/MainProcess] beat raised exception <type 'exceptions.AttributeError'>: AttributeError("'NoneType' object has no attribute 'now'",)
Traceback (most recent call last):
File ".../eggs/celery-3.1.18-py2.7.egg/celery/apps/beat.py", line 112, in start_scheduler
beat.start()
File ".../eggs/celery-3.1.18-py2.7.egg/celery/beat.py", line 454, in start
humanize_seconds(self.scheduler.max_interval))
File ".../eggs/kombu-3.0.35-py2.7.egg/kombu/utils/__init__.py", line 325, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File ".../eggs/celery-3.1.18-py2.7.egg/celery/beat.py", line 494, in scheduler
return self.get_scheduler()
File ".../eggs/celery-3.1.18-py2.7.egg/celery/beat.py", line 489, in get_scheduler
lazy=lazy)
File ".../eggs/celery-3.1.18-py2.7.egg/celery/utils/imports.py", line 53, in instantiate
return symbol_by_name(name)(*args, **kwargs)
File ".../eggs/celery_redbeat-0.9.2-py2.7.egg/redbeat/schedulers.py", line 187, in __init__
super(RedBeatScheduler, self).__init__(app, **kwargs)
File ".../eggs/celery-3.1.18-py2.7.egg/celery/beat.py", line 185, in __init__
self.setup_schedule()
File ".../eggs/celery_redbeat-0.9.2-py2.7.egg/redbeat/schedulers.py", line 196, in setup_schedule
RedBeatSchedulerEntry(name).delete()
File ".../eggs/celery_redbeat-0.9.2-py2.7.egg/redbeat/schedulers.py", line 65, in __init__
args=args, kwargs=kwargs, **clsargs)
File ".../eggs/celery-3.1.18-py2.7.egg/celery/beat.py", line 94, in __init__
self.last_run_at = last_run_at or self._default_now()
File ".../eggs/celery-3.1.18-py2.7.egg/celery/beat.py", line 98, in _default_now
return self.schedule.now() if self.schedule else self.app.now()
AttributeError: 'NoneType' object has no attribute 'now'

Here is a patch that seemed to solve the problem:

--- redbeat/schedulers.py 2016-08-11 14:48:48.000000000 -0400
+++ redbeat/schedulers.py 2016-08-11 14:49:35.000000000 -0400
@@ -193,7 +193,7 @@
     current = set(self.app.conf.CELERYBEAT_SCHEDULE.keys())
     removed = previous - current
     for name in removed:
-        RedBeatSchedulerEntry(name).delete()
+        RedBeatSchedulerEntry(name, app=self.app).delete()

     # setup statics
     self.install_default_entries(self.app.conf.CELERYBEAT_SCHEDULE)
    

Beat not picking up configuration on startup

I am using celery inside a docker container and I am using RedBeatScheduler. The command I use to start celery is

celery -E -A myapp.taskapp worker --beat --scheduler redbeat.schedulers:RedBeatScheduler --loglevel INFO --uid taskmaster --concurrency=5

But the problem is that during startup beat gets the following error and stops working:

(screenshot of the startup traceback)

Based on the stacktrace, it is clear that the conf object's redis_url property is not set at startup, but when I log in and check it manually the property is correctly set. I am unsure if the issue is in redbeat or celery itself.

Issues with Redis Cluster

Is saving/fetching jobs meant to work with clustered Redis? I get the following errors when pointing Redbeat to a cluster. The same code works fine with non-clustered Redis.

Saving a job:

  File "<Test Code>", line 26, in register_crontab_job
    RedBeatSchedulerEntry(name, task_name, cron, args=args, kwargs=kwargs, app=app).save()
  File "/usr/local/lib/python2.7/site-packages/redbeat/schedulers.py", line 211, in save
    pipe.execute()
  File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 2626, in execute
    return execute(conn, stack, raise_on_error)
  File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 2523, in _execute_transaction
    raise errors[0][1]
ResponseError: Command # 1 (HSET redbeat:foobar definition {"task": "echo", "name": "foobar", "schedule": {"hour": "*", "__type__": "crontab", "day_of_month": "*", "day_of_week": "*", "month_of_year": "*", "minute": "*"}, "args": ["foobar"], "enabled": true, "kwargs": null, "options": {}}) of pipeline caused error: MOVED 8184 <HOST>:6379

Fetching a job:

  File "<Test Code>", line 31, in get_periodic_job
    job = RedBeatSchedulerEntry.from_key(name, app=self.app)
  File "/usr/local/lib/python2.7/site-packages/redbeat/schedulers.py", line 157, in from_key
    definition, meta = pipe.execute()
  File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 2626, in execute
    return execute(conn, stack, raise_on_error)
  File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 2523, in _execute_transaction
    raise errors[0][1]
ResponseError: Command # 1 (HGET redbeat:foobar definition) of pipeline caused error: MOVED 8184 172.30.2.228:6379

Also, if I select a database in the redbeat_redis_url, I get this error:

  File "<Test Code>", line 26, in register_crontab_job
    RedBeatSchedulerEntry(name, task_name, cron, args=args, kwargs=kwargs, app=app).save()
  File "/usr/local/lib/python2.7/site-packages/redbeat/schedulers.py", line 211, in save
    pipe.execute()
  File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 2626, in execute
    return execute(conn, stack, raise_on_error)
  File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 2495, in _execute_transaction
    connection.send_packed_command(all_cmds)
  File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 538, in send_packed_command
    self.connect()
  File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 446, in connect
    self.on_connect()
  File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 520, in on_connect
    if nativestr(self.read_response()) != 'OK':
  File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 582, in read_response
    raise response
ResponseError: SELECT is not allowed in cluster mode
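
A possible (untested) workaround sketch: Redis Cluster requires every key in a MULTI/EXEC pipeline to hash to the same slot, and a hash-tagged key prefix forces that:

# Assumption: all redbeat keys share the configured prefix, so the {redbeat}
# hash tag places them all in one cluster slot.
REDBEAT_KEY_PREFIX = '{redbeat}:'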

Clear tasks from Redis

Is there a way to flush all tasks from the redis queue, or maybe get all redbeat keys in order to programmatically delete them?
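
A sketch of doing this with a key scan, assuming the default 'redbeat:' prefix (note this also matches the internal redbeat::schedule and redbeat::lock keys):

import redis

r = redis.Redis.from_url('redis://localhost:6379/1')
for key in r.scan_iter(match='redbeat:*'):
    r.delete(key)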

can't upgrade to redis 3.0

Hi,

The new version of python redis (3.0) is not backward compatible, and upgrading causes problems.

>>> entry.save()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/redbeat/schedulers.py", line 274, in save
    pipe.zadd(self.app.redbeat_conf.schedule_key, self.score, self.key)
  File "/usr/local/lib/python3.5/dist-packages/redis/client.py", line 2263, in zadd
    for pair in iteritems(mapping):
  File "/usr/local/lib/python3.5/dist-packages/redis/_compat.py", line 123, in iteritems
    return iter(x.items())
AttributeError: 'int' object has no attribute 'items'
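
For context, redis-py 3.0 changed zadd to take a mapping of members to scores; a sketch of the two call shapes:

import redis

r = redis.Redis.from_url('redis://localhost:6379/1')
# redis-py < 3.0:  r.zadd('redbeat::schedule', 1546968532.0, 'redbeat:some-task')
# redis-py >= 3.0:
r.zadd('redbeat::schedule', {'redbeat:some-task': 1546968532.0})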

Support for Redis SSL connections

Does redbeat support SSL connections to redis?

In many cases such as Microsoft Azure's redis caches, SSL connections are required by default.
~ https://docs.microsoft.com/en-us/azure/redis-cache/cache-configure#access-ports

I noticed that Celery now has settings that allow SSL to be enabled but I can't seem to find anything in this repository that suggests that redbeat might support this type of connection.
~ http://docs.celeryproject.org/en/latest/userguide/configuration.html#redis-backend-use-ssl
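
Since redbeat hands the URL to redis-py, an SSL connection may simply be a matter of using the rediss:// scheme that redis-py understands (an assumption, untested; host and password are placeholders):

redbeat_redis_url = 'rediss://:password@myname.redis.cache.windows.net:6380/0'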

Getting Error NoneType has no attribute 'redbeat_conf'?

I'm assuming it works on Windows and that there is something wrong with my script.
I'm also initializing the celery object from within the code. See below.

#Add to config
CELERY_ACCEPT_CONTENT = ['json', 'pickle']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACKS_LATE = True
CELERY_PREFETCH_MULTIPLIER = 1
CELERY_CREATE_MISSING_QUEUES = True
REDBEAT_REDIS_URL = "x"
REDBEAT_KEY_PREFIX = "readbeat"
REDBEAT_LOCK_TIMEOUT = 100
CELERYBEAT_SCHEDULER = 'redbeat.RedBeatScheduler'
CELERYBEAT_MAX_LOOP_INTERVAL = 5  # redbeat likes fast loops



app = Celery('test_celery',
             broker='x',
             backend='x',
             include=['test_celery.tasks'],
             redis_max_connections=3,
             result_persistent=True,
             enable_utc=True,
             timezone="UTC",
             redis_socket_timeout=15,
             event_serializer='json',
             task_serializer='json',
             acks_late=True,
             prefetch_multiplier=1,
             create_missing_queues=True,
             schedule=RedBeatScheduler,
)

# Tried this. No error, but nothing happens.
app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.email_check',
        'schedule': 30.0,
        'args': None
    },
}

# This threw an error.
interval = celery.schedules.schedule(run_every=30)  # seconds
entry = RedBeatSchedulerEntry('checkemails-task', 'tasks.email_check', interval,)
entry.save()


Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "C:\Users\Casey\Envs\site-website\lib\site-packages\redbeat\schedulers.py", line 209, in save
    pipe.hset(self.key, 'definition', json.dumps(definition, cls=RedBeatJSONEncoder))
  File "C:\Users\Casey\Envs\site-website\lib\site-packages\redbeat\schedulers.py", line 188, in key
    return self.app.redbeat_conf.key_prefix + self.name
AttributeError: 'NoneType' object has no attribute 'redbeat_conf'

The following launch without errors:

celery -A test_celery beat -S redbeat.RedBeatScheduler
celery -A test_celery worker -E -l INFO -n workerA --loglevel=DEBUG --concurrency=1 -Ofair -c 3 -Q high,default
celery -A test_celery worker -E -l INFO -n workerB --loglevel=DEBUG --concurrency=1 -Ofair -c 3 -Q low
celery -A test_celery flower --port=2525 --persistent=true
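
As with the earlier 'redbeat_conf' report, a sketch of the workaround: pass the configured app explicitly when creating the entry.

import celery.schedules
from redbeat import RedBeatSchedulerEntry

interval = celery.schedules.schedule(run_every=30)  # seconds
entry = RedBeatSchedulerEntry('checkemails-task', 'tasks.email_check',
                              interval, app=app)  # app from the snippet above
entry.save()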

Publish version v0.13.0

I've noticed that setup.py has the version bumped to 0.13.0, with the fix supporting redis-py 3 (f59ac38 and c526983).

Are there any plans or obstacles to release a tag and publish this version? Any help needed?

Celery 4.* Compatibility

Currently, importing redbeat.RedBeatScheduler fails when using Celery 4.0.2, because celery.utils.timeutils has been renamed to celery.utils.time. Then, the default schedule is ignored, because the configuration key has been renamed from CELERYBEAT_SCHEDULE to beat_schedule. And that looks like just the tip of the iceberg.

Meanwhile, it may or may not be worth supporting multiple celery versions with a single codebase, but it's probably a good idea to document which versions are supported.
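
A sketch of the usual shim for the rename (one of several that would be needed):

try:
    from celery.utils.time import humanize_seconds  # Celery 4+
except ImportError:
    from celery.utils.timeutils import humanize_seconds  # Celery 3.x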

OverflowError('date value out of range',) when CELERY_TIMEZONE has negative UTC offset

This happens when a task is written into Redis and then on the next tick, when the task is loaded from Redis and .is_due() is called on the task:

{ "pid": 31792, "message": "Selecting tasks", "python_module": "redbeat.schedulers", "level": "DEBUG", "timestamp": "2016-08-12 09:24:11,855" }
{ "pid": 31792, "message": "Loading 1 tasks", "python_module": "redbeat.schedulers", "level": "INFO", "timestamp": "2016-08-12 09:24:11,856" }
{ "pid": 31792, "message": "Processing tasks", "python_module": "redbeat.schedulers", "level": "DEBUG", "timestamp": "2016-08-12 09:24:11,857" }
{ "pid": 31792, "message": "beat: Releasing Lock", "python_module": "redbeat.schedulers", "level": "DEBUG", "timestamp": "2016-08-12 09:24:11,870" }
{ "pid": 31792, "message": "beat raised exception <type 'exceptions.OverflowError'>: OverflowError('date value out of range',)", "python_module": "celery.beat", "level": "CRITICAL", "timesta
mp": "2016-08-12 09:24:11,872" }
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery-3.1.17-py2.7.egg/celery/apps/beat.py", line 112, in start_scheduler
    beat.start()
  File "/usr/local/lib/python2.7/dist-packages/celery-3.1.17-py2.7.egg/celery/beat.py", line 463, in start
    interval = self.scheduler.tick()
  File "/usr/local/lib/python2.7/dist-packages/redbeat/schedulers.py", line 264, in tick
    return super(RedBeatScheduler, self).tick(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery-3.1.17-py2.7.egg/celery/beat.py", line 221, in tick
    next_time_to_run = self.maybe_due(entry, self.publisher)
  File "/usr/local/lib/python2.7/dist-packages/celery-3.1.17-py2.7.egg/celery/beat.py", line 199, in maybe_due
    is_due, next_time_to_run = entry.is_due()
  File "/usr/local/lib/python2.7/dist-packages/redbeat/schedulers.py", line 175, in is_due
    return super(RedBeatSchedulerEntry, self).is_due()
  File "/usr/local/lib/python2.7/dist-packages/celery-3.1.17-py2.7.egg/celery/beat.py", line 129, in is_due
    return self.schedule.is_due(self.last_run_at)
  File "/usr/local/lib/python2.7/dist-packages/celery-3.1.17-py2.7.egg/celery/schedules.py", line 117, in is_due
    last_run_at = self.maybe_make_aware(last_run_at)
  File "/usr/local/lib/python2.7/dist-packages/celery-3.1.17-py2.7.egg/celery/schedules.py", line 126, in maybe_make_aware
    return maybe_make_aware(dt, self.tz)
  File "/usr/local/lib/python2.7/dist-packages/celery-3.1.17-py2.7.egg/celery/utils/timeutils.py", line 313, in maybe_make_aware
    dt, timezone.utc if tz is None else timezone.tz_or_local(tz),
  File "/usr/local/lib/python2.7/dist-packages/celery-3.1.17-py2.7.egg/celery/utils/timeutils.py", line 289, in localize
    dt = dt.astimezone(tz)
  File "/usr/local/lib/python2.7/dist-packages/pytz/tzinfo.py", line 187, in fromutc
    return (dt + inf[0]).replace(tzinfo=self._tzinfos[inf])
OverflowError: date value out of range

Analysis:

  • Happens only when CELERY_TIMEZONE is set to a timezone with a negative UTC offset (e.g. "America/Chicago" in our case)
  • Task has never run before

When Redbeat loads a task from Redis (RedBeatSchedulerEntry.from_key) it tries to get the last_run_at attribute value from the meta dict in the task's Redis hash. When the task has never run, this is empty, so it sets last_run_at to datetime.min. Then in is_due (down the call chain), last_run_at is passed to celery.utils.timeutils.maybe_make_aware, which tries to convert the timestamp into a timezone-aware datetime object using the scheduler timezone. This leads to the OverflowError, because you can't apply a negative offset to datetime.min.

Proposed solution:

If last_run_at is not set in the task meta info, set it to None, and in RedBeatSchedulerEntry.is_due, if last_run_at is None, pass datetime(MINYEAR, 1, 1, tzinfo=self.schedule.tz) to self.schedule.is_due. In RedBeatSchedulerEntry.due_at, if last_run_at is None, return self._default_now().

I'll prepare a PR for this.
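
A sketch of the proposed is_due change (paraphrasing the description above):

from datetime import MINYEAR, datetime

def is_due(self):
    if not self.enabled:
        return False, 5.0  # 5 second delay for re-enable.
    # A tz-aware minimum avoids the astimezone() overflow on negative offsets.
    last_run_at = self.last_run_at or datetime(MINYEAR, 1, 1,
                                               tzinfo=self.schedule.tz)
    return self.schedule.is_due(last_run_at)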

crontab type job getting picked up at beat startup

Hi,

I am trying to use redbeat for crontab type periodic jobs.

The problem I am facing is that the job runs once when I start celery beat in a separate tab, and then it also runs at the scheduled time.

The second run is as expected, but not the first one. Could anyone help me out with this?

My celery.py is as follows:

CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_BROKER_URL = 'redis://{redis_host}:{redis_port}/{redis_db}'.format(
    redis_host=settings.REDIS_HOST, redis_port=settings.REDIS_PORT, redis_db=settings.REDIS_DB)
CELERY_RESULT_BACKEND = CELERY_BROKER_URL

app = Celery('tasks', broker=CELERY_BROKER_URL, backend=CELERY_RESULT_BACKEND, redbeat_redis_url=CELERY_BROKER_URL)

app.conf.update(
    accept_content=CELERY_ACCEPT_CONTENT,
    task_serializer=CELERY_TASK_SERIALIZER,
    result_serializer=CELERY_RESULT_SERIALIZER)

# Using a string here means the worker don't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings')

# Load task modules from all registered Django app configs.
# This is not required, but as you can have more than one app
# with tasks it’s better to do the autoload than declaring all tasks
# in this same file.
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    from api.tasks import sample_add_task
    sender.add_periodic_task(
        crontab(hour=15, minute=50),
        sample_add_task.s(15, 30),
        name='add everyday'
    )

sample_task.py:

from __future__ import absolute_import
from celery import task

@task
def sample_add_task(number1, number2):
    print('This is a sample add task for numbers: {0} and {1}'.format(number1, number2))
    return(number1 + number2)

Celery worker command: celery --app=app.celery worker --loglevel=INFO
Celery worker logs:

[19/Jan/2018 15:51:25] DEBUG [django.db.backends:89] (0.000) SET SQL_AUTO_IS_NULL = 0; args=None
[19/Jan/2018 15:51:25] DEBUG [django.db.backends:89] (0.000) SHOW FULL TABLES; args=None
[19/Jan/2018 15:51:25] DEBUG [django.db.backends:89] (0.000) SELECT `django_migrations`.`app`, `django_migrations`.`name` FROM `django_migrations`; args=()
[19/Jan/2018 15:51:25] DEBUG [django.db.backends:89] (0.000) SET SQL_AUTO_IS_NULL = 0; args=None
[19/Jan/2018 15:51:25] DEBUG [django.db.backends:89] (0.001) SHOW FULL TABLES; args=None
[19/Jan/2018 15:51:25] DEBUG [django.db.backends:89] (0.000) SELECT `django_migrations`.`app`, `django_migrations`.`name` FROM `django_migrations`; args=()
[2018-01-19 15:51:27,571: INFO/MainProcess] Connected to redis://localhost:6379/4
[2018-01-19 15:51:27,577: INFO/MainProcess] mingle: searching for neighbors
[2018-01-19 15:51:28,589: INFO/MainProcess] mingle: all alone
[2018-01-19 15:51:28,595: INFO/MainProcess] celery@qplum-Precision-T1700 ready.
[2018-01-19 15:51:31,494: INFO/MainProcess] Events of group {task} enabled by remote.


[2018-01-19 15:51:46,508: INFO/MainProcess] Received task: api.tasks.sample_task.sample_add_task[15684a45-5f45-4070-a229-4331555e4863]  
[2018-01-19 15:51:46,510: WARNING/ForkPoolWorker-7] This is a sample add task for numbers: 15 and 30
[2018-01-19 15:51:46,512: INFO/ForkPoolWorker-7] Task api.tasks.sample_task.sample_add_task[15684a45-5f45-4070-a229-4331555e4863] succeeded in 0.0018646069802343845s: 45


[2018-01-19 15:55:00,117: INFO/MainProcess] Received task: api.tasks.sample_task.sample_add_task[299ce531-a877-4925-886f-bb512c7caa2c]  
[2018-01-19 15:55:00,121: WARNING/ForkPoolWorker-1] This is a sample add task for numbers: 15 and 30
[2018-01-19 15:55:00,123: INFO/ForkPoolWorker-1] Task api.tasks.sample_task.sample_add_task[299ce531-a877-4925-886f-bb512c7caa2c] succeeded in 0.002788302954286337s: 45

Celery beat command: celery beat -S redbeat.RedBeatScheduler --app=app.celery
Celery beat logs:

celery beat v4.1.0 (latentcall) is starting.
[19/Jan/2018 15:51:45] DEBUG [django.db.backends:89] (0.000) SET SQL_AUTO_IS_NULL = 0; args=None
[19/Jan/2018 15:51:45] DEBUG [django.db.backends:89] (0.030) SHOW FULL TABLES; args=None
[19/Jan/2018 15:51:45] DEBUG [django.db.backends:89] (0.000) SELECT `django_migrations`.`app`, `django_migrations`.`name` FROM `django_migrations`; args=()
[19/Jan/2018 15:51:45] DEBUG [django.db.backends:89] (0.000) SET SQL_AUTO_IS_NULL = 0; args=None
[19/Jan/2018 15:51:45] DEBUG [django.db.backends:89] (0.000) SHOW FULL TABLES; args=None
[19/Jan/2018 15:51:45] DEBUG [django.db.backends:89] (0.000) SELECT `django_migrations`.`app`, `django_migrations`.`name` FROM `django_migrations`; args=()
__    -    ... __   -        _
LocalTime -> 2018-01-19 15:51:46
Configuration ->
    . broker -> redis://localhost:6379/4
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> redbeat.schedulers.RedBeatScheduler
       . redis -> redis://localhost:6379/4
       . lock -> `redbeat::lock` 25.00 minutes (1500s)
    . logfile -> [stderr]@%WARNING
    . maxinterval -> 5.00 minutes (300s)

Redis data of the job:

After first run -

127.0.0.1:6379[4]> HGETALL "redbeat:add everyday"
1) "meta"
2) "{\"last_run_at\": {\"day\": 19, \"hour\": 15, \"__type__\": \"datetime\", \"year\": 2018, \"month\": 1, \"second\": 46, \"microsecond\": 497316, \"minute\": 51}, \"total_run_count\": 1}"
3) "definition"
4) "{\"enabled\": true, \"args\": [15, 30], \"schedule\": {\"month_of_year\": \"*\", \"hour\": 15, \"__type__\": \"crontab\", \"day_of_week\": \"*\", \"day_of_month\": \"*\", \"minute\": 55}, \"name\": \"add everyday\", \"kwargs\": {}, \"task\": \"api.tasks.sample_task.sample_add_task\", \"options\": {}}"

After second (scheduled) run -

127.0.0.1:6379[4]> HGETALL "redbeat:add everyday"
1) "meta"
2) "{\"last_run_at\": {\"day\": 19, \"hour\": 15, \"__type__\": \"datetime\", \"year\": 2018, \"month\": 1, \"second\": 0, \"microsecond\": 112246, \"minute\": 55}, \"total_run_count\": 2}"
3) "definition"
4) "{\"enabled\": true, \"args\": [15, 30], \"schedule\": {\"month_of_year\": \"*\", \"hour\": 15, \"__type__\": \"crontab\", \"day_of_week\": \"*\", \"day_of_month\": \"*\", \"minute\": 55}, \"name\": \"add everyday\", \"kwargs\": {}, \"task\": \"api.tasks.sample_task.sample_add_task\", \"options\": {}}"

Clearly, the first run is not at the scheduled time at all; it got triggered just after beat startup!

Please note, this issue does not happen when I use standard PersistentScheduler with celery beat.

Lock disappearing?

Hello,

When I run two instances of celerybeat with the redbeat scheduler, at first things work as expected -- one instance acquires the lock, and the other waits. After some random time though, the other instance starts working, even though the original instance thinks it still has the lock.

I've made the following changes to debug this further:
mbarszcz@2347545

This is the output I'm getting on Worker A when things go well:

[2018-02-14 23:58:19,819: DEBUG/MainProcess] Setting default socket timeout to 30
[2018-02-14 23:58:19,819: INFO/MainProcess] beat: Starting...
[2018-02-14 23:58:19,826: DEBUG/MainProcess] Stored entry: <RedBeatSchedulerEntry: content.zip_test.zip_test() content.zip_test.zip_test() <freq: 1.00 minute>
[2018-02-14 23:58:19,827: DEBUG/MainProcess] beat: Ticking with max interval->5.00 seconds
[2018-02-14 23:58:19,827: DEBUG/MainProcess] beat: Acquiring lock...
[2018-02-14 23:58:19,828: DEBUG/MainProcess] beat: Extending lock by 60 seconds...
[2018-02-14 23:58:19,828: DEBUG/MainProcess] Selecting tasks
[2018-02-14 23:58:19,829: INFO/MainProcess] Loading 0 tasks
[2018-02-14 23:58:19,829: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[2018-02-14 23:58:24,832: DEBUG/MainProcess] beat: Synchronizing schedule...
[2018-02-14 23:58:24,832: DEBUG/MainProcess] beat: Extending lock by 60 seconds...
[2018-02-14 23:58:24,833: DEBUG/MainProcess] Selecting tasks
[...]

And output on Worker B looks like this:

[2018-02-15 00:00:20,724: DEBUG/MainProcess] Setting default socket timeout to 30
[2018-02-15 00:00:20,725: INFO/MainProcess] beat: Starting...
[2018-02-15 00:00:20,735: DEBUG/MainProcess] Stored entry: <RedBeatSchedulerEntry: content.zip_test.zip_test() content.zip_test.zip_test() <freq: 1.00 minute>
[2018-02-15 00:00:20,737: DEBUG/MainProcess] beat: Ticking with max interval->5.00 seconds
[2018-02-15 00:00:20,737: DEBUG/MainProcess] beat: Acquiring lock...

After some time though, Worker A reports:

[2018-02-15 00:23:20,183: DEBUG/MainProcess] beat: Extending lock by 60 seconds...
[2018-02-15 00:23:20,184: DEBUG/MainProcess] Selecting tasks
[2018-02-15 00:23:20,185: INFO/MainProcess] Loading 0 tasks
[2018-02-15 00:23:20,185: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[2018-02-15 00:23:25,191: DEBUG/MainProcess] beat: Extending lock by 60 seconds...
[2018-02-15 00:23:25,192: WARNING/MainProcess] The key does not exist

And keeps running. Worker B also starts, because the lock is no longer there:

[2018-02-15 00:00:20,737: DEBUG/MainProcess] beat: Acquiring lock...
[2018-02-15 00:23:27,358: DEBUG/MainProcess] beat: Extending lock by 60 seconds...
[2018-02-15 00:23:27,359: DEBUG/MainProcess] Selecting tasks
[2018-02-15 00:23:27,361: INFO/MainProcess] Loading 0 tasks
[2018-02-15 00:23:27,361: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.

I can see how I could work around it in redbeat (try to acquire lock again after it disappears), but what might be causing the disappearing lock in the first place? Worker A prolonged it on 00:23:20 by 60 seconds, and 5 seconds later it's no longer there.

The underlying redis is 3.2.10 on AWS ElastiCache.

Redbeat scheduler - Execute same task at same time interval but for different inputs

We are creating scheduler entries for 3 different IDs using the code below.

interval = schedule(run_every=60)
RedBeatSchedulerEntry('{0}'.format(id), 'method_name', interval, kwargs=kwargs, app=app).save()

There are 3 different entries now, all with the same interval, to run every 60 seconds.
Below are the observations:

  • Only the first task entry is executed every 60 seconds
  • The other two never get executed
  • No exception or error is seen for the other entries

Expected behavior:
All 3 entries, each scheduled to run the same method every 60 seconds, should be executed.

Documentation Improvement - REDBEAT_LOCK_TIMEOUT

REDBEAT_LOCK_TIMEOUT (which defaults to CELERYBEAT_MAX_LOOP_INTERVAL * 5) needs to be set explicitly, or it ends up at 25 minutes with beat's default loop interval. In production this can cause a hang if the service crashes or otherwise fails to clean up after itself.

Appears to be related to this discussion: #50

It would be easiest if the documentation referenced this as a suggested config to avoid issues.
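
For example, a suggested explicit configuration:

CELERYBEAT_MAX_LOOP_INTERVAL = 5  # redbeat likes fast loops
REDBEAT_LOCK_TIMEOUT = CELERYBEAT_MAX_LOOP_INTERVAL * 5  # 25s instead of the 25-minute default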

When starting, takes a long time for initial beat start. Hangs at "Acquiring lock..."

I'm using redbeat with AWS ElastiCache Redis.

When I deploy a new version, the beat service starts, prints the Configuration, shows the stored entries (when running with loglevel DEBUG), and then hangs at beat: Acquiring lock... for a long time, about 10-30 minutes. Once it proceeds after that delay, the log says beat: Extending lock... and then it immediately moves on to Selecting tasks and proceeds.

I only need redbeat to have multiple celery-beat instances not trigger periodic jobs multiple times.

Wondering why this takes so long (even when there is only one beat instance starting), and what I could try.

Edit: I'm only seeing this issue after redeployments to our AWS infrastructure with ElastiCache; I cannot reproduce it in a local dev environment where Redis is cleared.

an error when starting beat

With Celery 4, an attempt to start the beat process fails:

  File "/Users/chapkovski/mynewotree/lib/python3.5/site-packages/redbeat/__init__.py", line 3, in <module>
    from .schedulers import RedBeatScheduler, RedBeatSchedulerEntry  # noqa
  File "/Users/chapkovski/mynewotree/lib/python3.5/site-packages/redbeat/schedulers.py", line 19, in <module>
    from celery.utils.timeutils import humanize_seconds
ImportError: No module named 'celery.utils.timeutils'

timezone crontab task error

When I set the timezone to a non-UTC value, loading the task fails. The reason is that MINYEAR is too far from now, which causes the remaining-time calculation to overflow.

def is_due(self):
    if not self.enabled:
        return False, 5.0  # 5 second delay for re-enable.

    return self.schedule.is_due(self.last_run_at or
                                datetime(MINYEAR, 1, 1, tzinfo=self.schedule.tz))

def remaining(start, ends_in, now=None, relative=False):
    """Calculate the remaining time for a start date and a timedelta.

    For example, "how many seconds left for 30 seconds after start?"

    Arguments:
        start (~datetime.datetime): Starting date.
        ends_in (~datetime.timedelta): The end delta.
        relative (bool): If enabled the end time will be calculated
            using :func:`delta_resolution` (i.e., rounded to the
            resolution of `ends_in`).
        now (Callable): Function returning the current time and date.
            Defaults to :func:`datetime.utcnow`.

    Returns:
        ~datetime.timedelta: Remaining time.
    """
    now = now or datetime.utcnow()
    end_date = start + ends_in
    if relative:
        end_date = delta_resolution(end_date, ends_in)
    ret = end_date - now
    if C_REMDEBUG:  # pragma: no cover
        print('rem: NOW:%r START:%r ENDS_IN:%r END_DATE:%s REM:%s' % (
            now, start, ends_in, end_date, ret))
    return ret

There is something wrong: it cannot load REDBEAT_REDIS_URL correctly

celery beat v3.1.21 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> amqp://guest:**@localhost:5672//
. loader -> celery.loaders.default.Loader
. scheduler -> redbeat.schedulers.RedBeatScheduler
. redis -> None
. lock -> redbeat::lock now (0s)
. logfile -> [stderr]@%INFO
. maxinterval -> now (0s)
[2016-11-28 13:50:38,391: INFO/MainProcess] beat: Starting...
[2016-11-28 13:50:38,392: CRITICAL/MainProcess] beat raised exception <type 'exceptions.AttributeError'>: AttributeError("'NoneType' object has no attribute 'find'",)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/apps/beat.py", line 112, in start_scheduler
beat.start()
File "/usr/local/lib/python2.7/dist-packages/celery/beat.py", line 470, in start
humanize_seconds(self.scheduler.max_interval))
File "/usr/local/lib/python2.7/dist-packages/kombu/utils/__init__.py", line 325, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/usr/local/lib/python2.7/dist-packages/celery/beat.py", line 512, in scheduler
return self.get_scheduler()
File "/usr/local/lib/python2.7/dist-packages/celery/beat.py", line 507, in get_scheduler
lazy=lazy)
File "/usr/local/lib/python2.7/dist-packages/celery/utils/imports.py", line 53, in instantiate
return symbol_by_name(name)(*args, **kwargs)
File "build/bdist.linux-x86_64/egg/redbeat/schedulers.py", line 190, in __init__
super(RedBeatScheduler, self).__init__(app, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/beat.py", line 185, in __init__
self.setup_schedule()
File "build/bdist.linux-x86_64/egg/redbeat/schedulers.py", line 194, in setup_schedule
client = redis(self.app)
File "build/bdist.linux-x86_64/egg/redbeat/schedulers.py", line 45, in redis
decode_responses=True)
File "/usr/local/lib/python2.7/dist-packages/redis/client.py", line 391, in from_url
connection_pool = ConnectionPool.from_url(url, db=db, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 774, in from_url
url = urlparse(url)
File "/usr/lib/python2.7/urlparse.py", line 143, in urlparse
tuple = urlsplit(url, scheme, allow_fragments)
File "/usr/lib/python2.7/urlparse.py", line 182, in urlsplit
i = url.find(':')
AttributeError: 'NoneType' object has no attribute 'find'
The config was attached as a screenshot (not recoverable here).
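
For reference, a sketch of the setting the scheduler expects, given the "redis -> None" line above (the URL is a placeholder):

REDBEAT_REDIS_URL = 'redis://localhost:6379/1'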

Continuous testing?

I was wondering if you'd be interested in my contributing a Travis CI setup so the test suite can be run automatically when PRs come in?
