
resque-loner's Introduction

Resque-Loner


Resque-Loner is a plugin for defunkt/resque that adds unique jobs to Resque: only one job with the same payload per queue.

Installation

First install the gem:

$ gem install resque-loner 

Then include it in your app:

require 'resque-loner'

Tests

To make sure this plugin works on your installation, you should run the tests. resque-loner is tested with RSpec, but it also includes resque's original test suite. You can run all tests specific to resque-loner with rake spec.

To make sure the plugin did not break resque, you can run rake test (the standard resque test suite). This runs all tests from the 1.22.0 version of resque, so make sure you have that version of resque installed when you run the resque tests.

Example

Unique jobs are useful in situations where running the same job multiple times produces the same result. Let's say you have a job called CacheSweeper that refreshes some cache. A user has edited some_article, so you put a job on the queue to refresh the cache for that article.

>> Resque.enqueue CacheSweeper, some_article.id
=> "OK"

Your queue is really full, so the job does not get executed right away. But the user editing the article has noticed another error, and updates the article again, and your app kindly queues another job to update that article's cache.

>> Resque.enqueue CacheSweeper, some_article.id
=> "OK"

At this point you have two jobs in the queue, the second of which has no effect: you don't need to run it once the cache has been updated for the first time. This is where resque-loner's UniqueJob comes in. If you define CacheSweeper like this:

class CacheSweeper
  include Resque::Plugins::UniqueJob
  @queue = :cache_sweeps

  def self.perform(article_id)
    # Cache Me If You Can...
  end
end

Just like that, you've ensured that on the :cache_sweeps queue there can only be one CacheSweeper job per article. Let's see what happens when you try to enqueue a couple of these jobs now:

>> Resque.enqueue CacheSweeper, 1
=> "OK"
>> Resque.enqueue CacheSweeper, 1
=> "EXISTED"
>> Resque.enqueue CacheSweeper, 1
=> "EXISTED"
>> Resque.size :cache_sweeps
=> 1

Since resque-loner keeps track of queued jobs in a way that allows them to be found very quickly, you can also query whether a job is currently in a queue:

>> Resque.enqueue CacheSweeper, 1
=> "OK"
>> Resque.enqueued? CacheSweeper, 1
=> true
>> Resque.enqueued? CacheSweeper, 2
=> false
>> Resque.enqueued_in? :another_queue, CacheSweeper, 1
=> false

How it works

Keeping track of queued unique jobs

For each created UniqueJob, resque-loner sets a redis key to 1. This key remains set until the job has either been fetched from the queue or destroyed through the Resque::Job.destroy method. As long as the key is set, the job is considered queued and subsequent enqueue attempts are rejected.
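In spirit, the bookkeeping looks something like this (a minimal sketch under assumed helper names, not the plugin's actual code; `redis` stands for any object with redis-rb style #set and #get methods):

```ruby
# Hypothetical sketch of the bookkeeping: mark a job as queued by
# setting its loner key to 1, and treat a set key as "already queued".
def mark_queued(redis, queue, job_id)
  redis.set("loners:queue:#{queue}:job:#{job_id}", 1)
end

def loner_queued?(redis, queue, job_id)
  !redis.get("loners:queue:#{queue}:job:#{job_id}").nil?
end
```

The real plugin additionally deletes the key when the job is fetched or destroyed, which is what "remains set until" above refers to.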

Here's how these keys are constructed:

resque:loners:queue:cache_sweeps:job:5ac5a005253450606aa9bc3b3d52ea5b
|          |        |                |
|          |        |                `---- Job's ID (#redis_key method)
|          |        `--------------------- Name of the queue
|          `------------------------------ Prefix for this plugin
`----------------------------------------- Your redis namespace

The last part of this key is the job's ID, which is pretty much your queue item's payload. For our CacheSweeper job, the payload would be:

{ "class": "CacheSweeper", "args": [1] }

The default way to create a job ID from these parameters is to normalize the payload and then hash it with MD5 (defined in Resque::Plugins::UniqueJob#redis_key).
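A self-contained sketch of that default scheme (the method name default_redis_key is hypothetical; the real logic lives in Resque::Plugins::UniqueJob#redis_key):

```ruby
require 'json'
require 'digest/md5'

# Sketch of the default scheme: encode the payload the same way it is
# stored in the queue, then take the MD5 hex digest of that string.
def default_redis_key(klass, args)
  payload = { 'class' => klass.to_s, 'args' => args }
  Digest::MD5.hexdigest(JSON.generate(payload))
end
```

Identical class/args pairs produce identical keys, and the hex digest trivially satisfies the "no spaces or newlines" constraint below.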

You could also use the whole payload or anything else as a redis key, as long as you make sure these requirements are met:

  1. Two jobs of the same class with the same parameters/arguments/workload must produce the same redis_key
  2. Two jobs with either a different class or different parameters/arguments/workloads must not produce the same redis_key
  3. The key must not be binary, because this restriction applies to redis keys: Keys are not binary safe strings in Redis, but just strings not containing a space or a newline character. For instance "foo" or "123456789" or "foo_bar" are valid keys, while "hello world" or "hello\n" are not. (see http://code.google.com/p/redis/wiki/IntroductionToRedisDataTypes)

So when your job overrides the #redis_key method, make sure these requirements are met, and all should be good.
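For illustration, a hypothetical override that meets all three requirements by building a human-readable key from the article id (the key format here is an example, not something the plugin prescribes):

```ruby
class CacheSweeper
  @queue = :cache_sweeps

  # Hypothetical override of the default MD5-based key. It is
  # deterministic for identical args, distinct for different args,
  # and contains no spaces or newlines.
  def self.redis_key(payload)
    args = payload[:args] || payload['args']
    "cache_sweeper:article:#{args.first}"
  end

  def self.perform(article_id)
    # refresh the cache for article_id
  end
end
```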

Resque integration

Unfortunately not everything could be done as a plugin, so I overrode three methods of Resque::Job: create, reserve, and destroy (I found no hooks for these events). All the logic is in the module Resque::Plugins::Loner, though, so it should be fairly easy to turn this into a pure plugin once such hooks exist.
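The wrapping pattern, in rough outline (a minimal sketch with a dummy Job class, not the actual resque-loner source):

```ruby
# Dummy stand-in for Resque::Job, to illustrate the alias-chain
# pattern used when a library method offers no plugin hook.
class Job
  def self.create(queue, klass, *args)
    "created #{klass} on #{queue}"
  end
end

class Job
  class << self
    alias_method :create_without_loner, :create

    # The wrapper is where the unique-job key would be consulted
    # before delegating to the original implementation.
    def create_with_loner(queue, klass, *args)
      # e.g. return "EXISTED" if the job's loner key is already set
      create_without_loner(queue, klass, *args)
    end

    alias_method :create, :create_with_loner
  end
end
```

After the second `alias_method`, every caller of Job.create goes through the wrapper, which is why the plugin's checks run even for jobs that never include UniqueJob.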

resque-loner's People

Contributors

aerodynamik, arthurdandrea, bogdan, hakanensari, jayniz, makrsmark, mateusdelbianco, meatballhat, mishina2228, ryansch, unclebilly, zmoazeni


resque-loner's Issues

resque-loner and rspec tests

I'm noticing that the unique keys used to identify jobs are not being removed when running rspec in my app for things that are using resque. I've turned on inline jobs for rspec, which is not supposed to queue them up. Does resque-loner work with inline jobs?

.gitignore: ripgrep: error parsing glob

Ripgrep logs the following:

vendor/ruby/2.5.0/gems/resque-loner-1.3.0/.gitignore: line 2: error parsing glob '**.DS_Store': invalid use of **; must be one path component
vendor/ruby/2.5.0/gems/resque-loner-1.3.0/.gitignore: line 3: error parsing glob '**.swp': invalid use of **; must be one path component

Specify job params to qualify 'uniqueness'

I have not yet looked into how it might be done, but I would like to be able to either specify job parameters that I want qualified as a unique job, or specify parameters to not compare when checking for a unique job.

The main reason I would like this is I would like to add a requested timestamp to my jobs but doing so makes every job unique.

I plan to look into doing this sooner or later but wanted to make a ticket for it in case anyone had suggestions for alternative approaches etc..

Can't get v1.2.0

Hi,

I saw in the changelog that there was a 1.1.0 and even a 1.2.0, but I can't manage to install them. It seems that these versions are not released (https://rubygems.org/gems/resque-loner), or am I totally mistaken?

If they are indeed not released, do you have an ETA? It seems that resque-loner is causing some issues with my app, possibly linked with the fix in 1.1.0 (problem with enqueue_to)

Thanks!

Where are the locks put in redis?

I'm looking for the redis key-value pairs that act as the locks. I'm using this piece of code to find them: keys = Resque.redis.keys("loners:queue:MY_QUEUE:job:*"), but I never see any locks. Is this correct?

ETIMEDOUT on job without Loner include

I'm getting a frequent redis timeout using resque against my own redis instance. This only happens on a couple of jobs, neither of which has the explicit Loner include, but because of the way Loner patches Job, it runs the .multi for every Job create. The job itself waits quite a while for a big initial query to run, so I assume that's where the timeout is coming from. I believe it has something to do with the explicit Resque.redis.multi not handling a reconnect.

Any good ideas on how to work around this are welcome.

Errno::ETIMEDOUT
Error: Connection timed out
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/connection/ruby.rb:232:in `write'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/connection/ruby.rb:232:in `write'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:217:in `block in write'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:202:in `io'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:216:in `write'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:180:in `block (3 levels) in process'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:174:in `each'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:174:in `block (2 levels) in process'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:287:in `ensure_connected'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:173:in `block in process'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:248:in `logging'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:172:in `process'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:147:in `call_pipelined'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:121:in `block in call_pipeline'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:235:in `with_reconnect'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis/client.rb:119:in `call_pipeline'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis.rb:2020:in `block in multi'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis.rb:36:in `block in synchronize'
/usr/local/rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis.rb:36:in `synchronize'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-3.0.2/lib/redis.rb:2012:in `multi'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/redis-namespace-1.2.1/lib/redis/namespace.rb:220:in `method_missing'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/resque-loner-1.2.1/lib/resque-ext/job.rb:19:in `create_with_loner'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/resque-1.23.0/lib/resque.rb:240:in `enqueue_to'
/mnt/app-production/shared/bundle/ruby/1.9.1/gems/resque-1.23.0/lib/resque.rb:221:in `enqueue'
[...snip...]
/mnt/app-production/[redacted]worker.rb:11:in `perform'

Compatible with resque 1.24.1?

The gem doesn't seem to work with resque 1.24.1, is it known to? I'm having trouble getting it work with that version and I downgraded and tried 1.23, I followed the documentation.. not sure what I'm doing wrong. Are there compatibility issues with other plugins that are known about?

Does not work with resque-cleaner

Sometimes Resque.redis.multi seems to return a <Redis::Future [:multi]> instead of an array, in which case Resque.redis.multi.first raises this error.

It seems to have been introduced here:

4ae90ae

#<NoMethodError: undefined method `first' for nil:NilClass>

/usr/local/rvm/gems/ruby-2.1.1/gems/resque-loner-1.2.1/lib/resque-ext/job.rb in create_with_loner
      end.first
/usr/local/rvm/gems/ruby-2.1.1/gems/resque-cleaner-0.3.0/lib/resque_cleaner.rb in block (3 levels) in requeue
                Job.create(queue||job['queue'], job['payload']['class'], *job['payload']['args'])
/usr/local/rvm/gems/ruby-2.1.1/gems/redis-namespace-1.4.1/lib/redis/namespace.rb in block in namespaced_block
        yield self
/usr/local/rvm/gems/ruby-2.1.1/gems/redis-3.0.7/lib/redis.rb in block in multi
          yield(self)
/usr/local/rvm/gems/ruby-2.1.1/gems/redis-3.0.7/lib/redis.rb in block in synchronize
    mon_synchronize { yield(@client) }
/usr/local/rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/monitor.rb in mon_synchronize
      yield
/usr/local/rvm/gems/ruby-2.1.1/gems/redis-3.0.7/lib/redis.rb in synchronize
    mon_synchronize { yield(@client) }
/usr/local/rvm/gems/ruby-2.1.1/gems/redis-3.0.7/lib/redis.rb in multi
    synchronize do |client|
/usr/local/rvm/gems/ruby-2.1.1/gems/redis-namespace-1.4.1/lib/redis/namespace.rb in namespaced_block
      result = redis.send(command) do |r|
/usr/local/rvm/gems/ruby-2.1.1/gems/redis-namespace-1.4.1/lib/redis/namespace.rb in multi
        namespaced_block(:multi, &block)
/usr/local/rvm/gems/ruby-2.1.1/gems/resque-cleaner-0.3.0/lib/resque_cleaner.rb in block (2 levels) in requeue
              redis.multi do
/usr/local/rvm/gems/ruby-2.1.1/gems/resque-cleaner-0.3.0/lib/resque_cleaner.rb in each
          @limiter.jobs.each_with_index do |job,i|
/usr/local/rvm/gems/ruby-2.1.1/gems/resque-cleaner-0.3.0/lib/resque_cleaner.rb in each_with_index
          @limiter.jobs.each_with_index do |job,i|
/usr/local/rvm/gems/ruby-2.1.1/gems/resque-cleaner-0.3.0/lib/resque_cleaner.rb in block in requeue
          @limiter.jobs.each_with_index do |job,i|
/usr/local/rvm/gems/ruby-2.1.1/gems/resque-cleaner-0.3.0/lib/resque_cleaner.rb in lock
          yield
/usr/local/rvm/gems/ruby-2.1.1/gems/resque-cleaner-0.3.0/lib/resque_cleaner.rb in requeue
        @limiter.lock do
/usr/local/rvm/gems/ruby-2.1.1/gems/resque-cleaner-0.3.0/lib/resque_cleaner/server.rb in block (2 levels) in included
            when "retry_and_clear" then cleaner.requeue(true,&block)

Enqueue fails / Conflict with redis_namespace 1.3.1

With resque-loner 1.2.1 and redis-namespace 1.3.1 I get an error when enqueueing jobs. On every job, every class, no matter if resque-loner is actually being used or not.

I created a minimized example to test this at: https://github.com/ticktricktrack/resque_loner_namespace_conflict

With redis-namespace 1.3.0 things work fine.

Resque.enqueue(Dummy)
NoMethodError: undefined method `is_a?' for #<Redis::Future:0x007fb846086668>
  from ruby_path/gems/redis-namespace-1.3.1/lib/redis/namespace.rb:320:in `method_missing'
  from ruby_path/gems/resque-1.23.0/lib/resque.rb:196:in `watch_queue'
  from ruby_path/gems/resque-1.23.0/lib/resque.rb:141:in `push'
  from ruby_path/gems/resque-1.23.0/lib/resque/job.rb:51:in `create'
  from ruby_path/gems/resque-loner-1.2.1/lib/resque-ext/job.rb:20:in `block in create_with_loner'
  from ruby_path/gems/redis-namespace-1.3.1/lib/redis/namespace.rb:339:in `block in namespaced_block'
  from ruby_path/gems/redis-3.0.2/lib/redis.rb:2019:in `block in multi'
  from ruby_path/gems/redis-3.0.2/lib/redis.rb:36:in `block in synchronize'
  from ruby_pathruby/1.9.1/monitor.rb:211:in `mon_synchronize'
  from ruby_path/gems/redis-3.0.2/lib/redis.rb:36:in `synchronize'
  from ruby_path/gems/redis-3.0.2/lib/redis.rb:2012:in `multi'
  from ruby_path/gems/redis-namespace-1.3.1/lib/redis/namespace.rb:337:in `namespaced_block'
  from ruby_path/gems/redis-namespace-1.3.1/lib/redis/namespace.rb:224:in `multi'
  from ruby_path/gems/resque-loner-1.2.1/lib/resque-ext/job.rb:19:in `create_with_loner'
  from ruby_path/gems/resque-1.23.0/lib/resque.rb:240:in `enqueue_to'
  from ruby_path/gems/resque-1.23.0/lib/resque.rb:221:in `enqueue'
  from (irb):1
  from ruby_path/gems/railties-3.2.12/lib/rails/commands/console.rb:47:in `start'
  from ruby_path/gems/railties-3.2.12/lib/rails/commands/console.rb:8:in `start'
  from ruby_path/gems/railties-3.2.12/lib/rails/commands.rb:41:in `<top (required)>'
  from script/rails:6:in `require'
  from script/rails:6:in `<main>'

My Gemfile

gem 'redis', '3.0.2'
gem 'resque', '1.23.0', :require => 'resque/server'

# gem 'redis-namespace', '1.3.0'
gem 'redis-namespace', '1.3.1'

gem 'resque-loner', '1.2.1'

redis_key does not return unique key

I am relying on the default redis_key method in version 1.2.1 to produce the same key when the args are the same. In my case, the args are simply a single numeric id, such as 347. Poking into the code, I realized that redis_key returns "Digest::MD5.hexdigest encode(:class => job, :args => args)". The problem with this is that the encode method returns a string where sometimes the :args parameter comes first and sometimes the :class parameter comes first:

(1) {"args":[347],"class":"AsyncCallback"}
(2) {"class":"AsyncCallback","args":[347]}

I can only reproduce it on a staging server (but not on development). Furthermore, within the same rails console I will always get the same version back. But when I restart the console then usually I can see the other version being returned. It seems to alternate but even if not, I can usually get the other version within a few console restarts.

I am using ruby 1.8.7 and Rails 3.0.19. I can overwrite the redis_key method in my class but I wonder if this is a generic issue that should be fixed in the plugin itself. Thanks.

How is this different from resque-lock?

Hey there,

I'm looking for a resque plugin that does not reject subsequent jobs but ensures that only one worker is working on a given "unique" job at any given time. I haven't found what I'm looking for but I'm looking at this plugin and https://github.com/defunkt/resque-lock as options for forking and modifying to suit my needs. Is this plugin much different from resque-lock?

Thanks,
Jonathan

NameError (uninitialized constant Resque::Plugins::UniqueJob)

I have installed the gem, required from initializers and restarted server. But I get this error.

Gemfile
gem "resque-loner"

config/initializers/resque.rb
require 'resque-loner'

RefreshItemsJob

class RefreshItemsJob < ApplicationJob
  include Resque::Plugins::UniqueJob

undefined method `inline?' for #<Module:0x00000102114998> in Rails 3.1 w/ Ruby 1.9.3

I'm in the process of upgrading a Rails 2.3.14 app to Rails 3.1. The Rails 2 app successfully used Resque Loner, but whenever the upgraded code attempts to enqueue something, the following error is returned:

undefined method `inline?' for #<Module:0x00000102114998>

This is the resque portion of the trace:

resque-loner (1.2.0) lib/resque-ext/job.rb:15:in `create_with_loner'
resque (1.8.2) lib/resque.rb:191:in `enqueue'

Any suggestions on where to look for a fix?

resque loner thinks job is queued when queue is empty

I'm using

gem 'resque-retry', git: 'https://github.com/lantins/resque-retry.git', ref: '9314b34d543bc87668fd6107148fd1c8dd7d61a2'
gem 'resque-pool'
gem 'resque-multi-job-forks'

If I check for a job that was queued and has since completed, it says that it's still enqueued.

halp!

Not compatible with resque-status

Resque-loner does not work when also using the resque-status plugin, since the latter adds a job_id to the payload; that confuses resque-loner into thinking that the job does not already exist, and thus allows the duplicate to be enqueued. I am not sure there's much to do about this, but I'm opening this ticket as a reference for others.

For now, I’m trying to work around the issue by monkey patching the redis_key function to remove the job_id from the payload before calculating the md5 hash.

Here’s the code:

# Monkey patch resque-loner so that it plays ball with resque-status.
module Resque
  module Plugins
    module UniqueJob
      module ClassMethods
        def redis_key(payload)
          payload = decode(encode(payload)) # This is the round trip the data makes when being enqueued/dequeued
          job  = payload[:class] || payload['class']
          args = (payload[:args]  || payload['args'])

          # This is a quick hack so that this library works with resque-status.
          # Resque-status adds a job_id as the first element of args, and this
          # messes up how resque-loner determines if a job is already queued or
          # not. By removing the job_id, everything should be A-OK. Only workers
          # that include resque-loner should be affected. Small caveat, when calling
          # Resque.enqueued? that comes with resque-loner, there’s no job_id to
          # remove so we must do nothing in that case.
          if respond_to?(:create)
            args.shift unless caller.index { |e| e =~ /enqueued\?/ }
          end

          args.map! do |arg|
            arg.is_a?(Hash) ? arg.sort : arg
          end

          digest = Digest::MD5.hexdigest(encode(class: job, args: args))
          digest
        end
      end
    end
  end
end

Thoughts on unique working jobs?

Great gem, does exactly what we need, but we discovered that for our application we also need to ensure a job remains unique while it is being processed. I added a small helper in our application that avoids enqueueing jobs whose class/payload exists in any of the workers in Resque::Worker.working, mainly because I wasn't quite sure where it would go in relation to this gem (if at all). Anyone have any thoughts on this? Would it make sense to include it in this gem? Thanks.
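A minimal sketch of such a helper, assuming the caller collects payload hashes from the currently-running workers (the name and signature are illustrative only, not part of the gem):

```ruby
# Hypothetical helper: given the payload hashes of currently-running
# jobs (e.g. collected from Resque::Worker.working), decide whether an
# identical class/args combination is already being worked on.
def already_working?(working_payloads, klass, *args)
  target = { 'class' => klass.to_s, 'args' => args }
  working_payloads.any? { |payload| payload == target }
end
```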

Sometimes Loner "locks" a given job when it completed correctly

Hi!

First of all, thanks for the great gem. I've been using it extensively in a large-scale application, but recently I'm seeing an issue.

It seems that sometimes, even though the job is performed correctly, loner will not remove the lock.
It appears sporadically and I don't see a pattern to it.

The thing is that I use resque to process hundreds of jobs per second, so I was wondering if you see a point in loner where this could happen, for instance if there are too many accesses at once.

I kind of "fixed" the problem by adding a TTL, but some queues can't afford to be locked for more than a couple of minutes. I decided to remove loner from these kinds of queues, but I'd rather keep it for safety and get loner to work perfectly.

I'm using a custom redis_key as well, not sure if it could be related.

If you have any ideas regarding this or if you saw the same behavior somewhere it would interest me a lot.

Thanks

Creating duplicate jobs

I believe I've configured everything correctly, putting resque-loner in the Gemfile and subclassing from the UniqueJob class.

But in testing, I'm enqueueing on every page load, and it's spawning workers on each page load instead of preventing duplicates.

I'm verifying through redis and through resque-web.

How to set all queues to unique job by default

Hi,

Thanks for your contribution, it's a pretty helpful gem. However, I currently have a dozen queues, and it's quite tedious to add include Resque::Plugins::UniqueJob to each of them. So I'd like to request a feature that makes all jobs unique by default. Feel free to mark as done or remove this issue if you think this feature is not necessary.

Thanks,

Resque 2.x Support (V2)

Following up on #55, I found that the gemspec already supports Resque >= 1.27; however, the version on RubyGems is locked to ~> 1.0, so I couldn't use the latest version of Resque because of that. Is that done on purpose?

Is there any possibility of building and pushing another version using the current gemspec file so it is reflected on RubyGems?

Error when removing queue

When I remove a queue without any scheduled jobs, I get the following error:

RuntimeError: ERR wrong number of arguments for 'del' command
    redis (2.2.0) lib/redis/client.rb:39:in `call'
    redis (2.2.0) lib/redis.rb:674:in `block in del'
    /usr/local/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize'
    redis (2.2.0) lib/redis.rb:673:in `del'
    /usr/local/lib/ruby/gems/1.9.1/gems/redis-namespace-1.0.3/lib/redis/namespace.rb:213:in `method_missing'
    /usr/local/lib/ruby/gems/1.9.1/gems/resque-loner-0.1.3/lib/resque-loner/helpers.rb:54:in `cleanup_loners'
    /usr/local/lib/ruby/gems/1.9.1/gems/resque-loner-0.1.3/lib/resque-ext/resque.rb:26:in `remove_queue_with_loner_cleanup'

This is with resque-loner 0.1.3, Resque 1.16.1 and Redis gem 2.2.0.

The following monkey-patch solves the issue for me:

module Resque
  module Plugins
    module Loner
      class Helpers
        def self.cleanup_loners(queue)
          keys = redis.keys("loners:queue:#{queue}:job:*")
          redis.del(*keys) unless keys.empty?
        end
      end
    end
  end
end

Loner doesn't delete key in redis with some conditions

Hello,

we're using resque-loner and first of all Thank you for such a great product!

We're currently investigating, but it looks like under some specific conditions loner doesn't remove the key after a successful run; so far the only difference we've found is between Redis versions 2.4.2 and 2.4.4.

UniqueJob to be a module

I don't understand why UniqueJob is a class and not a module.
A class is not as flexible as a module, because a class can only have one superclass, while multiple modules can be included.

We need to use resque-loner in combination with resque-status.

Resque-status uses a class as well, so the two cannot be combined.

help testing against edge resque

Hey there!

I'm gearing up to work on Resque 2.0, and I'd like to coordinate better with plugin authors to make sure stuff doesn't break.

I'd like to know a few things:

  1. Can I do something to help get you testing against edge Resque?
  2. Are you monkey-patching anything in Resque currently? Can 2.0 expose an API to help you not have to do that any more?
  3. Do you need any help in bringing your plugin up-to-date with the latest Resque?

Thanks!

Related: https://github.com/defunkt/resque/issues/880

Does this work with enqueue_in ?

It seems like it might not, since jobs are getting created even though identical jobs are enqueued to run, say, 15 minutes from now.

Thanks Josh

Resque 2.x Support

Summary

I noticed there is a three-year-old pull request [1] sitting around trying to get this lovely little gem bumped up to the next major version of Resque. I was wondering whether this gem is still being maintained, and if so, whether there is any plan to get some traction on supporting the latest version of Resque?

Footnotes

  1. https://github.com/resque/resque-loner/pull/53

old jobs not re-initializing when not first initialized with resque loner

Hi,

I have an issue with resque/resque-loner right now. I have a button to generate zips for around 400 files. When a site admin changes data about a photo, the event zip needs to be re-initialized.

For some reason, some jobs are being re-initialized, but when using resque-loner, some jobs don't get re-added to the queue. I'm wondering if perhaps I need to do something to my old jobs (old as in before I started using resque-loner) to get them to re-initialize. Also, are there instances where resque does not show a pending job but resque-loner has one as pending, and hence the job cannot be re-enqueued?

Resque-web "Remove queue" feature workaround ?

Hi,

your plugin is really great... One problem with resque-web is that you must not use the "Remove queue" button, because it removes all the jobs from the queue but leaves the loner keys behind.

What is your workaround for this? Monkey-patching, or extending resque-web to handle the loner keys?
