sidekiq-unique-jobs adds unique constraints to Sidekiq jobs. Uniqueness is achieved by creating a set of keys in Redis based on the queue, class, and args entries in the Sidekiq job hash.
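To illustrate the idea, here is a minimal sketch of deriving a uniqueness key from those three fields of the job hash. The `unique_digest` helper and the key format are hypothetical, written only to show the concept; the gem's real key derivation differs in detail.

```ruby
require "digest"
require "json"

# Hypothetical helper: derive a uniqueness key from the parts of the
# Sidekiq job hash that determine uniqueness (queue, class, args).
# The actual gem computes its keys differently; this is illustrative only.
def unique_digest(job_hash)
  material = job_hash.slice("class", "queue", "args")
  "uniquejobs:#{Digest::MD5.hexdigest(JSON.generate(material))}"
end

job = { "class" => "UntilExecutedWorker", "queue" => "default", "args" => [1] }
unique_digest(job)
```

Because identical class/queue/args always map to the same key, a second push of the same job can detect that a lock for that key already exists in Redis.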
Installation
You can install it as a gem:
$ gem install sidekiq-unique-jobs
or add it into a Gemfile (Bundler):
# Gemfile
# https://rubygems.org/gems/sidekiq-unique-jobs
gem 'sidekiq-unique-jobs', '7.1.2'
Sidekiq.configure_server do |config|
  config.redis = { url: ENV["REDIS_URL"], driver: :hiredis }

  config.client_middleware do |chain|
    chain.add SidekiqUniqueJobs::Middleware::Client
  end

  config.server_middleware do |chain|
    chain.add SidekiqUniqueJobs::Middleware::Server
  end

  SidekiqUniqueJobs::Server.configure(config)
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV["REDIS_URL"], driver: :hiredis }

  config.client_middleware do |chain|
    chain.add SidekiqUniqueJobs::Middleware::Client
  end
end
Usage
Your first worker
The lock you are most likely to use is :until_executed. This type of lock is held from when UntilExecutedWorker.perform_async is called until right after UntilExecutedWorker.new.perform has finished.
class UntilExecutedWorker
  include Sidekiq::Worker

  sidekiq_options lock: :until_executed

  def perform
    logger.info("cowboy")
    sleep(1) # hardcore processing
    logger.info("beebop")
  end
end
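Assuming a running Redis and the middleware configured as above, pushing the same job twice while the lock is held should enqueue it only once; my understanding is that the client middleware rejects the duplicate push (in v7 it returns nil), though exact behavior may vary by version and configuration. A non-runnable sketch:

```ruby
UntilExecutedWorker.perform_async # => job id; lock acquired
UntilExecutedWorker.perform_async # duplicate while locked; rejected by the client middleware
```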
NOTE Unless a conflict strategy of :raise is specified, a job that fails to acquire the lock is dropped without notice. When told to raise, the job is put back on the queue and retried. The :reschedule strategy can also be used with this lock.
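A conflict strategy is selected per worker via sidekiq_options. This sketch assumes the on_conflict option name described in the gem's documentation; check the README for the version you install.

```ruby
class UntilExecutedWorker
  include Sidekiq::Worker

  # :raise re-raises on a lock conflict so Sidekiq's retry mechanism
  # puts the job back instead of silently dropping it.
  # (:reschedule is another strategy usable with this lock.)
  sidekiq_options lock: :until_executed, on_conflict: :raise
end
```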
NOTE Unless the job is configured with lock_timeout: nil or a lock_timeout greater than 0, any job that fails to acquire the lock is dropped immediately rather than waiting.
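The timeout is also set through sidekiq_options. A sketch under the semantics stated in the note above (nil waits indefinitely, a positive number waits that many seconds, 0 drops immediately); treat the exact defaults as version-dependent:

```ruby
class UntilExecutedWorker
  include Sidekiq::Worker

  # lock_timeout: nil -- wait for the lock instead of dropping the job.
  # A positive integer would wait that many seconds before giving up.
  sidekiq_options lock: :until_executed, lock_timeout: nil
end
```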
There is an example of this in the myapp application. Run foreman start in the root of that directory and open localhost:5000/work/duplicate_while_executing to try it out.