
redislock's Introduction

redislock


Simplified distributed locking implementation using Redis. For more information, please see examples.

Examples

package main

import (
  "context"
  "fmt"
  "log"
  "time"

  "github.com/bsm/redislock"
  "github.com/redis/go-redis/v9"
)

func main() {
	// Connect to redis.
	client := redis.NewClient(&redis.Options{
		Network:	"tcp",
		Addr:		"127.0.0.1:6379",
	})
	defer client.Close()

	// Create a new lock client.
	locker := redislock.New(client)

	ctx := context.Background()

	// Try to obtain lock.
	lock, err := locker.Obtain(ctx, "my-key", 100*time.Millisecond, nil)
	if err == redislock.ErrNotObtained {
		fmt.Println("Could not obtain lock!")
		return
	} else if err != nil {
		log.Fatalln(err)
	}

	// Don't forget to defer Release.
	defer lock.Release(ctx)
	fmt.Println("I have a lock!")

	// Sleep and check the remaining TTL.
	time.Sleep(50 * time.Millisecond)
	if ttl, err := lock.TTL(ctx); err != nil {
		log.Fatalln(err)
	} else if ttl > 0 {
		fmt.Println("Yay, I still have my lock!")
	}

	// Extend my lock.
	if err := lock.Refresh(ctx, 100*time.Millisecond, nil); err != nil {
		log.Fatalln(err)
	}

	// Sleep a little longer, then check.
	time.Sleep(100 * time.Millisecond)
	if ttl, err := lock.TTL(ctx); err != nil {
		log.Fatalln(err)
	} else if ttl == 0 {
		fmt.Println("Now, my lock has expired!")
	}

}

Documentation

Full documentation is available on GoDoc

redislock's People

Contributors

bhallionohbibi, calvinxiao, dim, git-hulk, spike014, tinoquang, ttacon, vmihailenco, yansal


redislock's Issues

Should this return `context.Cause(ctx)`, or an error wrapped with `context.Cause(ctx)`, when `ctx.Done()`?

redislock/redislock.go

Lines 88 to 93 in 9348b33

select {
case <-ctx.Done():
	return nil, ErrNotObtained
case <-ticker.C:
}

Should this return context.Cause(ctx), or an error wrapped with context.Cause(ctx), when ctx.Done() fires?

Because ctx may carry a cancel cause, set via context.WithCancelCause somewhere in the user's parent context.

I will contribute a PR later. If there is any problem, please let me know.

lock.Release() unable to release lock.

The lock timeout becomes 0 after the set operation. That's why lock.Release(ctx) returns an ErrLockNotHeld error.

Attached is the value of lock timeout at various places...

  1. 24.9s after obtain()
  2. 24.899s after lock.TTL()
  3. 0s just after set()
  4. 0s just before lock.Release(ctx)

(screenshot attached in the original issue)

Obtain may have a problem

// Obtain tries to obtain a new lock using a key with the given TTL.
// May return ErrNotObtained if not successful.
func (c *Client) Obtain(ctx context.Context, key string, ttl time.Duration, opt *Options) (*Lock, error) {
	// Create a random token
	token, err := c.randomToken()
	if err != nil {
		return nil, err
	}

	value := token + opt.getMetadata()
	retry := opt.getRetryStrategy()

	// TODO: having only one context may invalidate a retry
	deadlinectx, cancel := context.WithDeadline(ctx, time.Now().Add(ttl))
	defer cancel()

	var timer *time.Timer
	for {
		ok, err := c.obtain(deadlinectx, key, value, ttl)
		if err != nil {
			return nil, err
		} else if ok {
			return &Lock{client: c, key: key, value: value}, nil
		}

		backoff := retry.NextBackoff()
		if backoff < 1 {
			return nil, ErrNotObtained
		}

		if timer == nil {
			timer = time.NewTimer(backoff)
			defer timer.Stop()
		} else {
			timer.Reset(backoff)
		}

		select {
		// TODO: the deadline should be multiplied by the retry max
		case <-deadlinectx.Done():
			return nil, ErrNotObtained
		case <-timer.C:
		}
	}
}

Two questions about the implementation

Hey, thanks for the awesome work! And I have two questions about current implementation, so I open this issue.

1. Race condition

I found this in official website section Distributed locks with Redis:

Superficially this works well, but there is a problem: this is a single point of failure in our architecture. What happens if the Redis master goes down? Well, let’s add a slave! And use it if the master is unavailable. This is unfortunately not viable. By doing so we can’t implement our safety property of mutual exclusion, because Redis replication is asynchronous.

There is an obvious race condition with this model:

  • Client A acquires the lock in the master.
  • The master crashes before the write to the key is transmitted to the slave.
  • The slave gets promoted to master.
  • Client B acquires the lock to the same resource A already holds a lock for. SAFETY VIOLATION!

What do you think?

2. Auto refresh

Assuming we have code snippet like

func main() {
	locker := redislock.New(client)

	ctx := context.Background()

	lock, err := locker.Obtain(ctx, "my-key", 100*time.Millisecond, nil)
	if err == redislock.ErrNotObtained {
		fmt.Println("Could not obtain lock!")
		return
	} else if err != nil {
		log.Fatalln(err)
	}
	defer lock.Release(ctx)
	
	// Try to do business logic here, and it exceeds expiration time
	// Then other locker(s) can obtain the lock before it's released
}

We may run into a problem when the business logic takes longer than the expiration time.

My solution to this issue is to add a watchdog, as Redisson does:

// If user passes AutoRefresh as the ttl param, we should set a default
// ttl for this lock, and auto refresh it after every ttl/3 duration.
const AutoRefresh = time.Duration(0)

// watch extends the lock with a new TTL automatically.
func (l *Lock) watch() {
	// Implementation detail
}

But yeah, this can be implemented by users themselves.

Thanks!
Kiyon.

Doesn't work with redis/v9

When I have this code

rc := GetRedisClient()
locker := redislock.New(rc)

And my redis helper file, uses redis v9 like below..

package main

import (
  "github.com/ilyakaznacheev/cleanenv"
  "github.com/go-redis/redis/v9"
  "github.com/rs/zerolog/log"
)

 ...
 ...

func GetRedisClient() *redis.Client{

  if client == nil {
    //fmt.Println("Obtaining a connection to Redis...")
    log.Print("Obtaining connection to Redis...")
    client = redis.NewClient(&redis.Options{
      Addr: cfg.Addr,
      Password: cfg.Password,
      DB: cfg.DB,
    })
  } else {
    log.Print("Reusing connection to Redis...")
  }
  return client
}

Then I get an error:
cmd/app/game-controller.go:85:27: cannot use rc (variable of type *"github.com/go-redis/redis/v9".Client) as type redislock.RedisClient in argument to redislock.New:
*"github.com/go-redis/redis/v9".Client does not implement redislock.RedisClient (wrong type for Eval method)
have Eval(ctx context.Context, script string, keys []string, args ...interface{}) *"github.com/go-redis/redis/v9".Cmd
want Eval(ctx context.Context, script string, keys []string, args ...interface{}) *"github.com/go-redis/redis/v8".Cmd
make: *** [makefile:8: build] Error 2

panic

panic stack:

runtime error: invalid memory address or nil pointer dereference
/usr/local/go/src/runtime/panic.go:260 (0x4500f5)
/usr/local/go/src/runtime/signal_unix.go:835 (0x4500c5)
/go/pkg/mod/github.com/bsm/[email protected]/redislock.go:175 (0xb05410)

No provision for using KeepTTL option of redis server itself

I was trying to use the library to lock a resource perpetually, without a TTL.
We can do that using the redis.KeepTTL (-1) option in the go-redis library directly in SetNX and SetXX,
but the library does not allow it, even when using the SetNX function of the RedisClient interface.

Allow setting/disabling Obtain deadline in Options

The current behavior is to simply use the ttl as the deadline, but you might want to keep trying for longer than the ttl, possibly even indefinitely. I believe it should be possible to set the deadline via the options, including having no deadline at all, relying on the context cancellation/deadline instead.

I had to work around this in our code by not using the built-in RetryStrategy and implementing the loop ourselves.

Global locker object or local

Hi,

I have verified that both approaches work: having a global locker object, and creating a new one at the point of acquiring a lock.

However, what is the best practice you would suggest: create the locker globally, or at the point of acquiring the lock?

Thanks.
Mahesh S.

Support for multiple clients

The Redlock specification talks about using multiple independent Redis servers to do distributed locking.

Does this library support that?

redis.Nil check for SetNX

if err == redis.Nil {

Maybe I'm misunderstanding something, but why does the code check redis.Nil for a call to SetNX?
I thought that error was returned only for Get.
I don't see anything in the documentation about this.

redis/v8 support

I am following the example in the GoDoc but get an error when passing the v8 redis client to redislock.New(client). It says the client is missing ScriptExists(scripts ...string) *redis.BoolSliceCmd, because in v8 the method takes a context as its first parameter. Let me know if you are seeing the same thing and whether you would like me to make a PR.

Strange behavior when locking in for loops

Hey, I'm working on a program in which I need to lock some keys in Redis every 10 seconds and do some operations. However, I have faced a strange problem. This is the part of my code that runs every 10 seconds:

keys, err := services.GetAllKeys(tempKey)
if err != nil {
	continue
}
for _, key := range keys {
	go processDeviceLogs(key)
}

The way it works is that it gets all the keys I want to use (keys) and passes them to a go routine, inside the routine:
lock, err := services.ObtainLock(key)
which services.ObtainLock looks like

func ObtainLock(key string) (*redislock.Lock, error) {
	lock, err := BeethovenCacheProvider.Locker.Obtain(key, 200*time.Millisecond, nil)
	if err == redislock.ErrNotObtained {
		log.Errorf("Could not obtain lock: %v", err)
		return lock, err
	} else if err != nil {
		log.Errorf("Error when locking key %v : %v", key, err)
		return lock, err
	}
	return lock, nil
}

The problem is that the locking mechanism does not work and none of the keys get locked! I did some debugging and added some statements:

keys, err := services.GetAllKeys(tempKey)
if err != nil {
	continue
}
for _, key := range keys {
	lock, err := services.BeethovenCacheProvider.Locker.Obtain("device_log:-:device_id:-:1", 200*time.Second, nil)
	lock2, err := services.BeethovenCacheProvider.Locker.Obtain("device_log:-:device_id:-:2", 200*time.Second, nil)
	if err == redislock.ErrNotObtained {
		log.Errorf("Could not obtain lock for key %v : %v", "device_log:-:device_id:-:1", err)
	} else if err != nil {
		log.Errorf("Error when locking key %v : %v", "device_log:-:device_id:-:1", err)
	}
	time.Sleep(100 * time.Second)
	_ = lock
	_ = lock2
	_ = key
	//go processDeviceLogs(key)
}

After checking each part, I understood that the problem is not related to the goroutine, to getting the keys from Redis, or even to using the keys! The problem was actually the for loop. So the above code doesn't work, but when I change the for loop to:

keys, err := services.GetAllKeys(tempKey)
if err != nil {
	continue
}
_ = keys
for i := 1; i < 5; i++ {
	lock, err := services.BeethovenCacheProvider.Locker.Obtain("device_log:-:device_id:-:1", 200*time.Second, nil)
	lock2, err := services.BeethovenCacheProvider.Locker.Obtain("device_log:-:device_id:-:2", 200*time.Second, nil)
	if err == redislock.ErrNotObtained {
		log.Errorf("Could not obtain lock for key %v : %v", "device_log:-:device_id:-:1", err)
	} else if err != nil {
		log.Errorf("Error when locking key %v : %v", "device_log:-:device_id:-:1", err)
	}
	time.Sleep(100 * time.Second)
	_ = lock
	_ = lock2
	//_ = key
	//go processDeviceLogs(key)
}

The above code works just fine, and all I changed was the for loop. I really don't get what is happening, but I would appreciate some help. Thanks for your library.

[Proposal] Allow redislock to compatible with multi versions of go-redis

Currently, redislock only depends on the Redis EVAL/EVALSHA commands,
but it directly uses the go-redis/v9 scripter as the RedisClient interface,
which makes it incompatible with other versions of the Redis client such as go-redis/v8.

I think we can avoid binding to any particular version by moving the scripter outside of go-redis.

I will be happy to support this if it sounds good to you.

related issue: #48

Incompatible with new version.

Thanks for this good repo.
Now I am trying to update the version and get this:

cannot use redisClient (type *"github.com/go-redis/redis".Client) as type redislock.RedisClient in argument to redislock.New:
        *"github.com/go-redis/redis".Client does not implement redislock.RedisClient (wrong type for Eval method)
                have Eval(string, []string, ...interface {}) *"github.com/go-redis/redis".Cmd
                want Eval(string, []string, ...interface {}) *"github.com/go-redis/redis/v7".Cmd

Generic Interface

The interface in the package is closely coupled to github.com/go-redis/redis/. In my application I use https://github.com/gomodule/redigo, which does not have return types like redis.BoolSliceCmd.

Suggestion: can we make the interface more generic?

// RedisClient is a minimal client interface.
type RedisClient interface {
	SetNX(key string, value interface{}, expiration time.Duration) *redis.BoolCmd
	Eval(script string, keys []string, args ...interface{}) *redis.Cmd
	EvalSha(sha1 string, keys []string, args ...interface{}) *redis.Cmd
	ScriptExists(scripts ...string) *redis.BoolSliceCmd
	ScriptLoad(script string) *redis.StringCmd
}

Does redislock work properly with multiple concurrent requests to a Go Gin API?

Hi, I made an API that receives a file:

func main() {
	route := gin.Default()
	route.POST("/upload", func(c *gin.Context) {
		// process file
		clientL := redis.NewClient(&redis.Options{
			Network: "tcp",
			Addr:    "localhost:6379",
			DB:      0,
		})
		defer clientL.Close()
		locker := redislock.New(clientL)
		ctx := context.Background()
		lock, err := locker.Obtain(ctx, "file", 5000*time.Millisecond, nil)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer lock.Release(ctx)
		fmt.Println("pass")
		// return response
	})
	route.Run("0.0.0.0:8080")
}

Normally, I would expect that when multiple requests want to upload a file at the same time, they pause at the line where I obtain the lock; once my lock is released or times out, they continue, and if it doesn't, the request fails. But when I make a client that sends 4 requests to my API at the same time, all of them can obtain the lock, and the same happens for subsequent calls.

Did I do something wrong or I misunderstand something in this concept, please help!

Please Follow Semantic Import Versioning

Thank you for this package. My company uses it to maintain distributed locks. We appreciate your contribution, but would appreciate it even more if your feature updates were backwards compatible. Changing method signatures without bumping the major version causes problems, because Go does not expect it and we get build and import errors. There's nothing wrong with moving to a v2 at some point.

A question about ttl

deadlinectx, cancel := context.WithDeadline(ctx, time.Now().Add(ttl))

redislock/redislock.go

Lines 89 to 91 in 97011e6

case <-deadlinectx.Done():
	return nil, ErrNotObtained
case <-timer.C:

Why is the deadline here tied to the value of ttl and not to a separate option? If I want to keep blocking and waiting until the other party actively releases the lock, I find I have no way to implement that requirement.

when obtain have retry, hope the lock ttl will auto delay

lock, err := locker.Obtain(ctx, fmt.Sprintf("update-ml-folder:%v", folder.ID), 12*time.Second, &redislock.Options{
	RetryStrategy: redislock.LimitRetry(redislock.LinearBackoff(2*time.Second), 30),
})

With code like this, Obtain times out after 12 seconds (if someone else holds the lock). I would like it to wait up to 60 seconds and, once it obtains the lock, still hold the lock for 12 seconds.

How to get lock with a key

This is not an issue, just a question.
How do we get a lock object having just its key?
We need this in scenarios where we acquire a lock in one thread and try to release the same lock in another thread.
Thanks, team.

Support go-redis V7.2.0

Hi there,

Since go-redis/redis v7.2.0 is stable now, using the latest go-redis with redislock will lead to error:

Cannot use 'client' (type *redis.Client) as type RedisClient Type does not implement 'RedisClient' need method: ScriptExists(scripts ...string) *redis.BoolSliceCmd have method: ScriptExists(hashes ...string) *BoolSliceCmd

Is there any plan to support this version?

Thanks.

README example can't run

lock, err := locker.Obtain(ctx, "my-key", 100*time.Millisecond, nil)
if err == redislock.ErrNotObtained {
	fmt.Println("Could not obtain lock!")
} else if err != nil {
	log.Fatalln(err)
}

RedisClient error

I tried your example code, but it doesn't work.

Go version: go1.19.1 windows/amd64
redislock version in go mod file: github.com/bsm/redislock v0.8.0
redis version in go mod file: github.com/go-redis/redis/v9 v9.0.0-beta.3

I have error as below.

redislock.go:149:31: cannot use l.client.client (variable of type RedisClient) as type redis.Scripter in argument to
luaPTTL.Run:
RedisClient does not implement redis.Scripter (missing EvalRO method)
C:\Users\P1608\go\pkg\mod\github.com\bsm\[email protected]\redislock.go:166:37: cannot use l.client.client (variable of type RedisClient) as type redis.Scripter in argument to
luaRefresh.Run:
RedisClient does not implement redis.Scripter (missing EvalRO method)
C:\Users\P1608\go\pkg\mod\github.com\bsm\[email protected]\redislock.go:178:34: cannot use l.client.client (variable of type RedisClient) as type redis.Scripter in argument to
luaRelease.Run:
RedisClient does not implement redis.Scripter (missing EvalRO method)
