
cache's People

Contributors

858806258, appleboy, aviddiviner, dependabot[bot], dpordomingo, easonlin404, guutong, hashworks, ianberdin, iceyer, inooka-shiroyuki, javierprovecho, kimiazhu, llinder, manucorporat, mcastilho, mirzac, mnuma, mopemope, oryband, ptrkrlsrd, rfyiamcool, robinmao, saschat, silvercory, steeve, thinkerou, turtlemonvh, utrack, yuyabee


cache's Issues

Does this support a dynamic key for a route?

Let's say we have an authentication-required API endpoint, GET /userdata, that returns details about the authenticated user who calls it.

user1 calls GET /userdata and wants the response for user1
user2 calls GET /userdata and wants the response for user2

Does this middleware support the above?

How do we achieve this? Currently the cache creates a single key per route endpoint, which does not work here, and I would think others have the use case described above.
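
One possible workaround is to bypass CachePage for this route and key the persistence store directly on the authenticated user. A minimal sketch, where the X-User-ID header and the string body are stand-ins for a real auth middleware and a real payload:

package main

import (
	"net/http"
	"time"

	"github.com/gin-contrib/cache/persistence"
	"github.com/gin-gonic/gin"
)

func main() {
	store := persistence.NewInMemoryStore(time.Minute)
	r := gin.Default()

	r.GET("/userdata", func(c *gin.Context) {
		// Hypothetical: the user identity comes from a header set by your auth middleware.
		userID := c.GetHeader("X-User-ID")
		key := "userdata:" + userID

		var body string
		if err := store.Get(key, &body); err == nil {
			c.String(http.StatusOK, body) // per-user cache hit
			return
		}

		body = "data for " + userID // stand-in for the real per-user lookup
		_ = store.Set(key, body, time.Minute)
		c.String(http.StatusOK, body)
	})

	r.Run(":8080")
}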

Support memcached client with binary protocol

Currently the memcached store is backed only by gomemcache, which supports only the ASCII protocol and no SASL authentication. As such, this memcached store will not work in many cloud environments.
It would be nice if the memcached store could also be backed by the mc client, which has these features.

PS: I am willing to write a PR if you are open to it.

Can I ask you some questions?

cache/cache.go

Line 113 in a8e2fb1

func Cache(store *persistence.CacheStore) gin.HandlerFunc {

Hi, I'm studying this project's code and I want to ask two questions:

  1. Why does the Cache function take a pointer to an interface as its parameter? How can I use this function?
  2. Why does the SiteCache function define an unused parameter, expire? Where should I use this function?

Could you answer when you have time? Looking forward to your reply. Thank you!
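
A minimal sketch, based only on the signature quoted above, of one way to produce a *persistence.CacheStore value to pass in; whether that extra indirection is really needed is exactly what question 1 asks:

package main

import (
	"time"

	"github.com/gin-contrib/cache"
	"github.com/gin-contrib/cache/persistence"
	"github.com/gin-gonic/gin"
)

func main() {
	// NewInMemoryStore returns a concrete store; assigning it to an
	// interface-typed variable lets us take the address the signature asks for.
	var store persistence.CacheStore = persistence.NewInMemoryStore(time.Minute)

	r := gin.Default()
	// Cache returns a gin.HandlerFunc, so it can be mounted like any middleware.
	r.Use(cache.Cache(&store))
	r.Run(":8080")
}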

gopkg url?

I'm kind of a newbie to Go and gopkg, but I'm using gin via gopkg.in/gin-gonic/gin.v1, and it won't let me use this package (types conflict...):

cannot use "github.com/gin-contrib/cache".CachePage(store, time.Second * 10, Search) (type "github.com/gin-gonic/gin".HandlerFunc) as type "github.com/aequasi/search.discordservers.com/vendor/gopkg.in/gin-gonic/gin.v1".HandlerFunc in argument to r.RouterGroup.GET

How to set a custom HTTP header when the cache is hit?

Hi, I wonder if I can set a custom header to inform the client that the cache has been hit. Is this even possible?
I didn't find a resolution in the documentation or the issues. I'd be grateful for any workarounds.

Thanks!

Appending cache values

[screenshot of the cache-append code in cachedWriter.Write]

Is this a bug or a feature? I don't understand why the cache is checked again and the value appended if a value already exists.

The first time we check the cache is when the CachePage middleware runs, and this is the second check. There is a problem with concurrent identical requests.

[Feature Request] Add ability to ignore GET query parameters

Otherwise one could always cause cache misses by adding various GET query parameters like ?foo=1.

func TestCachePageWithoutQuery(t *testing.T) {
	store := persistence.NewInMemoryStore(60 * time.Second)

	router := gin.New()
	router.GET("/cache_without_query", CachePageWithoutQuery(store, time.Second*3, func(c *gin.Context) {
		c.String(200, "pong "+fmt.Sprint(time.Now().UnixNano()))
	}))

	w1 := performRequest("GET", "/cache_without_query?foo=1", router)
	w2 := performRequest("GET", "/cache_without_query?foo=2", router)

	assert.Equal(t, 200, w1.Code)
	assert.Equal(t, 200, w2.Code)
	assert.Equal(t, w1.Body.String(), w2.Body.String())
}

Cache in InMemoryStore is broken

You may find the source code at https://github.com/fzlee/GoldenFly; it's a personal blog.

In short, the following code demonstrates how I use this middleware:

// main.go,  init memory store
store := persistence.NewInMemoryStore(5 * time.Second)
// page/router.go, creating cached view
router.GET("/articles-sidebar/", cache.CachePage(store, time.Second * 10, PageSideBarView))

You may reproduce this issue by the following steps:

  1. Go to http://ifconfiger.com
  2. Open the web developer tools
  3. Refresh the page several times

There is a chance we get an invalid JSON response from the server like this:
[screenshot: invalid JSON response]

It seems the memory block used to hold the cached response is modified by other requests.

Fail serving images

A cached request fails with Content-Type: application/octet-stream when a route returns an image:

	r.GET("/cache", cache.CachePage(store, time.Minute, func(c *gin.Context) {
		img := image.NewRGBA(image.Rect(0, 0, 640, 480))
		blue := color.RGBA{0, 0, 255, 255}
		draw.Draw(img, img.Bounds(), &image.Uniform{blue}, image.ZP, draw.Src)

		err := jpeg.Encode(c.Writer, img, &jpeg.Options{
			Quality: jpeg.DefaultQuality,
		})
		if err != nil {
			_ = c.AbortWithError(http.StatusInternalServerError, errors.New("something went wrong"))
			return
		}
	}))

Delete cached elements

Hello, everybody.

I'm facing one issue: I want to remove some elements from the Redis cache by pattern, not all of them. Can you add this kind of feature?
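
For a single known URL (not a pattern), something like the following should already be possible with the exported CreateKey helper and the store's Delete method, assuming cached pages are keyed by CreateKey of the request URI as the SiteCache source quoted in a later issue suggests:

package main

import (
	"log"
	"time"

	"github.com/gin-contrib/cache"
	"github.com/gin-contrib/cache/persistence"
)

func main() {
	// The Redis store would normally be shared with the middleware; an
	// in-memory store is used here only to keep the sketch self-contained.
	store := persistence.NewInMemoryStore(time.Minute)

	// Invalidate the cached copy of a single URL. Pattern-based deletion would
	// need support inside the store itself (e.g. Redis SCAN + DEL), which the
	// middleware does not expose today.
	key := cache.CreateKey("/articles-sidebar/")
	if err := store.Delete(key); err != nil && err != persistence.ErrCacheMiss {
		log.Println("invalidate:", err)
	}
}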

statistics about cached pages / invalidation

Hi guys,

Hope you are all well !

I know I am posting this in a desert of unanswered issues, despite this being part of the awesome gin-gonic bundle :-), but I was wondering how complicated it would be to have a stats controller listing the cached pages along with their expiration dates.

Ultimately, it would also be nice to be able to invalidate a cached page. But how?

Thanks for any insights or inputs on that.

Cheers,
Luc Michalski

Headers are doubled instead of ignored/overridden

If one uses middleware like secure to set headers, they will be doubled unless one uses CachePageWithoutHeader:

HTTP/1.1 200 OK
[...]
Referrer-Policy: no-referrer
[...]
HTTP/1.1 200 OK
[...]
Referrer-Policy: no-referrer
Referrer-Policy: no-referrer
[...]

Instead, the middleware should only set headers that don't already exist, or overwrite them, rather than adding duplicates.
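
A minimal sketch of the workaround the report alludes to, assuming CachePageWithoutHeader shares CachePage's signature and replays only the cached body on a hit:

package main

import (
	"net/http"
	"time"

	"github.com/gin-contrib/cache"
	"github.com/gin-contrib/cache/persistence"
	"github.com/gin-gonic/gin"
)

func main() {
	store := persistence.NewInMemoryStore(time.Minute)
	r := gin.Default()

	// Stand-in for gin-contrib/secure: middleware that sets a response header.
	r.Use(func(c *gin.Context) {
		c.Header("Referrer-Policy", "no-referrer")
		c.Next()
	})

	// CachePageWithoutHeader replays only the cached body on a hit, so the
	// header set by the outer middleware is not written a second time.
	r.GET("/", cache.CachePageWithoutHeader(store, time.Minute, func(c *gin.Context) {
		c.String(http.StatusOK, "hello")
	}))

	r.Run(":8080")
}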

Cache containing data multiple times

After 12 days in production without any problem, I received a strange response from a cached route.

Normally my response is a JSON object like this:

{
     "data":{}
}

In my case the cached response looks like this:

{
     "data":{}
}{
     "data":{}
}{
     "data":{}
}

This results in unparseable JSON.

Is it a concurrency problem, and how do I avoid it?
Is there a way to prevent multiple concurrent calls from being cached at the same time?

Thanks
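
One possible single-process mitigation, assuming CachePageAtomic (mentioned in a later issue below) simply serializes CachePage behind a mutex; it would not help across several instances sharing one store:

package main

import (
	"net/http"
	"time"

	"github.com/gin-contrib/cache"
	"github.com/gin-contrib/cache/persistence"
	"github.com/gin-gonic/gin"
)

func main() {
	store := persistence.NewInMemoryStore(5 * time.Second)
	r := gin.Default()

	// CachePageAtomic serializes concurrent requests inside this process, so
	// two simultaneous misses cannot both write (and append to) the same entry.
	r.GET("/data", cache.CachePageAtomic(store, 10*time.Second, func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{"data": gin.H{}})
	}))

	r.Run(":8080")
}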

SiteCache cannot work as middleware?

I thought SiteCache was middleware, but when I used it, it didn't work.
I looked at the source code, shown below, and added a note:

func SiteCache(store persistence.CacheStore, expire time.Duration) gin.HandlerFunc {
	return func(c *gin.Context) {
		var cache responseCache
		url := c.Request.URL
		key := CreateKey(url.RequestURI())
		if err := store.Get(key, &cache); err != nil {
			c.Next() // just calls the next handler; why not save the response, like the other decorators?
		} else {
			fmt.Println("cache hited: ", string(cache.Data))
			c.Writer.WriteHeader(cache.Status)
			for k, vals := range cache.Header {
				for _, v := range vals {
					c.Writer.Header().Set(k, v)
				}
			}
			c.Writer.Write(cache.Data)
		}
	}
}

Then I changed the code as follows, and it worked the way I want:

func SiteCache(store persistence.CacheStore, expire time.Duration) gin.HandlerFunc {
	return func(c *gin.Context) {
		var cache responseCache
		url := c.Request.URL
		key := CreateKey(url.RequestURI())
		if err := store.Get(key, &cache); err != nil {

			if err != persistence.ErrCacheMiss {
				log.Println(err.Error())
			}
			// replace writer
			writer := newCachedWriter(store, expire, c.Writer, key)
			c.Writer = writer
			c.Next()
			// Drop caches of aborted contexts
			if c.IsAborted() {
				store.Delete(key)
			}

		} else {
			c.Writer.WriteHeader(cache.Status)
			for k, vals := range cache.Header {
				for _, v := range vals {
					c.Writer.Header().Set(k, v)
				}
			}
			c.Writer.Write(cache.Data)
			c.Abort()
		}
	}
}

So, what is SiteCache designed for? Middleware or not? I'm confused.
Waiting for your answer.

Proposal to use Souin as an HTTP cache system

Hello there, first thank you for your awesome work on gin and the ecosystem.
I would like to know whether it would be accepted if I proposed integrating the Souin HTTP cache system as the cache management system for the gin cache contrib middleware.
It supports both distributed and non-distributed storage, using Olric (distributed) and Badger (non-distributed), complies with RFC 7234, and supports the Cache-Status HTTP header from the associated new RFC.
I have already written middleware handlers for Træfik, Caddy (where it will soon replace the official one), Tyk, and Echo.
Souin can invalidate a CDN placed on top of your stack (such as Cloudflare, Fastly, ...) and can push the returned data into the CDN cache so it is served as fast as possible. This would delegate the core cache management to a fully working solution that respects the standards.

Open to discussing it with you ✌️.

Unable to create a shared cache for several instances of the same HTTP service

Hello.

We are running several instances of a service with a shared Redis cache and we found some concurrency problems.

If I understood the logic correctly, it is like this:

  1. Check the cache.
  2. If there is a cached value, then return it.
  3. If not, then create a writer and run handler.
  4. When handler is done, write response to cache.
  5. The least obvious part: if there is now a cached value (it must have appeared during step 3), append the current response data to it instead of overwriting it.

    cache/cache.go

    Lines 84 to 86 in 6b4ffed

    if err := store.Get(w.key, &cache); err == nil {
        data = append(cache.Data, data...)
    }
  6. ???
  7. PROFIT

So, when we send two identical requests and our balancer routes them to different instances, we hit the problem mentioned in the closed issue #15: both instances run the handler, the first one to finish adds the value to the cache, and the second one appends to it again.

What is the logic behind step 5? Or maybe we are using this lib incorrectly?

Thank you.

Why make CachePage public?

CachePage is not thread-safe. In high-concurrency scenarios, CachePage can corrupt the response data.
I think CachePageAtomic is the only right choice. Why not just make CachePage private?

cache doesn't work.

Hello, I have a problem when using this middleware.
Neither SiteCache nor CachePage ever hits the cache.
I think the reason is that the cachedWriter.Write function is never executed.

Not caching 204

I made a small POC. According to the code and comments in the middleware, it should cache any response with a status code < 300, but it caches only 200.

[screenshot: 204 response not being cached]

Output always results in status code 200; the status code should be cached

Returning a body with any status code other than 200 always results in a cached page with a 200 status code. This even occurs with AbortWithStatusJSON! The status code should be cached, and aborted requests should not be cached.

package main

import (
	"github.com/gin-contrib/cache"
	"github.com/gin-contrib/cache/persistence"
	"github.com/gin-gonic/gin"
	"time"
)

func main() {
	gin.DisableConsoleColor()

	router := gin.Default()
	store := persistence.NewInMemoryStore(time.Minute)

	router.GET("/status500", cache.CachePage(store, time.Minute, func(c *gin.Context) {
		c.Status(500)
	}))

	router.GET("/string500", cache.CachePage(store, time.Minute, func(c *gin.Context) {
		c.String(500, "500 error")
	}))

	router.GET("/abort500", cache.CachePage(store, time.Minute, func(c *gin.Context) {
		c.AbortWithStatus(500)
	}))

	router.GET("/abortJSON500", cache.CachePage(store, time.Minute, func(c *gin.Context) {
		c.AbortWithStatusJSON(500, map[string]string{"foo": "bar"})
	}))

	router.GET("/teapot", cache.CachePage(store, time.Minute, func(c *gin.Context) {
		c.String(418, "I’m a teapot")
	}))

	router.Run("127.0.0.1:8000")
}
// Simply calling status without serving a body works:
[GIN] 2018/09/13 - 13:02:45 | 500 |       2.679µs |       127.0.0.1 | GET      /status500
[GIN] 2018/09/13 - 13:02:45 | 500 |       6.659µs |       127.0.0.1 | GET      /status500

// A string as a body doesn't, results in a cached 200:
[GIN] 2018/09/13 - 13:02:51 | 500 |      19.413µs |       127.0.0.1 | GET      /string500
[GIN] 2018/09/13 - 13:02:52 | 200 |       8.176µs |       127.0.0.1 | GET      /string500

// Simple abort works again, because it doesn't serve a body
[GIN] 2018/09/13 - 13:02:57 | 500 |       4.196µs |       127.0.0.1 | GET      /abort500
[GIN] 2018/09/13 - 13:02:59 | 500 |        3.75µs |       127.0.0.1 | GET      /abort500

// Aborting with JSON doesn't work as well, results in a cached 200:
[GIN] 2018/09/13 - 13:03:03 | 500 |      35.687µs |       127.0.0.1 | GET      /abortJSON500
[GIN] 2018/09/13 - 13:03:04 | 200 |      22.353µs |       127.0.0.1 | GET      /abortJSON500

// Any other status codes don't get cached as well:
[GIN] 2018/09/13 - 13:03:09 | 418 |       7.608µs |       127.0.0.1 | GET      /teapot
[GIN] 2018/09/13 - 13:03:10 | 200 |       6.878µs |       127.0.0.1 | GET      /teapot

Can't use dep

When I install this package with dep, an error is thrown:

./main.go:20:39: cannot use "${my-package-path}/vendor/github.com/gin-contrib/cache".CachePage(store, 60 * time.Second, func literal) (type "${my-package-path}/vendor/gopkg.in/gin-gonic/gin.v1".HandlerFunc) as type "${my-package-path}/vendor/github.com/gin-gonic/gin".HandlerFunc in argument to router.RouterGroup.GET
./main.go:20:65: cannot use func literal (type func(*"${my-package-path}/vendor/github.com/gin-gonic/gin".Context)) as type "${my-package-path}/vendor/gopkg.in/gin-gonic/gin.v1".HandlerFunc in argument to "${my-package-path}/vendor/github.com/gin-contrib/cache".CachePage
