caddy-ratelimit's People

Contributors

dependabot[bot], divergentdave, dunglas, icecodenew, inahga, mholt, mohammed90, popcorn, steffenbusch, sylloger, tgeoghegan


caddy-ratelimit's Issues

Error during parsing: rate_limit is not a registered directive

Hi, I am trying to put together a Caddyfile. I am a complete newbie (I literally started using Caddy today), so it's entirely possible that I misunderstood something. The error I am getting is: `Error during parsing: rate_limit is not a registered directive`. The Dockerfile is at the end. If I take out the order directive, I get `/etc/caddy/Caddyfile:49: unrecognized directive: rate_limit` instead.

{
  admin off

  order rate_limit before basicauth

  log {
    output file /var/log/access.log {
      roll_size 40MiB
      roll_uncompressed
      roll_local_time
    }
  }
}

(common) {
  header /* {
    -Server
  }
}

http://mtkk.localhost {
  @static_asset {
    path_regexp static \.(webp|svg|css|js|jpg|png|gif|ico|woff|woff2)$
  }

  @hashed_asset {
    path_regexp static \.(css|js)$
  }

  log

  header {
    # # disable FLoC tracking
    # Permissions-Policy interest-cohort=()

    # # enable HSTS
    # Strict-Transport-Security max-age=31536000;

    # disable clients from sniffing the media type
    X-Content-Type-Options nosniff

    # clickjacking protection
    X-Frame-Options DENY

    # keep referrer data off of HTTP connections
    Referrer-Policy no-referrer-when-downgrade
  }

  rate_limit {
    distributed
    zone static_example {
      key    static
      events 100
      window 1m
    }
  }


  root * /var/www/uploads/img/
  file_server @static_asset
  reverse_proxy adonis_app:3333
  encode zstd gzip

	header ?Cache-Control max-age=3600
	header @hashed_asset Cache-Control max-age=31536000

  import common
}


I built a custom Dockerfile:

FROM caddy:2.5.2-builder-alpine AS builder

RUN xcaddy build \
--with github.com/mholt/caddy-ratelimit

FROM caddy:2.5.2-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

Edit: removed extra }

not enough arguments

#20 125.1 /go/pkg/mod/github.com/mholt/[email protected]/distributed.go:106:87: not enough arguments in call to h.storage.Store
#20 125.1 	have (string, []byte)
#20 125.1 	want (context.Context, string, []byte)
#20 125.1 /go/pkg/mod/github.com/mholt/[email protected]/distributed.go:116:54: not enough arguments in call to h.storage.List
#20 125.1 	have (string, bool)
#20 125.1 	want (context.Context, string, bool)
#20 125.1 /go/pkg/mod/github.com/mholt/[email protected]/distributed.go:132:34: not enough arguments in call to h.storage.Load
#20 125.1 	have (string)
#20 125.1 	want (context.Context, string)

Multiple routes issue

I have the following Caddyfile:


{
  order rate_limit before basicauth
  admin off
}

(rate_limit_num_per_min) {
  rate_limit {
    zone register_limit {
      key    {http.request.remote.host}
      events {args.0}
      window {args.1}s
    }
  }
}



localhost {

  encode zstd gzip
  reverse_proxy /*  https://www.example.com

  route  /user/login {
    import rate_limit_num_per_min 5 10
  }

  route  /user/register {
   import rate_limit_num_per_min 1 10
  }

}

However, the same rate limit is applied on both limited routes (5 requests per 10 seconds, from the first import), and both routes share the cooldown, which is not the desired outcome.

Maybe it is supposed to work like that (I am not very experienced with Caddy)?

If there is a better way to do this, or even to group the rate limit to cover multiple routes with the same parameters without sharing the limit, I would appreciate the help.
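One possible way to keep the limits separate (a sketch, not an authoritative answer): zones appear to be identified by name, so importing the same snippet twice reuses the single register_limit zone and therefore its state. Passing a distinct zone name as an extra argument would give each route its own counter. The zone names here are illustrative:

```Caddyfile
(rate_limit_per_route) {
  rate_limit {
    # {args.0} is substituted at import time, so each import
    # can name its own zone and keep a separate counter
    zone {args.0} {
      key    {http.request.remote.host}
      events {args.1}
      window {args.2}s
    }
  }
}

localhost {
  route /user/login {
    import rate_limit_per_route login_limit 5 10
  }

  route /user/register {
    import rate_limit_per_route register_limit 1 10
  }
}
```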

Cannot build using xcaddy 0.3.1

I get an error when building with the new xcaddy:

root@ip:/tmp# xcaddy build --with github.com/mholt/caddy-ratelimit --output ./caddy1
2022/10/03 11:16:04 [INFO] Temporary folder: /tmp/buildenv_2022-10-03-1116.2667582345
2022/10/03 11:16:04 [INFO] Writing main module: /tmp/buildenv_2022-10-03-1116.2667582345/main.go
package main

import (
        caddycmd "github.com/caddyserver/caddy/v2/cmd"

        // plug in Caddy modules here
        _ "github.com/caddyserver/caddy/v2/modules/standard"
        _ "github.com/mholt/caddy-ratelimit"
)

func main() {
        caddycmd.Main()
}
2022/10/03 11:16:04 [INFO] Initializing Go module
2022/10/03 11:16:04 [INFO] exec (timeout=10s): /usr/local/go/bin/go mod init caddy 
go: creating new go.mod: module caddy
go: to add module requirements and sums:
        go mod tidy
2022/10/03 11:16:04 [INFO] Pinning versions
2022/10/03 11:16:04 [INFO] exec (timeout=0s): /usr/local/go/bin/go get -d -v github.com/caddyserver/caddy/v2 
go: added github.com/beorn7/perks v1.0.1
go: added github.com/caddyserver/caddy/v2 v2.6.1
go: added github.com/caddyserver/certmagic v0.17.1
go: added github.com/cespare/xxhash/v2 v2.1.2
go: added github.com/fsnotify/fsnotify v1.5.1
go: added github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0
go: added github.com/golang/mock v1.6.0
go: added github.com/golang/protobuf v1.5.2
go: added github.com/google/uuid v1.3.0
go: added github.com/klauspost/cpuid/v2 v2.1.0
go: added github.com/libdns/libdns v0.2.1
go: added github.com/lucas-clemente/quic-go v0.28.2-0.20220813150001-9957668d4301
go: added github.com/marten-seemann/qpack v0.2.1
go: added github.com/marten-seemann/qtls-go1-18 v0.1.2
go: added github.com/marten-seemann/qtls-go1-19 v0.1.0
go: added github.com/matttproud/golang_protobuf_extensions v1.0.1
go: added github.com/mholt/acmez v1.0.4
go: added github.com/miekg/dns v1.1.50
go: added github.com/nxadm/tail v1.4.8
go: added github.com/onsi/ginkgo v1.16.4
go: added github.com/prometheus/client_golang v1.12.2
go: added github.com/prometheus/client_model v0.2.0
go: added github.com/prometheus/common v0.32.1
go: added github.com/prometheus/procfs v0.7.3
go: added go.uber.org/atomic v1.9.0
go: added go.uber.org/multierr v1.6.0
go: added go.uber.org/zap v1.21.0
go: added golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa
go: added golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e
go: added golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3
go: added golang.org/x/net v0.0.0-20220812165438-1d4ff48094d1
go: added golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10
go: added golang.org/x/term v0.0.0-20210927222741-03fcf44c2211
go: added golang.org/x/text v0.3.8-0.20211004125949-5bd84dd9b33b
go: added golang.org/x/tools v0.1.10
go: added golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
go: added google.golang.org/protobuf v1.28.0
go: added gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7
2022/10/03 11:16:06 [INFO] exec (timeout=0s): /usr/local/go/bin/go get -d -v github.com/mholt/caddy-ratelimit github.com/caddyserver/caddy/v2 
go: github.com/mholt/caddy-ratelimit@upgrade (v0.0.0-20220930195153-598f4b82c131) requires github.com/caddyserver/caddy/[email protected], not github.com/caddyserver/caddy/v2@upgrade (v2.6.1)
2022/10/03 11:16:06 [FATAL] exit status 1

Docker: caddy-ratelimit is not referring to the original IPv6 address.

Hi,

I run Caddy in an IPv4-only Docker network.

The problem is that Docker's NAT translates IPv6 addresses to an IPv4 address, and caddy-ratelimit keys on this IPv4 address, which is the same for all IPv6 clients. Therefore, the rate limit does not work correctly.

Is it possible to extract the original IPv6 header somehow? Perhaps using the X-Forwarded-For header or something similar?
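If the original client address survives in a forwarding header, one workaround (a sketch; the exact header depends on your front proxy, and this is only safe when that header is set by a trusted proxy, since clients can forge it) is to key the zone on the header instead of the socket address:

```Caddyfile
example.com {
  rate_limit {
    zone per_client {
      # key on the forwarded address rather than the NAT'd socket address
      key    {http.request.header.X-Forwarded-For}
      events 100
      window 1m
    }
  }
}
```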

Need guide

Hey there,

How can I enable this just for a specific route?

route /api/gateway/* {
    # I just want to rate limit here:
    # allow each user to send 100 requests per minute

    rewrite * /graphql
    reverse_proxy http://tribe-gateway.development.svc.cluster.local
}

Thanks!
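A sketch of one way to scope the limit to that route (assuming an order for rate_limit, such as `order rate_limit before basicauth`, is set in the global options so the directive can be used at all; the zone name and numbers are illustrative):

```Caddyfile
route /api/gateway/* {
  # the handler only runs for requests matching this route
  rate_limit {
    zone gateway {
      key    {http.request.remote.host}
      events 100
      window 1m
    }
  }

  rewrite * /graphql
  reverse_proxy http://tribe-gateway.development.svc.cluster.local
}
```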

Storage keys from ephemeral Caddy instances are never deleted

We have deployed Caddy, along with caddy-ratelimit and caddy-storage-redis, in a containerized environment, with Redis as our only permanent storage. As a result, each time the deployment scales up, the new Caddy processes pick new random instance IDs. We've noticed that the list of rlState entries in Redis has been adding up; in one case we have four thousand. The distributed rate limiting read loop has been using more and more CPU as this list has grown, and profiling shows the time is mainly spent in the Redis driver and Gob decoding.

I noticed that this plugin doesn't delete entries from storage. This storage growth could be fixed if syncDistributedRead() also deleted entries where rlState.Timestamp was very old during its scan.

In our case, it would be nice if we could make use of Redis's expiration feature to automatically delete old entries. However, the Storage interface doesn't have a place for extra metadata when storing a value, and caddy-storage-redis currently passes an expiration time of zero with all writes. Moreover, a deployment using ephemeral Caddy instances and an NFS storage backend would have the same issue with unbounded storage growth.

using with trusted_proxies / behind another proxy

Thanks for your work on this, I am looking to implement this plugin to stop spam at the caddy level and rate limiting seems to be the best thing to do.

I am having an issue whereby the limits work, but they rate limit all requests under a single placeholder remote_host. I believe this is because rate_limit runs outside of the reverse_proxy handler, so trusted_proxies is not applied before rate_limit.

Is there a way to accomplish this?

Different storage for rate_limit

Hello,

Is it possible to store rate_limit related information in a different storage than the default Caddy's one?

If you for example use NFS to synchronize TLS certificates and so on, it would be nice to be able to set another storage (like Redis?) for rate_limit, as the delay in NFS reads/writes can affect the performance of the rate limiter.

Thanks in advance

Response message is missing

Hi,

I successfully tested your rate limiter.
If I hit the limit, I get a 429 response, but it is missing any content.

I would like to see something like "Too Many Requests. Please try again later." in my browser.

I am not sure if this is possible; otherwise it would be nice to add this feature in a later release.
From a user's perspective, an empty response is not nice.

Thank you and great work!
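An untested sketch: if the 429 produced by the handler propagates to Caddy's error routes, a handle_errors block could attach a friendly body (the matcher name and wording are illustrative):

```Caddyfile
example.com {
  rate_limit {
    zone per_ip {
      key    {http.request.remote.host}
      events 10
      window 1m
    }
  }

  handle_errors {
    # match only rate-limit errors by status code
    @ratelimited expression {http.error.status_code} == 429
    handle @ratelimited {
      respond "Too Many Requests. Please try again later." 429
    }
  }
}
```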

Issue with having multiple zones for a single key

So, I'm trying to use this plugin to limit requests to a single user in the following way: 1 req/sec and 10 req/min.

rate_limit {
  zone header_limiting_min {
    key    {header.authorization}
    events 10
    window 1m
  }

  zone header_limiting_sec {
    key    {header.authorization}
    events 1
    window 1s
  }
}

The issue I'm having is that the plugin seems to count failed requests as well, so I hit the rate limit after 10 requests whether they were successful or not.

The other plugin works without this issue, but it doesn't support the Retry-After header.

Is this a bug, or is there a better way of achieving what I want?

Use `RegisterDirectiveOrder` for `rate_limit` before `basic_auth`

I believe it would be a nice improvement if the Caddy HTTP Rate Limit Module leveraged the new RegisterDirectiveOrder feature from Caddy v2.8 (caddyserver/caddy#5865).

func init() {
	httpcaddyfile.RegisterHandlerDirective("rate_limit", parseCaddyfile)
+	httpcaddyfile.RegisterDirectiveOrder("rate_limit", "before", "basic_auth")
}

I would open a PR, but I am unsure whether upgrading the go.mod requirement from github.com/caddyserver/caddy/v2 v2.7.6 to v2.8.4, along with all the other necessary updates after running go mod tidy, is acceptable to you, or whether it might break something for other users. I simply don't know enough about Go / Caddy module dependencies to assess this.

Possible to add MAX incoming connection from remote?

Hi, I'm using the rate limit and it's fantastic. However, I looked for a way to cap the number of active connections from a client/remote and did not find one.

Is this something you would consider adding to this or any other Caddy project?

How to set Consul storage

I am trying to get the distributed config working.

Currently we have our storage on consul via: https://github.com/pteich/caddy-tlsconsul.

Config looks like this for now:

{
  "handler": "rate_limit",
  "rate_limits": {
    "msft_scanners": {
      "match": [
        {
          "remote_ip": {
            "ranges": [
              "10.10.10.1/24"
            ]
          }
        }
      ], 
      "key": "msft",
      "window": "1m",
      "max_events": 2
    } 
  },
  "distributed": {
    "write_interval": "30s",
    "read_interval": "10s"
  }
}

On start I get an error regarding the instance UUID:

run: loading initial config: loading new config: loading http app module: provision http: server nzm: setting up route handlers: route 0: loading handler modules: position 0: loading module 'rate_limit': provision http.handlers.rate_limit: open /etc/caddyserver/.local/share/caddy/instance.uuid: no such file or directory

Log rate limited IPs

I want to log which IPs are rate limited, with the date and other details. Is that already possible?
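One low-tech sketch, assuming rejected requests still show up in the site's access log with status 429: enable JSON access logs for the site and filter entries on that status; each matching entry carries the client address and a timestamp. Paths and zone values here are illustrative:

```Caddyfile
example.com {
  log {
    output file /var/log/caddy/access.log
    format json
  }

  rate_limit {
    zone per_ip {
      key    {http.request.remote.host}
      events 10
      window 1m
    }
  }
}
```

Filtering the JSON log for `"status":429` then yields the rate-limited requests along with their remote addresses and timestamps.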

Can't build 2.5.0 with caddy-ratelimit due to quic errors

Made a clean container from the image golang:1.18.1-alpine3.15 and added xcaddy 0.3.0 to it. Then I tried to run the following command:

xcaddy build \
         --with github.com/mholt/caddy-ratelimit \
         --output caddy

It errors out with the following:

2022/04/28 04:21:52 [INFO] exec (timeout=0s): /usr/local/go/bin/go build -o /go/caddy -ldflags -w -s -trimpath
# github.com/caddyserver/caddy/v2
/go/pkg/mod/github.com/caddyserver/caddy/[email protected]/listeners.go:187:68: undefined: quic.EarlySession
2022/04/28 04:24:07 [INFO] Cleaning up temporary folder: /tmp/buildenv_2022-04-28-0421.740480886
2022/04/28 04:24:07 [FATAL] exit status 2

The complete output of this command can be found here.

One thing I notice looking through it is that getting this package appears to force an upgrade of github.com/lucas-clemente/quic-go to v0.27.0, and I don't see a downgrade back to v0.26.0 later, as this line suggests should happen. I'm not sure why that would be, but I know from caddyserver/xcaddy#99 that Caddy v2.5.0 is not compatible with quic-go v0.27.0, so that seems to be the problem.

This module is making Caddy Consume around 4.8GB of Memory

Hello, I am opening this issue after having a lot of problems recently with rate-limiting using this module on Caddy.

The main problem that I am having is a spike in memory usage of caddy. To replicate the issue, consider the below:

1. I am using Caddy version 2.7.6-alpine (Docker image), which I build using the Dockerfile below.

FROM caddy:2.7.6-builder-alpine AS builder

RUN xcaddy build --with github.com/mholt/caddy-ratelimit

FROM caddy:2.7.6-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
2. I am using this image in a docker-compose setup:
caddy:
    image: ghcr.io/myrepo/my-caddy-build:latest
    container_name: caddy-reverse-proxy
    restart: unless-stopped
    networks:
      - giveth
    ports:
      - 80:80
      - 443:443
    env_file:
      - ../.env
    environment:
      MY_APP_URL: ${MY_APP_URL:-}
      RESTRICTED_PATHS: ${RESTRICTED_PATHS:-}
      IP_WHITELIST: ${IP_WHITELIST:-}
      WHITELIST_RATE_EVENTS: ${IG_WHITELIST_RATE_EVENTS:-}
      WHITELIST_RATE_INTERVAL: ${IG_WHITELIST_RATE_INTERVAL:-}
      PUBLIC_RATE_EVENTS: ${IG_PUBLIC_RATE_EVENTS:-}
      PUBLIC_RATE_INTERVAL: ${IG_PUBLIC_RATE_INTERVAL:-}
      DOMAIN_WHITELIST: ${DOMAIN_WHITELIST:-}
    volumes:
      - caddy_data:/data
      - caddy_config:/config
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ../logs/caddy:/usr/src/app/
    depends_on:
      - postgres
3. My complete Caddyfile config is below:
# Global Options
{
	order rate_limit before basicauth
	log global {
		output file /usr/src/app/global.log
		format json
		level debug
	}
}

# CORS Config Block Directive
(cors) {
    @cors_preflight {
        method OPTIONS
    }
    @corsOrigin {
        header_regexp Origin ^https?://([a-zA-Z0-9-]+\.)*vercel\.app$|^https?://localhost(:[0-9]+)?$|^https?://({$DOMAIN_WHITELIST})$
    }

    handle @cors_preflight {
        header {
            Access-Control-Allow-Origin "{http.request.header.Origin}"
            Access-Control-Allow-Credentials true
            Access-Control-Allow-Headers "*"
            Access-Control-Allow-Methods "GET, POST, PUT, PATCH, DELETE"
            Access-Control-Max-Age "3600"
            Vary Origin
            defer
        }
        respond "" 204
    }

    handle @corsOrigin {
        header {
            Access-Control-Allow-Origin "{http.request.header.Origin}"
            Access-Control-Allow-Credentials true
            Access-Control-Expose-Headers "*"
            Vary Origin
            defer
        }
    }
}

#--------------------------------------------------------------------------
# My App site block
#--------------------------------------------------------------------------
{$APP_URL} {
    # Call the cors for whitelisted domains
    import cors
    
    # Configure Logging
    log {
        output file /usr/src/app/access.log
        format json
    }
    
    # Identify Config Keys for accesses
	@privateIPAccess remote_ip {$IP_WHITELIST} # Whitelisted IP Addresses
	@publicIPAccess not remote_ip {$IP_WHITELIST} # Unwhitelisted IP Addresses
	@restrictedPaths path {$RESTRICTED_PATHS} # Restricted Paths
	@unRestrictedPaths not path {$RESTRICTED_PATHS} # Unrestricted Paths

    # Handling Restricted Paths Routes
	route @restrictedPaths {
	  respond @publicIPAccess 403
	  reverse_proxy my-app:3000 {
        transport http {
            response_header_timeout 300s
            dial_timeout 300s
        }
      }
	}

    # Handling Unrestricted Paths Route
	route @unRestrictedPaths {
        reverse_proxy my-app:3000 {
            transport http {
                response_header_timeout 300s
                dial_timeout 300s
            }
        }
        rate_limit @privateIPAccess {
            zone myzone {
                key    {remote_host}
                events 300
                window 1m
            }
            sweep_interval 1m
        }
	}

	# Apply Global rate limiting to all public Requests
	rate_limit @publicIPAccess {
        zone myzone {
            key    {remote_host}
            events 100
            window 1m
        }
        sweep_interval 1m
    }

    ## Extra Header Configs
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        X-Frame-Options "DENY"
    }

    ## Request Body Size
    request_body {
        max_size 30MB
    }
}

The main Idea here is that I am using Caddy as a reverse proxy and from that reverse proxy, I'm handling the rest of the app functions, like rate limiting.

Below is the heap when caddy was consuming a lot of memory.
caddy_heap2.txt
I ran a quick scan of this with ChatGPT and received the following:

The analysis of your new heap profile sample reveals significant memory consumption in certain areas, notably around rate limiting and compression. Here's a summary of the key findings:

Rate Limiting
A substantial portion of memory (approximately 4.8 GB across various entries) is being consumed by operations related to rate limiting, specifically within github.com/mholt/caddy-ratelimit. The newRingBufferRateLimiter function appears to be a primary contributor. This suggests that the rate limiting configuration or its implementation within Caddy is causing significant memory allocation. This could be due to a large number of unique clients, high traffic volume leading to many rate limiter instances, or the configuration parameters for the rate limit (size of the ring buffer, rate limits).

Compression (Zstandard)
Another notable area of memory usage is related to compression operations, particularly with Zstandard (github.com/klauspost/compress/zstd). The allocations here are related to the encoder's operations, which are expected when responses are being compressed. If your traffic pattern involves serving a lot of compressible data, this could explain the memory usage. However, the amount of memory dedicated to compression should generally be less concerning unless it's disproportionately high compared to the nature of your traffic.

I think there is a problem, more specifically a memory leak, in this module, but I would appreciate further help investigating this. @mholt

I hope I have provided enough information and that you can support me with this, as I would love for this rate limit module for Caddy to be more stable than it is. As things stand, I cannot run it in my production environment anymore :(

Websocket Support

I have an application that does authentication steps over HTTP to establish a websocket, and then everything else is over the websocket.

I plan on using this rate limiter for auth, but by virtue of a websocket being a direct connection, there's no way for this to work with it, and I'd have to roll my own application-level rate limiting, correct?

Or would Caddy be able to tell if a user's origin is sending too many requests?

I can foresee an architecture with a sidecar posting websocket info back to Caddy asynchronously, but seems like a niche use case.

Let me know if I'm way off base here, thank you!

Build fail with official caddy builder dockerfile

Hi !

Since yesterday's commit, the build based on the official Docker builder image fails. The error is quite explicit:

github.com/mholt/[email protected] requires go >= 1.22.0 (running go 1.21.10; GOTOOLCHAIN=local)

It would be nice to have version tags in the git repository; currently it is not easy to fall back to the previous version.
Here is a dockerfile for complete reproduction :

FROM caddy:2-builder AS builder

RUN xcaddy build \
    --with github.com/greenpau/caddy-security \
    --with github.com/mholt/caddy-ratelimit

FROM caddy:2-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

PS: this is a nice occasion to say that using Caddy and all these plugins is really great, and to thank all the maintainers and developers.

Dynamic zone key for network block of {http.request.remote.host} with certain prefix

When used as the key of a dynamic zone, can {http.request.remote.host} be reduced to its network block for a certain prefix?

Assuming http.request.remote.host is 1.2.3.4 and a function reduces it to its /24, requests (and rate limiting) from any address in the range 1.2.3.0-255 would be grouped together.

"key": "reduce_to_network_block({http.request.remote.host}, '/24')",

Plugin does not honor the header directive

In our Caddyfile we remove the Server header with -Server. However, when the plugin returns a 429 error, it adds the Server header back, not respecting the config. How can we prevent this?

{
    order rate_limit before basicauth
}
:8443 {
    tls /etc/ssl/my.crt /etc/ssl/my.key
    header {
        -Server
        Strict-Transport-Security max-age=31536000;
        X-Content-Type-Options nosniff
        X-Frame-Options DENY
        Referrer-Policy no-referrer-when-downgrade
        X-XSS-Protection "1; mode=block"
    }
    encode gzip
    log {
        output discard
    }
    reverse_proxy http://my-api:8080
    rate_limit {
            distributed
            zone static_example {
                key    static
                events 5
                window 1m
            }
    }
}
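An untested sketch: if the 429 produced by the handler reaches Caddy's error routes, the Server header could be stripped there as well (this assumes handle_errors fires for responses from this plugin):

```Caddyfile
:8443 {
  # ... existing config from above ...

  handle_errors {
    # re-apply the header removal for error responses
    header -Server
    respond "{http.error.status_code} {http.error.status_text}"
  }
}
```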

Documentation: Recommend against multiple rate_limit handlers with identical distributed storage configuration

We ran into an issue when using this handler in a more complex configuration, and I think it might be worth mentioning in the distributed rate limiting documentation, or adding an additional example configuration file. We had previously instantiated multiple rate_limit handlers, each in a separate handler chain for various routes. We also had one top-level storage module definition (using Redis). When I snooped on Redis traffic, I noticed that there were multiple goroutines reading and writing storage at the same time, using the same keys (based on the instance ID). I think this meant that different rate_limit handlers were clobbering each others' state, and reading back state from other instances that may correspond to a different route and rate limit handler.

We fixed this by instantiating the rate_limit handler once as a named route, invoking the named route from each route's handler chain, and using variables to link together which route handled the request and which rate limiting zone was used. Alternatively, the problem could be avoided by using different storage configuration in each rate_limit handler, and setting key_prefix to something different in each, for example.

Configuration for caddy-docker-proxy

Hi,

is it possible to run caddy-ratelimit with caddy-docker-proxy? I have tried the following approach with no success.

I want a global rate limit for all my Docker containers, so my idea was to add it to my Caddyfile, but that did not work either. Has anyone had success using it with caddy-docker-proxy?

Rate limit based on path, but doesn't work

Hi, I want to rate limit based on the host + path, but the config below didn't work.

{
        order rate_limit before basicauth
}

https://example.com {
        rate_limit {
                distributed
                zone dynamic_example {
                        key {remote_host}
                        events 1000
                        window 60s
                }
                zone pair_api {
                       match {
                               path /api/auth/pair
                       }
                       key {remote_host}/api/auth/pair
                       events 1
                       window 10s
                }
        }
}

I followed this instruction

Each zone may optionally filter the requests it applies to by specifying request matchers.

Is anything wrong with this snippet? Thanks in advance.

Possibility to add exceptions/whitelist

Hi there!
First of all: thank you for creating Caddy and this plugin, I love both! Great work!!
Here's my questions:
Is it possible to add an exception for specific IP addresses or subnets to bypass the rate-limit?
Maybe this is already possible with how Caddy works, but I haven't been able to figure it out yet.
For example, I'd love to whitelist my internal network, so my own servers don't run into the ratelimit.
My rate limit configuration currently looks like this:

rate_limit {
        zone my_zone {
                key    {remote_host}
                events 10
                window 20s
        }
}

Thank you very much!
I would appreciate any help!
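Since rate_limit accepts a matcher token (as seen in other configs in this thread), one sketch is to invert a remote_ip matcher so that internal ranges bypass the limiter entirely; the CIDRs below are example placeholders for your own networks:

```Caddyfile
example.com {
  # matches every client NOT in the internal ranges
  @external not remote_ip 192.168.0.0/16 10.0.0.0/8

  # the limiter only runs for external clients
  rate_limit @external {
    zone my_zone {
      key    {remote_host}
      events 10
      window 20s
    }
  }
}
```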

I am sure the rate limit is not working well

I have a Next.js app behind a Caddy server and I am facing DDoS attacks (screenshot: 2023-09-09 122205).
I've set up a rate limit in my Caddyfile.

rate_limit {
    distributed
    zone ip_rate {
        key    {remote_host}
        events 250
        window 300s
    }
    zone ip_rate_min {
        key    {remote_host}
        events 70
        window 100s
    }
}

And in the Next.js app I added an app-level rate limiter:

    const ip = headers["x-forwarded-for"] as string;
    try {
      await limiter.check(70, ip);
    } catch (e) {
      console.log("will block ip", ip);
      appContext.ctx.req?.destroy();
    }

And I still get more than 100 lines in the log.

Stuck mutex causing resource leak

We're running into an intermittent problem where one of our Caddy replicas has runaway memory usage: it grows from a ~50MB baseline to over 1GB, before eventually failing to service liveness probe requests and being killed by our process orchestrator.

It appears to be a resource leak: looking at the output of lsof on an affected replica reveals several thousand open sockets.

Some inspection of the pprof output reveals what looks to be a stuck mutex blocking many others.

Goroutine stack dump:
goroutine profile: total 22880
22375 @ 0x43e44e 0x44f938 0x44f90f 0x46d785 0x48d1dd 0x1779949 0x177992b 0x1776fda 0x177884c 0x121ea4c 0x1225d5a 0x120aa69 0x1225ae9 0x120aa69 0x12281a8 0x1201755 0x120aa69 0x122732e 0x76d12e 0x769014 0x471901
#	0x46d784	sync.runtime_SemacquireMutex+0x24										runtime/sema.go:77
#	0x48d1dc	sync.(*Mutex).lockSlow+0x15c											sync/mutex.go:171
#	0x1779948	sync.(*Mutex).Lock+0x48												sync/mutex.go:90
#	0x177992a	github.com/mholt/caddy-ratelimit.(*ringBufferRateLimiter).MaxEvents+0x2a					github.com/mholt/[email protected]/ringbuffer.go:102
#	0x1776fd9	github.com/mholt/caddy-ratelimit.Handler.distributedRateLimiting+0x79						github.com/mholt/[email protected]/distributed.go:193
#	0x177884b	github.com/mholt/caddy-ratelimit.Handler.ServeHTTP+0x2ab							github.com/mholt/[email protected]/handler.go:191
#	0x121ea4b	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*metricsInstrumentedHandler).ServeHTTP+0x54b			github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/metrics.go:137
#	0x1225d59	github.com/caddyserver/caddy/v2/modules/caddyhttp.wrapMiddleware.func1.1+0x39					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/routes.go:331
#	0x120aa68	github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP+0x28					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/caddyhttp.go:58
#	0x1225ae8	github.com/caddyserver/caddy/v2/modules/caddyhttp.RouteList.Compile.wrapRoute.func1.1+0x2c8			github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/routes.go:300
#	0x120aa68	github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP+0x28					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/caddyhttp.go:58
#	0x12281a7	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*Server).enforcementHandler+0x247				github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/server.go:429
#	0x1201754	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*App).Provision.(*Server).wrapPrimaryRoute.func1+0x34	github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/server.go:405
#	0x120aa68	github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP+0x28					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/caddyhttp.go:58
#	0x122732d	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*Server).ServeHTTP+0xf2d					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/server.go:341
#	0x76d12d	net/http.serverHandler.ServeHTTP+0x8d										net/http/server.go:2938
#	0x769013	net/http.(*conn).serve+0x5f3											net/http/server.go:2009

377 @ 0x43e44e 0x436c97 0x46bce5 0x4e11c7 0x4e24ba 0x4e24a8 0x545825 0x558ea5 0x7632cb 0x56d2e3 0x56d413 0x76917c 0x471901
#	0x46bce4	internal/poll.runtime_pollWait+0x84		runtime/netpoll.go:343
#	0x4e11c6	internal/poll.(*pollDesc).wait+0x26		internal/poll/fd_poll_runtime.go:84
#	0x4e24b9	internal/poll.(*pollDesc).waitRead+0x279	internal/poll/fd_poll_runtime.go:89
#	0x4e24a7	internal/poll.(*FD).Read+0x267			internal/poll/fd_unix.go:164
#	0x545824	net.(*netFD).Read+0x24				net/fd_posix.go:55
#	0x558ea4	net.(*conn).Read+0x44				net/net.go:179
#	0x7632ca	net/http.(*connReader).Read+0x14a		net/http/server.go:791
#	0x56d2e2	bufio.(*Reader).fill+0x102			bufio/bufio.go:113
#	0x56d412	bufio.(*Reader).Peek+0x52			bufio/bufio.go:151
#	0x76917b	net/http.(*conn).serve+0x75b			net/http/server.go:2044

40 @ 0x43e44e 0x436c97 0x46bce5 0x4e11c7 0x4e24ba 0x4e24a8 0x545825 0x558ea5 0x78222a 0x56d2e3 0x56d413 0x783019 0x471901
#	0x46bce4	internal/poll.runtime_pollWait+0x84		runtime/netpoll.go:343
#	0x4e11c6	internal/poll.(*pollDesc).wait+0x26		internal/poll/fd_poll_runtime.go:84
#	0x4e24b9	internal/poll.(*pollDesc).waitRead+0x279	internal/poll/fd_poll_runtime.go:89
#	0x4e24a7	internal/poll.(*FD).Read+0x267			internal/poll/fd_unix.go:164
#	0x545824	net.(*netFD).Read+0x24				net/fd_posix.go:55
#	0x558ea4	net.(*conn).Read+0x44				net/net.go:179
#	0x782229	net/http.(*persistConn).Read+0x49		net/http/transport.go:1954
#	0x56d2e2	bufio.(*Reader).fill+0x102			bufio/bufio.go:113
#	0x56d412	bufio.(*Reader).Peek+0x52			bufio/bufio.go:151
#	0x783018	net/http.(*persistConn).readLoop+0x1b8		net/http/transport.go:2118

40 @ 0x43e44e 0x44e905 0x7849e5 0x471901
#	0x7849e4	net/http.(*persistConn).writeLoop+0xe4	net/http/transport.go:2421

11 @ 0x43e44e 0x436c97 0x46bce5 0x4e11c7 0x4e24ba 0x4e24a8 0x545825 0x558ea5 0x762e97 0x471901
#	0x46bce4	internal/poll.runtime_pollWait+0x84		runtime/netpoll.go:343
#	0x4e11c6	internal/poll.(*pollDesc).wait+0x26		internal/poll/fd_poll_runtime.go:84
#	0x4e24b9	internal/poll.(*pollDesc).waitRead+0x279	internal/poll/fd_poll_runtime.go:89
#	0x4e24a7	internal/poll.(*FD).Read+0x267			internal/poll/fd_unix.go:164
#	0x545824	net.(*netFD).Read+0x24				net/fd_posix.go:55
#	0x558ea4	net.(*conn).Read+0x44				net/net.go:179
#	0x762e96	net/http.(*connReader).backgroundRead+0x36	net/http/server.go:683

10 @ 0x43e44e 0x44e905 0x785a19 0x779d7a 0x149623a 0x14961f3 0x149c044 0x149a27a 0x14991b3 0x121ea4c 0x1225d5a 0x120aa69 0x17788e3 0x121ea4c 0x1225d5a 0x120aa69 0x1225ae9 0x120aa69 0x12281a8 0x1201755 0x120aa69 0x122732e 0x76d12e 0x769014 0x471901
#	0x785a18	net/http.(*persistConn).roundTrip+0x978										net/http/transport.go:2652
#	0x779d79	net/http.(*Transport).roundTrip+0x799										net/http/transport.go:604
#	0x1496239	net/http.(*Transport).RoundTrip+0x139										net/http/roundtrip.go:17
#	0x14961f2	github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy.(*HTTPTransport).RoundTrip+0xf2			github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/reverseproxy/httptransport.go:388
#	0x149c043	github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy.(*Handler).reverseProxy+0x5e3			github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/reverseproxy/reverseproxy.go:788
#	0x149a279	github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy.(*Handler).proxyLoopIteration+0xe39		github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/reverseproxy/reverseproxy.go:536
#	0x14991b2	github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy.(*Handler).ServeHTTP+0x3d2			github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/reverseproxy/reverseproxy.go:443
#	0x121ea4b	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*metricsInstrumentedHandler).ServeHTTP+0x54b			github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/metrics.go:137
#	0x1225d59	github.com/caddyserver/caddy/v2/modules/caddyhttp.wrapMiddleware.func1.1+0x39					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/routes.go:331
#	0x120aa68	github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP+0x28					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/caddyhttp.go:58
#	0x17788e2	github.com/mholt/caddy-ratelimit.Handler.ServeHTTP+0x342							github.com/mholt/[email protected]/handler.go:197
#	0x121ea4b	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*metricsInstrumentedHandler).ServeHTTP+0x54b			github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/metrics.go:137
#	0x1225d59	github.com/caddyserver/caddy/v2/modules/caddyhttp.wrapMiddleware.func1.1+0x39					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/routes.go:331
#	0x120aa68	github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP+0x28					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/caddyhttp.go:58
#	0x1225ae8	github.com/caddyserver/caddy/v2/modules/caddyhttp.RouteList.Compile.wrapRoute.func1.1+0x2c8			github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/routes.go:300
#	0x120aa68	github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP+0x28					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/caddyhttp.go:58
#	0x12281a7	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*Server).enforcementHandler+0x247				github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/server.go:429
#	0x1201754	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*App).Provision.(*Server).wrapPrimaryRoute.func1+0x34	github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/server.go:405
#	0x120aa68	github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP+0x28					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/caddyhttp.go:58
#	0x122732d	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*Server).ServeHTTP+0xf2d					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/server.go:341
#	0x76d12d	net/http.serverHandler.ServeHTTP+0x8d										net/http/server.go:2938
#	0x769013	net/http.(*conn).serve+0x5f3											net/http/server.go:2009

5 @ 0x43e44e 0x44e905 0x149780a 0x471901
#	0x1497809	github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy.(*metricsUpstreamsHealthyUpdater).Init.func1+0xc9	github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/reverseproxy/metrics.go:61

4 @ 0x43e44e 0x44f938 0x44f90f 0x46d785 0x48d1dd 0x1779eb6 0x1779e87 0x17760ef 0x48ccc8 0x1776028 0x1775edd 0xa8d623 0x1775dd0 0x1775928 0x471901
#	0x46d784	sync.runtime_SemacquireMutex+0x24						runtime/sema.go:77
#	0x48d1dc	sync.(*Mutex).lockSlow+0x15c							sync/mutex.go:171
#	0x1779eb5	sync.(*Mutex).Lock+0x75								sync/mutex.go:90
#	0x1779e86	github.com/mholt/caddy-ratelimit.(*ringBufferRateLimiter).Count+0x46		github.com/mholt/[email protected]/ringbuffer.go:178
#	0x17760ee	github.com/mholt/caddy-ratelimit.rlStateForZone.func1+0x6e			github.com/mholt/[email protected]/distributed.go:113
#	0x48ccc7	sync.(*Map).Range+0x227								sync/map.go:476
#	0x1776027	github.com/mholt/caddy-ratelimit.rlStateForZone+0x87				github.com/mholt/[email protected]/distributed.go:107
#	0x1775edc	github.com/mholt/caddy-ratelimit.Handler.syncDistributedWrite.func1+0x7c	github.com/mholt/[email protected]/distributed.go:95
#	0xa8d622	github.com/caddyserver/caddy/v2.(*UsagePool).Range+0x1a2			github.com/caddyserver/caddy/[email protected]/usagepool.go:158
#	0x1775dcf	github.com/mholt/caddy-ratelimit.Handler.syncDistributedWrite+0xaf		github.com/mholt/[email protected]/distributed.go:91
#	0x1775927	github.com/mholt/caddy-ratelimit.Handler.syncDistributed+0x1e7			github.com/mholt/[email protected]/distributed.go:72

4 @ 0x43e44e 0x44f938 0x44f90f 0x46d785 0x48d1dd 0x177a2df 0x177a2bc 0x48ccc8 0x177a209 0xa8d623 0x1778f6c 0x471901
#	0x46d784	sync.runtime_SemacquireMutex+0x24					runtime/sema.go:77
#	0x48d1dc	sync.(*Mutex).lockSlow+0x15c						sync/mutex.go:171
#	0x177a2de	sync.(*Mutex).Lock+0x7e							sync/mutex.go:90
#	0x177a2bb	github.com/mholt/caddy-ratelimit.Handler.sweepRateLimiters.func1.1+0x5b	github.com/mholt/[email protected]/handler.go:250
#	0x48ccc7	sync.(*Map).Range+0x227							sync/map.go:476
#	0x177a208	github.com/mholt/caddy-ratelimit.Handler.sweepRateLimiters.func1+0x48	github.com/mholt/[email protected]/handler.go:244
#	0xa8d622	github.com/caddyserver/caddy/v2.(*UsagePool).Range+0x1a2		github.com/caddyserver/caddy/[email protected]/usagepool.go:158
#	0x1778f6b	github.com/mholt/caddy-ratelimit.Handler.sweepRateLimiters+0x8b		github.com/mholt/[email protected]/handler.go:240

2 @ 0x43e44e 0x436c97 0x46bce5 0x4e11c7 0x4e66ac 0x4e669a 0x547849 0x56183e 0x5609f0 0x76d584 0x471901
#	0x46bce4	internal/poll.runtime_pollWait+0x84		runtime/netpoll.go:343
#	0x4e11c6	internal/poll.(*pollDesc).wait+0x26		internal/poll/fd_poll_runtime.go:84
#	0x4e66ab	internal/poll.(*pollDesc).waitRead+0x2ab	internal/poll/fd_poll_runtime.go:89
#	0x4e6699	internal/poll.(*FD).Accept+0x299		internal/poll/fd_unix.go:611
#	0x547848	net.(*netFD).accept+0x28			net/fd_unix.go:172
#	0x56183d	net.(*TCPListener).accept+0x1d			net/tcpsock_posix.go:152
#	0x5609ef	net.(*TCPListener).Accept+0x2f			net/tcpsock.go:315
#	0x76d583	net/http.(*Server).Serve+0x363			net/http/server.go:3056

1 @ 0x40f0e9 0x46de29 0xa6e2b3 0x471901
#	0x46de28	os/signal.signal_recv+0x28	runtime/sigqueue.go:152
#	0xa6e2b2	os/signal.loop+0x12		os/signal/signal_unix.go:23

1 @ 0x4332d1 0x46b8bd 0x7d2fb1 0x7d2de5 0x7cf886 0x7e0fc8 0x7e1ac5 0x76a449 0xa6f26f 0x76a449 0x76bd62 0xa73a3f 0xa7345e 0x76d12e 0x769014 0x471901
#	0x46b8bc	runtime/pprof.runtime_goroutineProfileWithLabels+0x1c								runtime/mprof.go:844
#	0x7d2fb0	runtime/pprof.writeRuntimeProfile+0xb0										runtime/pprof/pprof.go:734
#	0x7d2de4	runtime/pprof.writeGoroutine+0x44										runtime/pprof/pprof.go:694
#	0x7cf885	runtime/pprof.(*Profile).WriteTo+0x145										runtime/pprof/pprof.go:329
#	0x7e0fc7	net/http/pprof.handler.ServeHTTP+0x4a7										net/http/pprof/pprof.go:267
#	0x7e1ac4	net/http/pprof.Index+0xe4											net/http/pprof/pprof.go:384
#	0x76a448	net/http.HandlerFunc.ServeHTTP+0x28										net/http/server.go:2136
#	0xa6f26e	github.com/caddyserver/caddy/v2.(*AdminConfig).newAdminHandler.func1.instrumentHandlerCounter.func1+0x6e	github.com/caddyserver/caddy/[email protected]/metrics.go:47
#	0x76a448	net/http.HandlerFunc.ServeHTTP+0x28										net/http/server.go:2136
#	0x76bd61	net/http.(*ServeMux).ServeHTTP+0x141										net/http/server.go:2514
#	0xa73a3e	github.com/caddyserver/caddy/v2.adminHandler.serveHTTP+0x55e							github.com/caddyserver/caddy/[email protected]/admin.go:837
#	0xa7345d	github.com/caddyserver/caddy/v2.adminHandler.ServeHTTP+0x7dd							github.com/caddyserver/caddy/[email protected]/admin.go:789
#	0x76d12d	net/http.serverHandler.ServeHTTP+0x8d										net/http/server.go:2938
#	0x769013	net/http.(*conn).serve+0x5f3											net/http/server.go:2009

1 @ 0x43e44e 0x4099ad 0x4095b2 0xa8e218 0x471901
#	0xa8e217	github.com/caddyserver/caddy/v2.trapSignalsCrossPlatform.func1+0xd7	github.com/caddyserver/caddy/[email protected]/sigtrap.go:43

1 @ 0x43e44e 0x4099ad 0x4095d2 0xa8dada 0x471901
#	0xa8dad9	github.com/caddyserver/caddy/v2.trapSignalsPosix.func1+0xf9	github.com/caddyserver/caddy/[email protected]/sigtrap_posix.go:35

1 @ 0x43e44e 0x4099ad 0x4095d2 0xb0adb0 0x471901
#	0xb0adaf	github.com/caddyserver/caddy/v2/cmd.watchConfigFile+0x2af	github.com/caddyserver/caddy/[email protected]/cmd/main.go:220

1 @ 0x43e44e 0x436c97 0x46bce5 0x4e11c7 0x4e66ac 0x4e669a 0x547849 0x56183e 0x5609f0 0x76d584 0xa70828 0x471901
#	0x46bce4	internal/poll.runtime_pollWait+0x84					runtime/netpoll.go:343
#	0x4e11c6	internal/poll.(*pollDesc).wait+0x26					internal/poll/fd_poll_runtime.go:84
#	0x4e66ab	internal/poll.(*pollDesc).waitRead+0x2ab				internal/poll/fd_poll_runtime.go:89
#	0x4e6699	internal/poll.(*FD).Accept+0x299					internal/poll/fd_unix.go:611
#	0x547848	net.(*netFD).accept+0x28						net/fd_unix.go:172
#	0x56183d	net.(*TCPListener).accept+0x1d						net/tcpsock_posix.go:152
#	0x5609ef	net.(*TCPListener).Accept+0x2f						net/tcpsock.go:315
#	0x76d583	net/http.(*Server).Serve+0x363						net/http/server.go:3056
#	0xa70827	github.com/caddyserver/caddy/v2.replaceLocalAdminServer.func2+0xc7	github.com/caddyserver/caddy/[email protected]/admin.go:449

1 @ 0x43e44e 0x44e1c6 0xb0517d 0xb1178f 0x5c283c 0x5c3065 0xb099fb 0xb099f0 0x177a6cf 0x43dfdb 0x471901
#	0xb0517c	github.com/caddyserver/caddy/v2/cmd.cmdRun+0xc1c					github.com/caddyserver/caddy/[email protected]/cmd/commandfuncs.go:283
#	0xb1178e	github.com/caddyserver/caddy/v2/cmd.init.1.func2.WrapCommandFuncForCobra.func1+0x2e	github.com/caddyserver/caddy/[email protected]/cmd/cobra.go:137
#	0x5c283b	github.com/spf13/cobra.(*Command).execute+0x87b						github.com/spf13/[email protected]/command.go:940
#	0x5c3064	github.com/spf13/cobra.(*Command).ExecuteC+0x3a4					github.com/spf13/[email protected]/command.go:1068
#	0xb099fa	github.com/spf13/cobra.(*Command).Execute+0x5a						github.com/spf13/[email protected]/command.go:992
#	0xb099ef	github.com/caddyserver/caddy/v2/cmd.Main+0x4f						github.com/caddyserver/caddy/[email protected]/cmd/main.go:66
#	0x177a6ce	main.main+0xe										caddy/main.go:14
#	0x43dfda	runtime.main+0x2ba									runtime/proc.go:267

1 @ 0x43e44e 0x44e905 0x1051693 0x471901
#	0x1051692	github.com/caddyserver/caddy/v2/modules/caddytls.(*TLS).keepStorageClean.func1+0x92	github.com/caddyserver/caddy/[email protected]/modules/caddytls/tls.go:540

1 @ 0x43e44e 0x44e905 0x8e4425 0x471901
#	0x8e4424	github.com/caddyserver/certmagic.(*Cache).maintainAssets+0x304	github.com/caddyserver/[email protected]/maintain.go:69

1 @ 0x43e44e 0x44e905 0x8eeee6 0x8ee8ab 0x471901
#	0x8eeee5	github.com/caddyserver/certmagic.(*RingBufferRateLimiter).permit+0x85	github.com/caddyserver/[email protected]/ratelimiter.go:217
#	0x8ee8aa	github.com/caddyserver/certmagic.(*RingBufferRateLimiter).loop+0x8a	github.com/caddyserver/[email protected]/ratelimiter.go:89

1 @ 0x43e44e 0x44e905 0xda5ad9 0x471901
#	0xda5ad8	github.com/golang/glog.(*fileSink).flushDaemon+0xb8	github.com/golang/[email protected]/glog_file.go:351

1 @ 0x43e44e 0x44f938 0x44f90f 0x46d785 0x48d1dd 0x177940c 0x17793ec 0x17787d4 0x121ea4c 0x1225d5a 0x120aa69 0x1225ae9 0x120aa69 0x12281a8 0x1201755 0x120aa69 0x122732e 0x76d12e 0x769014 0x471901
#	0x46d784	sync.runtime_SemacquireMutex+0x24										runtime/sema.go:77
#	0x48d1dc	sync.(*Mutex).lockSlow+0x15c											sync/mutex.go:171
#	0x177940b	sync.(*Mutex).Lock+0x4b												sync/mutex.go:90
#	0x17793eb	github.com/mholt/caddy-ratelimit.(*ringBufferRateLimiter).initialize+0x2b					github.com/mholt/[email protected]/ringbuffer.go:38
#	0x17787d3	github.com/mholt/caddy-ratelimit.Handler.ServeHTTP+0x233							github.com/mholt/[email protected]/handler.go:181
#	0x121ea4b	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*metricsInstrumentedHandler).ServeHTTP+0x54b			github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/metrics.go:137
#	0x1225d59	github.com/caddyserver/caddy/v2/modules/caddyhttp.wrapMiddleware.func1.1+0x39					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/routes.go:331
#	0x120aa68	github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP+0x28					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/caddyhttp.go:58
#	0x1225ae8	github.com/caddyserver/caddy/v2/modules/caddyhttp.RouteList.Compile.wrapRoute.func1.1+0x2c8			github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/routes.go:300
#	0x120aa68	github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP+0x28					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/caddyhttp.go:58
#	0x12281a7	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*Server).enforcementHandler+0x247				github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/server.go:429
#	0x1201754	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*App).Provision.(*Server).wrapPrimaryRoute.func1+0x34	github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/server.go:405
#	0x120aa68	github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP+0x28					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/caddyhttp.go:58
#	0x122732d	github.com/caddyserver/caddy/v2/modules/caddyhttp.(*Server).ServeHTTP+0xf2d					github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/server.go:341
#	0x76d12d	net/http.serverHandler.ServeHTTP+0x8d										net/http/server.go:2938
#	0x769013	net/http.(*conn).serve+0x5f3											net/http/server.go:2009

This manifests suddenly (seemingly at random) in the course of normal operation (screenshot omitted).

I don't see any pertinent log messages related to this, only normal access logs and a warning when the process is killed.

Caddy is built like so:

FROM caddy:2.7.6-builder-alpine AS builder

RUN xcaddy build \
    # https://github.com/caddyserver/caddy/pull/5979
    --with github.com/caddyserver/caddy/v2=github.com/divviup/caddy/[email protected] \
    --with github.com/pberkel/[email protected] \
    # https://github.com/mholt/caddy-ratelimit/pull/34
    --with github.com/mholt/caddy-ratelimit=github.com/divviup/caddy-ratelimit@93aba685422d0e3efcb10e15ac9c6c3d1766db8b

FROM caddy:2.7.6-alpine

ARG GIT_REVISION=unknown
LABEL revision ${GIT_REVISION}
COPY --from=builder /usr/bin/caddy /usr/bin/caddy

(Apologies, we are using forks to get some additional functionality/fixes. I didn't notice any pertinent changes in upstream that would affect this issue).

The OS is:

/srv # cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.18.5
PRETTY_NAME="Alpine Linux v3.18"
...
Our Caddy config:
{
  "admin": {
    "config": {
      "persist": false
    }
  },
  "apps": {
    "http": {
      "servers": {
        "metrics": {
          "listen": [
            ":9465"
          ],
          "metrics": {},
          "routes": [
            {
              "handle": [
                {
                  "handler": "metrics"
                }
              ],
              "match": [
                {
                  "path": [
                    "/metrics"
                  ]
                }
              ]
            }
          ]
        },
        "rate_limiter": {
          "listen": [
            ":80"
          ],
          "logs": {},
          "metrics": {},
          "routes": [
            {
              "handle": [
                {
                  "distributed": {
                    "read_interval": "5s",
                    "write_interval": "5s"
                  },
                  "handler": "rate_limit",
                  "rate_limits": {
                    "report-upload": {
                      "key": "{http.regexp.report-upload.1}",
                      "max_events": 15000,
                      "window": "15s"
                    }
                  }
                },
                {
                  "handler": "reverse_proxy",
                  "upstreams": [
                    {
                      "dial": ":8080"
                    }
                  ]
                }
              ],
              "match": [
                {
                  "method": [
                    "PUT"
                  ],
                  "path_regexp": {
                    "name": "report-upload",
                    "pattern": "/tasks/([0-9A-Za-z_-]{43})/reports"
                  }
                }
              ]
            },
            {
              "handle": [
                {
                  "distributed": {
                    "read_interval": "5s",
                    "write_interval": "5s"
                  },
                  "handler": "rate_limit",
                  "rate_limits": {
                    "aggregation-job": {
                      "key": "{http.regexp.aggregation-job.1}",
                      "max_events": 1500,
                      "window": "15s"
                    }
                  }
                },
                {
                  "handler": "reverse_proxy",
                  "upstreams": [
                    {
                      "dial": ":8080"
                    }
                  ]
                }
              ],
              "match": [
                {
                  "method": [
                    "PUT",
                    "POST"
                  ],
                  "path_regexp": {
                    "name": "aggregation-job",
                    "pattern": "/tasks/([0-9A-Za-z_-]{43})/aggregation_jobs/[0-9A-Za-z_-]{22}"
                  }
                }
              ]
            },
            {
              "handle": [
                {
                  "distributed": {
                    "read_interval": "5s",
                    "write_interval": "5s"
                  },
                  "handler": "rate_limit",
                  "rate_limits": {
                    "collection-job": {
                      "key": "{http.regexp.collection-job.1}",
                      "max_events": 1500,
                      "window": "15s"
                    }
                  }
                },
                {
                  "handler": "reverse_proxy",
                  "upstreams": [
                    {
                      "dial": ":8080"
                    }
                  ]
                }
              ],
              "match": [
                {
                  "method": [
                    "PUT",
                    "POST",
                    "DELETE"
                  ],
                  "path_regexp": {
                    "name": "collection-job",
                    "pattern": "/tasks/([0-9A-Za-z_-]{43})/collection_jobs/[0-9A-Za-z_-]{22}"
                  }
                }
              ]
            },
            {
              "handle": [
                {
                  "distributed": {
                    "read_interval": "5s",
                    "write_interval": "5s"
                  },
                  "handler": "rate_limit",
                  "rate_limits": {
                    "aggregate-share": {
                      "key": "{http.regexp.aggregate-share.1}",
                      "max_events": 1500,
                      "window": "15s"
                    }
                  }
                },
                {
                  "handler": "reverse_proxy",
                  "upstreams": [
                    {
                      "dial": ":8080"
                    }
                  ]
                }
              ],
              "match": [
                {
                  "method": [
                    "POST"
                  ],
                  "path_regexp": {
                    "name": "aggregate-share",
                    "pattern": "/tasks/([0-9A-Za-z_-]{43})/aggregate_shares"
                  }
                }
              ]
            },
            {
              "handle": [
                {
                  "handler": "vars",
                  "skip_log": true
                },
                {
                  "handler": "static_response",
                  "status_code": 200
                }
              ],
              "match": [
                {
                  "method": [
                    "GET"
                  ],
                  "path": [
                    "/healthz"
                  ]
                }
              ]
            },
            {
              "handle": [
                {
                  "handler": "reverse_proxy",
                  "upstreams": [
                    {
                      "dial": ":8080"
                    }
                  ]
                }
              ],
              "match": [
                {
                  "path": [
                    "/*"
                  ]
                }
              ]
            }
          ]
        }
      }
    }
  },
  "logging": {
    "logs": {
      "default": {
        "encoder": {
          "format": "json",
          "level_format": "upper",
          "time_format": "rfc3339_nano",
          "time_key": "timestamp"
        },
        "include": [
          "http.handlers.rate_limit",
          "http.log.access"
        ],
        "level": "info",
        "sampling": {
          "first": 10,
          "thereafter": 100
        },
        "writer": {
          "output": "stdout"
        }
      }
    }
  },
  "storage": {
    "host": [
      "[redacted]"
    ],
    "key_prefix": "[redacted]",
    "module": "redis",
    "port": [
      "6378"
    ],
    "tls_enabled": true,
    "tls_insecure": false,
    "tls_server_certs_pem": "[redacted]"
  }
}

Waiting for first release

Hi,

When do you plan to release the first version of caddy-ratelimit? Or do you not plan to introduce releases at all?

Regards

It really just doesn't work with Caddy 2.6.4

To replicate the issue, I tried the below:

Building my Caddy docker image with the below config:

Dockerfile:

FROM caddy:2.6.4-alpine AS builder

RUN apk add --no-cache git go && \
    GO111MODULE=on go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest && \
    /root/go/bin/xcaddy build \
        --with github.com/mholt/caddy-ratelimit

FROM caddy:2.6.4-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
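For reference, the official caddy builder image variants already ship with xcaddy preinstalled, so the documented multi-stage pattern is shorter (a sketch; pin versions as needed):

```dockerfile
FROM caddy:2.6.4-builder-alpine AS builder

RUN xcaddy build \
    --with github.com/mholt/caddy-ratelimit

FROM caddy:2.6.4-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```

Building with the builder image also makes it easy to verify the plugin landed: `caddy list-modules` in the final image should show `http.handlers.rate_limit`.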

Using this Caddyfile config:

{
	order rate_limit before basicauth
}
http://example.com {
    root * /usr/share/caddy
    rate_limit {
        zone mydynamiczone {
            key    {remote_host}
            events 10
            window 1m
        }
        distributed {
            write_interval 5s
            read_interval 5s
        }
        sweep_interval 1m
    }
}

Error: Error during parsing: unrecognized directive: rate_limit

Am I missing anything? Why is rate_limit still not recognized as a directive? Is the latest Caddy image, 2.6.4, supported?

Distributed Retry-After

I almost filed a bug report because, after trying a few storage modules with distributed rate limits, the Retry-After header was always set to 0. I dug into the code and saw that this is actually a static value.

Any ideas on how we can finish implementing Retry-After for distributed zones? I understand this probably isn't a high-priority feature for you; however, I'd be willing to work on it if there's some form of calculation you have in mind for the value.
