
An HTTPS Proxy for Docker providing centralized configuration and caching of any registry (quay.io, DockerHub, k8s.gcr.io)

License: Apache License 2.0


docker-registry-proxy's Introduction


TL;DR

A caching proxy for Docker; allows centralised management of (multiple) registries and their authentication; caches images from any registry. Caches the potentially huge blob/layer requests (for bandwidth/time savings), and optionally caches manifest requests ("pulls") to avoid rate-limiting.

NEW: avoiding DockerHub Pull Rate Limits with Caching

Starting November 2nd, 2020, DockerHub will supposedly start rate-limiting pulls, also known as the Docker Apocalypse. The main symptom is Error response from daemon: toomanyrequests: Too Many Requests. Please see https://docs.docker.com/docker-hub/download-rate-limit/ during pulls. Many unsuspecting Kubernetes clusters will hit the limit, and their operators will struggle to configure imagePullSecrets and imagePullPolicy.

Since version 0.6.0, this proxy can be configured with the env var ENABLE_MANIFEST_CACHE=true, which provides configurable caching of the manifest requests that DockerHub throttles. You can then fine-tune other parameters to your needs. Together with the possibility to centrally inject authentication (since 0.3.x), this is probably one of the best ways to bring relief to your distressed cluster, while at the same time saving lots of bandwidth and time.

Note: enabling manifest caching, in its default config, effectively makes some tags immutable. Use with care. The configuration ENVs are explained in the Dockerfile, relevant parts included below.

# Manifest caching tiers. Disabled by default, to mimic 0.4/0.5 behaviour.
# Setting it to true enables the processing of the ENVs below.
# Once enabled, it is valid for all registries, not only DockerHub.
# The envs *_REGEX represent a regex fragment, check entrypoint.sh to understand how they're used (nginx ~ location, PCRE syntax).
ENV ENABLE_MANIFEST_CACHE="false"

# 'Primary' tier defaults to 10m cache for frequently used/abused tags.
# - People publishing to production via :latest (argh) will want to include that in the regex
# - Heavy pullers who are being ratelimited but don't mind getting outdated manifests should (also) increase the cache time here
ENV MANIFEST_CACHE_PRIMARY_REGEX="(stable|nightly|production|test)"
ENV MANIFEST_CACHE_PRIMARY_TIME="10m"

# The 'Secondary' tier matches any tag that has 3 digits or dots, in the hopes of matching most explicitly-versioned tags.
# It caches for 60d, which is also the cache time for the large binary blobs to which the manifests refer.
# That makes them effectively immutable. Make sure you're not affected; tighten this regex or widen the primary tier.
ENV MANIFEST_CACHE_SECONDARY_REGEX="(.*)(\d|\.)+(.*)(\d|\.)+(.*)(\d|\.)+"
ENV MANIFEST_CACHE_SECONDARY_TIME="60d"

# The default cache duration for manifests that don't match either the primary or secondary tiers above.
# In the default config, :latest and other frequently-used tags will get this value.
ENV MANIFEST_CACHE_DEFAULT_TIME="1h"
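
For instance, a hedged sketch of tuning these tiers at runtime; the values below are illustrative only, and the ENVs are exactly the ones defined above:

# Values are illustrative only -- tune the regexes and times to your needs.
docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 \
       -e ENABLE_MANIFEST_CACHE=true \
       -e MANIFEST_CACHE_PRIMARY_REGEX="(stable|nightly|production|test|latest)" \
       -e MANIFEST_CACHE_PRIMARY_TIME="30m" \
       -e MANIFEST_CACHE_DEFAULT_TIME="3h" \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       rpardini/docker-registry-proxy:0.6.2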

What?

Essentially, it's a man-in-the-middle: an intercepting proxy based on nginx, to which all Docker traffic is directed using the HTTPS_PROXY mechanism and injected CA root certificates.

The main feature is Docker layer/image caching, including layers served from S3, Google Storage, etc.

As a bonus it allows for centralized management of Docker registry credentials, which can in itself be the main feature, e.g. in Kubernetes environments.

You configure the Docker clients (err... Kubernetes Nodes?) once, and then all configuration is done on the proxy -- for this to work it requires inserting a root CA certificate into system trusted root certs.

master/:latest is unstable/beta

  • The :latest and :latest-debug Docker tags are unstable, built from master, and amd64-only
  • Production/stable is 0.6.2, see the 0.6.2 tag on GitHub - this image is multi-arch amd64/arm64
  • The previous version is 0.5.0, without any manifest caching; see the 0.5.0 tag on GitHub - this image is multi-arch amd64/arm64

Also hosted on GitHub Container Registry (ghcr.io)

  • DockerHub image is at rpardini/docker-registry-proxy:<version>
  • GitHub image is at ghcr.io/rpardini/docker-registry-proxy:<version>
  • Since 0.5.x, they both carry the same images
  • This can be useful if you're already hitting DockerHub's rate limits and can't pull the proxy from DockerHub
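
For example, pulling the proxy image itself from GHCR instead of DockerHub (the tag shown is the current stable release mentioned above):

docker pull ghcr.io/rpardini/docker-registry-proxy:0.6.2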

Usage (running the Proxy server)

  • Run the proxy on a host close (network-wise: high bandwidth, same VPC, etc.) to the Docker clients
  • Expose port 3128 to the network
  • Map volume /docker_mirror_cache for up to CACHE_MAX_SIZE (32gb by default) of cached images across all cached registries
  • Map volume /ca; the proxy will store the CA certificate here across restarts. Important: this is security-sensitive.
  • Env ALLOW_PUSH: this bypasses the proxy when pushing. Defaults to false; if left false, pushing will not work. For more info see this commit.
  • Env CACHE_MAX_SIZE (default 32g): the max size to be used for caching local Docker image layers. Use nginx sizes.
  • Env ENABLE_MANIFEST_CACHE: see the section on pull rate limiting.
  • Env REGISTRIES: space-separated list of registries to cache; no need to include DockerHub, it's already done internally.
  • Env AUTH_REGISTRIES: space-separated list of hostname:username:password authentication info.
    • hostnames listed here should be listed in the REGISTRIES environment as well, so they can be intercepted.
  • Env AUTH_REGISTRIES_DELIMITER to change the separator between authentication entries. By default, a space: " ". If you use keys that contain spaces (as with Google Cloud Registry), you should update this variable, e.g. setting it to AUTH_REGISTRIES_DELIMITER=";;;". In that case, AUTH_REGISTRIES could contain something like registry1.com:user1:pass1;;;registry2.com:user2:pass2.
  • Env AUTH_REGISTRY_DELIMITER to change the separator between the parts of one authentication entry. By default, a colon: ":". If you use keys that contain single colons, you should update this variable, e.g. setting it to AUTH_REGISTRY_DELIMITER=":::". In that case, AUTH_REGISTRIES could contain something like registry1.com:::user1:::pass1 registry2.com:::user2:::pass2.
  • Env PROXY_REQUEST_BUFFERING: if push is allowed, buffering requests can cause issues on slow upstreams. If you have trouble pushing, set this to false first, then fix remaining timeouts. Defaults to true (ENV PROXY_REQUEST_BUFFERING="true") so as not to change existing behavior.
  • Timeout ENVs: all of them can be specified to control different timeouts; if not set, the defaults will be the ones from the Dockerfile. The directives will be added into the nginx http block; a hedged sketch follows below.
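
A minimal sketch, assuming the timeout ENV names mirror the nginx directives they configure (e.g. PROXY_CONNECT_TIMEOUT, PROXY_READ_TIMEOUT, PROXY_SEND_TIMEOUT); verify the full list against the Dockerfile of the version you run:

# ENV names assumed from the Dockerfile's timeout section; values illustrative.
docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
       -e PROXY_CONNECT_TIMEOUT="30s" \
       -e PROXY_READ_TIMEOUT="180s" \
       -e PROXY_SEND_TIMEOUT="180s" \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       rpardini/docker-registry-proxy:0.6.2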

Simple (no auth, all cache)

docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       rpardini/docker-registry-proxy:0.6.2

DockerHub auth

For Docker Hub authentication:

  • hostname should be auth.docker.io
  • username should NOT be an email, use the regular username
docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       -e REGISTRIES="k8s.gcr.io gcr.io quay.io your.own.registry another.public.registry" \
       -e AUTH_REGISTRIES="auth.docker.io:dockerhub_username:dockerhub_password your.own.registry:username:password" \
       rpardini/docker-registry-proxy:0.6.2

Simple registries auth (HTTP Basic auth)

For regular registry auth (HTTP Basic), the hostname should be the registry itself... unless your registry uses a different auth server.

See the example above for DockerHub, adapt the your.own.registry parts (in both ENVs).

This should work for quay.io also, but I have no way to test.
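
A sketch adapted from the DockerHub example above; your.own.registry, username and password are placeholders for your own values:

docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       -e REGISTRIES="your.own.registry" \
       -e AUTH_REGISTRIES="your.own.registry:username:password" \
       rpardini/docker-registry-proxy:0.6.2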

GitLab auth

GitLab may use a different/separate domain to handle the authentication procedure.

Just like DockerHub uses auth.docker.io, GitLab uses its primary (git) domain for the authentication.

If you run GitLab on git.example.com and its registry on reg.example.com, you need to include both in REGISTRIES and use the primary domain for AUTH_REGISTRIES.

For GitLab.com itself the authentication domain should be gitlab.com.

docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       -e REGISTRIES="reg.example.com git.example.com" \
       -e AUTH_REGISTRIES="git.example.com:USER:PASSWORD" \
       rpardini/docker-registry-proxy:0.6.2

Google Container Registry (GCR) auth

For Google Container Registry (GCR), username should be _json_key and the password should be the contents of the service account JSON. Check out GCR docs.

The service account key is in JSON format; it contains spaces (" ") and colons (":").

To be able to use GCR you should set AUTH_REGISTRIES_DELIMITER to something different than space (e.g. AUTH_REGISTRIES_DELIMITER=";;;") and AUTH_REGISTRY_DELIMITER to something different than a single colon (e.g. AUTH_REGISTRY_DELIMITER=":::").

Example with GCR using credentials from a service account from a key file servicekey.json:

docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       -e REGISTRIES="k8s.gcr.io gcr.io quay.io your.own.registry another.public.registry" \
       -e AUTH_REGISTRIES_DELIMITER=";;;" \
       -e AUTH_REGISTRY_DELIMITER=":::" \
       -e AUTH_REGISTRIES="gcr.io:::_json_key:::$(cat servicekey.json);;;auth.docker.io:::dockerhub_username:::dockerhub_password" \
       rpardini/docker-registry-proxy:0.6.2

Kind Cluster

Kind is a tool for running local Kubernetes clusters using Docker container “nodes”.

Because cluster nodes are Docker containers, docker-registry-proxy needs to be in the same docker network.

Example joining the kind docker network, using docker-registry-proxy as the hostname:

docker run --rm --name docker_registry_proxy -it \
       --net kind --hostname docker-registry-proxy \
       -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       rpardini/docker-registry-proxy:0.6.2

Now deploy your Kind cluster and then automatically configure the nodes with the following script:

#!/bin/sh
KIND_NAME=${1-kind}
SETUP_URL=http://docker-registry-proxy:3128/setup/systemd
pids=""
for NODE in $(kind get nodes --name "$KIND_NAME"); do
  docker exec "$NODE" sh -c "\
      curl $SETUP_URL \
      | sed s/docker\.service/containerd\.service/g \
      | sed '/Environment/ s/$/ \"NO_PROXY=127.0.0.0\/8,10.0.0.0\/8,172.16.0.0\/12,192.168.0.0\/16\"/' \
      | bash" & pids="$pids $!" # Configure every node in background
done
wait $pids # Wait for all configurations to end
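
Saved as, say, setup-kind-nodes.sh (a hypothetical filename), the script takes the cluster name as its first argument and defaults to kind:

sh setup-kind-nodes.sh my-cluster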

K3D Cluster

K3d is similar to Kind but is based on k3s. To run it with this registry proxy, you need settings like those shown below.

# docker-registry-proxy
docker run -d --name registry-proxy --restart=always \
-v /tmp/registry-proxy/mirror_cache:/docker_mirror_cache \
-v /tmp/registry-proxy/certs:/ca \
rpardini/docker-registry-proxy:0.6.4

export PROXY_HOST=registry-proxy
export PROXY_PORT=3128
export NOPROXY_LIST="localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.local,.svc"

cat <<EOF > /etc/k3d-proxy-config.yaml
apiVersion: k3d.io/v1alpha3
kind: Simple
name: mycluster
servers: 1
agents: 0
options:
    k3d:
       wait: true
       timeout: "60s"
    kubeconfig:
       updateDefaultKubeconfig: true
       switchCurrentContext: true
env:
  - envVar: HTTP_PROXY=http://$PROXY_HOST:$PROXY_PORT
    nodeFilters:
      - all
  - envVar: HTTPS_PROXY=http://$PROXY_HOST:$PROXY_PORT
    nodeFilters:
      - all
  - envVar: NO_PROXY='$NOPROXY_LIST'
    nodeFilters:
      - all
volumes:
  - volume: $REGISTRY_DIR/docker_mirror_certs/ca.crt:/etc/ssl/certs/registry-proxy-ca.pem
    nodeFilters:
      - all
EOF

k3d cluster create --config /etc/k3d-proxy-config.yaml

Configuring the Docker clients using Docker Desktop for Mac

Separate instructions for Mac clients are available in this dedicated Docker Desktop for Mac document.

Configuring the Docker clients / Kubernetes nodes / Linux clients

Let's say you set up the proxy on host 192.168.66.72; you can then curl http://192.168.66.72:3128/ca.crt to get the proxy CA certificate.

On each Docker host that is to use the cache:

  • Configure the Docker proxy to point at the caching server
  • Add the caching server CA certificate to the list of system trusted roots.
  • Restart dockerd

Do it all at once, tested on Ubuntu Xenial, Bionic, and Focal, all systemd based:

# Add environment vars pointing Docker to use the proxy
mkdir -p /etc/systemd/system/docker.service.d
cat << EOD > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.66.72:3128/"
Environment="HTTPS_PROXY=http://192.168.66.72:3128/"
EOD

### UBUNTU
# Get the CA certificate from the proxy and make it a trusted root.
curl http://192.168.66.72:3128/ca.crt > /usr/share/ca-certificates/docker_registry_proxy.crt
echo "docker_registry_proxy.crt" >> /etc/ca-certificates.conf
update-ca-certificates --fresh
###

### CENTOS
# Get the CA certificate from the proxy and make it a trusted root.
curl http://192.168.66.72:3128/ca.crt > /etc/pki/ca-trust/source/anchors/docker_registry_proxy.crt
update-ca-trust
###

# Reload systemd
systemctl daemon-reload

# Restart dockerd
systemctl restart docker.service
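
To verify a client picked up the proxy settings, one quick check (assuming a reasonably recent dockerd, which reports configured proxies in docker info):

docker info | grep -i proxy   # should show HTTP/HTTPS Proxy pointing at 192.168.66.72:3128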

Testing

Clear dockerd of everything not currently running: docker system prune -a -f (beware!)

Then do, for example, docker pull k8s.gcr.io/kube-proxy-amd64:v1.10.4 and watch the logs on the caching proxy; it should list a lot of MISSes.

Then, clean again, and pull again. You should see HITs! Success.

Do the same for docker pull ubuntu and rejoice.

Test your own registry caching and authentication the same way; you don't need docker login, or .docker/config.json anymore.
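
Put together, a test cycle might look like this sketch (the image tag is simply the example above):

# Cold cache: expect MISSes in the proxy logs
docker system prune -a -f
docker pull k8s.gcr.io/kube-proxy-amd64:v1.10.4

# Warm cache: prune and pull again, expect HITs
docker system prune -a -f
docker pull k8s.gcr.io/kube-proxy-amd64:v1.10.4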

Developing/Debugging

Since 0.4 there is a separate -debug version of the image, which includes nginx-debug, and (since 0.5.x) has a mitmproxy (actually mitmweb) inserted after the CONNECT proxy but before the caching logic, and a second mitmweb between the caching layer and DockerHub. This allows very in-depth debugging. Use sparingly, and definitely not in production.

docker run --rm --name docker_registry_proxy -it \
       -e DEBUG_NGINX=true -e DEBUG=true -e DEBUG_HUB=true -p 0.0.0.0:8081:8081 -p 0.0.0.0:8082:8082 \
       -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       rpardini/docker-registry-proxy:0.6.2-debug
  • DEBUG=true enables the mitmweb proxy between Docker clients and the caching layer, accessible on port 8081
  • DEBUG_HUB=true enables the mitmweb proxy between the caching layer and DockerHub, accessible on port 8082 (since 0.5.x)
  • DEBUG_NGINX=true enables nginx-debug and debug logging, which probably is too much. Seriously.

Gotchas

  • If you authenticate to a private registry and pull through the proxy, those images will be served to any client that can reach the proxy, even without authentication. Beware!
  • Repeat: this will make your private images very public if you're not careful.
  • Currently you cannot push images while using the proxy, which is a shame. PRs welcome. See the ALLOW_PUSH ENV in the Usage section.
  • Setting this up on Linux is relatively easy.
    • On Mac follow the instructions here.
    • On Windows follow the instructions here.

Why not use Docker's own registry, which has a mirror feature?

Yes, Docker offers Registry as a pull-through cache; unfortunately, it only covers the DockerHub case. It won't cache images from quay.io, k8s.gcr.io, gcr.io, or any such, including any private registries.

That means that your shiny new Kubernetes cluster is now a bandwidth hog, since every image will be pulled from the Internet on every Node it runs on, with no reuse.

This is due to the way the Docker "client" implements --registry-mirror: it only ever contacts mirrors for images with no repository reference (e.g., from DockerHub). When a repository is specified, dockerd goes directly there, via HTTPS (and also via HTTP if included in a --insecure-registry list), thus completely ignoring the configured mirror.
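
For reference, the mirror mechanism in question is the registry-mirrors key in /etc/docker/daemon.json; even with it set (the URL below is a placeholder), dockerd only ever consults the mirror for DockerHub images:

{
  "registry-mirrors": ["https://my-mirror.example.com"]
}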

Docker itself should provide this.

Yeah. Docker Inc should do it. So should NPM, Inc. Wonder why they don't. 😼

TODO:

  • Basic Docker-for-Mac set-up instructions
  • Basic Docker-for-Windows set-up instructions.
  • Test and make auth work with quay.io, unfortunately I don't have access to it (hint, hint, quay)
  • Hide the mitmproxy building code under a Docker build ARG.
  • "Developer Office" proxy scenario, where many developers on a fast LAN share a proxy for bandwidth and speed savings (already works for pulls, but messes up pushes, which developers tend to use a lot)


docker-registry-proxy's Issues

Insecure registry support

Hi, just wondering if this can be used to proxy insecure registries. I have tried to set it up but I get
Error response from daemon: received unexpected HTTP status: 502 Bad Gateway

This is because it's connecting to the proxy, which is trying to forward on 443 to the private registry (Docker first tries https and goes to the proxy even though I don't have HTTPS_PROXY configured), then returns Bad Gateway because the upstream server doesn't have 443 enabled. (Docker would normally retry on http, but doesn't on a 502 Bad Gateway - at least I think that's what the issue is.)

Certificates

This usefully rolls its own; but we have a 'real' certificate for our domain; is there a good way to wedge it in?

How to authenticate in gcr.io

More of a question than an issue: how do you authenticate to GCR? There is not exactly a user and a password.
Do you use an OAuth token? If so, how do you pass it on to the proxy?

Authorization Problem with harbor registry

Hi there,
when pulling an image from my private Harbor registry I get
Error response from daemon: unauthorized: authentication required
on the cache client.

The docker-registry-proxy logs say the following:

[03/Sep/2019:17:18:48 +0000] "/v2/" 401 87 "HOST:MY_HARBOR_REGISTRY" "PROXY-HOST:MY_HARBOR_REGISTRY" "UPSTREAM:IP_OF_MY_HARBOR_REGISTRY:443"
[03/Sep/2019:17:18:48 +0000] "/service/token" 200 978 "HOST:MY_HARBOR_REGISTRY" "PROXY-HOST:MY_HARBOR_REGISTRY" "UPSTREAM:IP_OF_MY_HARBOR_REGISTRY:443"
[03/Sep/2019:17:18:48 +0000] "/v2/transmart/transmart/manifests/1.0.0" 401 162 "HOST:MY_HARBOR_REGISTRY" "PROXY-HOST:MY_HARBOR_REGISTRY" "UPSTREAM:IP_OF_MY_HARBOR_REGISTRY:443"

But authentication seems to be configured, as the logs say:
Adding Auth for registry 'MY_HARBOR_REGISTRY' with user 'christian.knell'.

The same behaviour occurs when trying to pull a public repository, or even when doing
docker login MY_HARBOR_REGISTRY

Registry auth passthrough

Hi,

can you think of a way to pass the docker client auth data through the proxy cache?

The idea is not to maintain this central authorization data file, and to leave the pull-secret stuff up to Kubernetes.

Any idea would be helpful.

Thanx
Peter

Using upstream proxy

Hi,
It would be nice to be able to use an upstream proxy; this can be interesting when someone has a company proxy towards the internet.

So something like this:

docker_host1 ---
                + --> docker-registry-proxy --> company_proxy --> internet
docker_host2 ---

proxy_cache_revalidate when downloading from jfrog artifactory

Hi,

while working with the new nginx.conf (with manifest cache), I observed an issue when accessing JFrog's Artifactory repo.

With the nginx parameter proxy_cache_revalidate set to on, I get a 404 error when trying to access an object a second time (i.e. when it has expired), so I needed to set this parameter to "off".

Unfortunately I do not know the reason for this yet. Any idea where to look?

Thanx
Peter

Memory leak in mitmproxy

The Docker image defaults to debug enabled, which uses mitmproxy.

mitmproxy with the current settings gobbles a lot of memory and after a period of time is killed with OOM; since it is not recreated automatically, the proxy goes down.

If debug is to remain the default, mitmproxy's stream_large_bodies option could be used to avoid storing response bodies over a certain limit. This reduces the memory footprint.

I think it would be better to default debug to false.

docker push retry - EOF -- Randomly getting 401 errors

Hello,

I recently encountered a problem that occurs only randomly when pushing images. Some layers keep retrying, then suddenly end with EOF. Note that these errors were not encountered in the past, when the proxy was not used.

Error example:

The push refers to repository [test-repo.com/test/test]
 ....
 af0103c38359: Pushed
 d0347c582a73: Pushed
 222021755514: Layer already exists
 67c624cf5c38: Pushed
 2371e8a98212: Layer already exists
 a27518e43e49: Layer already exists
 5f7bfa7e154c: Layer already exists
 910d7fd9e23e: Layer already exists
 ea5003ab0221: Pushed
 4230ff7f2288: Layer already exists
 2c719774c1e1: Layer already exists
 f94641f1fe1f: Layer already exists
 ec62f19bb3aa: Layer already exists
 8b2e1d3f2e5f: Pushed
 864db58a2641: Retrying in 5 seconds
 864db58a2641: Retrying in 4 seconds
 864db58a2641: Retrying in 3 seconds
 864db58a2641: Retrying in 2 seconds
 864db58a2641: Retrying in 1 second
 864db58a2641: Retrying in 10 seconds
 [... the countdown repeats, with the backoff growing to 15 and then 20 seconds ...]
 864db58a2641: Retrying in 1 second
 EOF

I also found that some other users had this issue when running behind a proxy. I ran the registry proxy in debug mode, but didn't find any obvious warnings or errors in the proxy logs, besides the fact that I sometimes see a 401 response to the following GET request, which I do not know whether it is related to the pushing issue:

Request:

GET https://127.0.0.1:444/v2/ HTTP/1.1

Host: test-registry.test.com
User-Agent: docker/19.03.8 go/go1.12.17 git-commit/afacb8b7f0 kernel/4.15.0-20-generic os/linux arch/amd64 UpstreamClient(Go-http-client/1.1)
Accept-Encoding: gzip
Connection: close

Response:

HTTP/1.1 401 Unauthorized

Server: nginx/1.18.0
Date: Fri, 04 Dec 2020 09:41:49 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 87
Connection: close
Docker-Distribution-Api-Version: registry/2.0
Www-Authenticate: Bearer realm="https://git.test.com/jwt/auth",service="container_registry"
X-Content-Type-Options: nosniff

For info, I ran the debug image with the following arguments:

docker run -d --restart always --name docker_registry_proxy-debug -it \
       -e DEBUG=true \
       -e DEBUG_HUB=true \
       -e DEBUG_NGINX=true \
       -p 0.0.0.0:8083:8081 \
       -p 0.0.0.0:8084:8082 \
       -p 0.0.0.0:3128:3128 \
       -e ENABLE_MANIFEST_CACHE=true \
       -e CACHE_MAX_SIZE=400g \
       -e ALLOW_PUSH=true \
       -e REGISTRIES="test-registry.test.com" \
       -e AUTH_REGISTRIES="git.test.com/:test_user:test_pass" \
       -e MANIFEST_CACHE_PRIMARY_TIME="60d" \
       -e MANIFEST_CACHE_SECONDARY_TIME="60d" \
       -e MANIFEST_CACHE_DEFAULT_TIME="30d" \
       -v /mnt/docker-mirror-cache:/docker_mirror_cache \
       -v /mnt/docker-mirror-cache/ca:/ca \
       rpardini/docker-registry-proxy:0.6.1-debug

At first glance it seems that the authorization header is not sent, but I also wonder why these random requests towards /v2/ are happening.

I also looked through the whole nginx config, and added proxy_set_header X-Forwarded-Proto: https; in both server configs, but it didn't seem to help.
I'm still investigating the issue, but meanwhile I opened this one because there certainly seems to be something wrong.

Thanks.

Unable to pull images from quay.io

Does anyone use the proxy to access quay.io? As I understand it, the proxy does log in, but it seems something doesn't work, and I cannot understand what.

Looks like the multiple client request cache locking is not working

Hello, I've tested pulling the same image (2 GB) from multiple Docker client machines at the same time through the registry proxy.
As stated in the nginx.conf, it uses the "proxy_cache_lock on" feature, which is supposed to download the blob/image from the origin only once, but instead it downloads it multiple times, once per client request.
With only 1 client it took 1:43 minutes to download and distribute.
With 2 clients it took 3:05 minutes to download and distribute.
And with 4 clients it took 6:07 minutes to download and distribute.

Why do you think it works like that?

peer closed connection in SSL handshake while SSL handshaking to upstream

I'm facing this SSL handshake error when trying to pull a Kubernetes image:

$ docker pull k8s.gcr.io/kube-proxy-amd64:v1.10.4
Error response from daemon: received unexpected HTTP status: 502 Bad Gateway

And in this image's logs:

2019/08/09 12:44:59 [alert] 105#105: 1024 worker_connections are not enough
2019/08/09 12:44:59 [error] 106#106: *16512 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://74.125.140.82:443/v2/", host: "k8s.gcr.io"
- [09/Aug/2019:12:44:59 +0000] "/v2/" 502 173 "HOST:k8s.gcr.io" "PROXY-HOST:k8s.gcr.io" "UPSTREAM:74.125.140.82:443"
2019/08/09 12:45:00 [alert] 105#105: 1024 worker_connections are not enough
2019/08/09 12:45:00 [error] 106#106: *17538 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/kube-proxy-amd64/manifests/v1.10.4 HTTP/1.1", upstream: "https://74.125.140.82:443/v2/kube-proxy-amd64/manifests/v1.10.4", host: "k8s.gcr.io"
- [09/Aug/2019:12:45:00 +0000] "/v2/kube-proxy-amd64/manifests/v1.10.4" 502 173 "HOST:k8s.gcr.io" "PROXY-HOST:k8s.gcr.io" "UPSTREAM:74.125.140.82:443"

I'm using Docker Desktop 2.1.0.0 (edge channel), which bundles Docker Engine 19.03.1. I created the container with the following command:

$ docker run --name docker-registry_proxy \
             --restart=always \
             --publish 3128:3128 \
             --volume /private/var/lib/docker-registry/cache:/docker_mirror_cache \
             --volume /private/var/lib/docker-registry/ca:/ca \
             --env REGISTRIES="k8s.gcr.io gcr.io quay.io" \
             rpardini/docker-registry-proxy:0.2.4

I'm not passing the AUTH_REGISTRIES env variable as I only want to cache image pulling, I don't want to pull private images.

To setup the proxy on the Mac I did the following:

  1. Download the ca.crt file:
$ curl http://127.0.0.1:3128/ca.crt > ~/Downloads/ca.crt
  2. I double clicked it, which opened the Keychain Access app, and I imported it as a system certificate.
  3. I then selected the certificate in the Keychain Access app and chose Always Trust under "When using this certificate:".
  4. I went to Docker Desktop > Preferences > Resources > Proxies, filled in both HTTP and HTTPS with http://127.0.0.1:3128/, and restarted Docker.

Can you please help me in order to get this setup working?


Trying to grab some information using curl ...

Here is a curl to k8s.gcr.io without the proxy:

curl -vv https://k8s.gcr.io/
*   Trying 2a00:1450:400c:c09::52...
* TCP_NODELAY set
*   Trying 173.194.76.82...
* TCP_NODELAY set
* Connected to k8s.gcr.io (173.194.76.82) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-CHACHA20-POLY1305
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=US; ST=California; L=Mountain View; O=Google LLC; CN=*.gcr.io
*  start date: Jul 29 17:26:57 2019 GMT
*  expire date: Oct 27 17:26:57 2019 GMT
*  subjectAltName: host "k8s.gcr.io" matched cert's "*.gcr.io"
*  issuer: C=US; O=Google Trust Services; CN=GTS CA 1O1
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7f86b1002600)
> GET / HTTP/2
> Host: k8s.gcr.io
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 302
< location: https://cloud.google.com/container-registry/
< content-length: 241
< date: Fri, 09 Aug 2019 13:21:11 GMT
< content-type: text/html; charset=UTF-8
< server: Docker Registry
< x-xss-protection: 0
< x-frame-options: SAMEORIGIN
< alt-svc: quic=":443"; ma=2592000; v="46,43,39"
<
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="https://cloud.google.com/container-registry/">here</A>.
</BODY></HTML>
* Connection #0 to host k8s.gcr.io left intact

And here with the proxy:

HTTPS_PROXY="http://192.168.178.26:3128/" curl -vv https://k8s.gcr.io/
*   Trying 192.168.178.26...
* TCP_NODELAY set
* Connected to 192.168.178.26 (192.168.178.26) port 3128 (#0)
* Establish HTTP proxy tunnel to k8s.gcr.io:443
> CONNECT k8s.gcr.io:443 HTTP/1.1
> Host: k8s.gcr.io:443
> User-Agent: curl/7.54.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection Established
< Proxy-agent: nginx
<
* Proxy replied OK to CONNECT request
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: C=NL; ST=Noord Holland; L=Amsterdam; O=ME; OU=IT; CN=DockerMirrorBox Web Cert d9add2a9f015 2019.08.09 13:09
*  start date: Aug  9 13:09:16 2019 GMT
*  expire date: Aug  8 13:09:16 2020 GMT
*  subjectAltName: host "k8s.gcr.io" matched cert's "k8s.gcr.io"
*  issuer: C=NL; ST=Noord Holland; L=Amsterdam; O=ME; OU=IT; CN=DockerMirrorBox Intermediate IA d9add2a9f015 2019.08.09 13:09
*  SSL certificate verify ok.
> GET / HTTP/1.1
> Host: k8s.gcr.io
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.14.0
< Date: Fri, 09 Aug 2019 13:25:38 GMT
< Content-Type: text/html
< Content-Length: 173
< Connection: keep-alive
<
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.14.0</center>
</body>
</html>
* Connection #0 to host 192.168.178.26 left intact

modify nginx.conf to forward corporate proxy

Hello, first of all, thanks for making good image and solution.

Now I'm trying to apply this good solution in my office.

And we have corporate proxy such as http://x.x.x.x:8080 and have certificates for this host.

In this situation, what changes do I need to make so that nginx works properly?

I tried..

  1. Update certificates

Add company certificates:

RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
RUN mkdir -p /usr/local/share/ca-certificates/
COPY corporate.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates

  2. Change the nginx conf

Forward proxy for non-CONNECT requests:

location / {
    add_header "Content-type" "text/plain" always;
    proxy_pass http://x.x.x.x:8080;
    return 200 "docker-registry-proxy: The docker caching proxy is working!";
}

but it doesn't work.

Please check when you have time. Thanks!

Allow docker-client to use its own authentication (docker login), if present, otherwise use configured registry auth

Hi,

I need to have both options for auth.docker.io:
a) a paid team key
b) the option of pull secrets in K8s, to give users the possibility to access their private repos.

Out of the box, this did not work. So I changed

    # Add the authentication info, if the map matched the target domain.
    proxy_set_header Authorization $finalAuth;

to:

    if ( $http_authorization = "" ) {
        set $myfinalAuth $finalAuth;
    }
    if ( $http_authorization != "" ) {
        set $myfinalAuth $http_authorization;
    }

    proxy_set_header Authorization $myfinalAuth;

Hope this helps others as well

Peter

Handle DockerHub's imminent rate limiting/throttling implementation

See https://www.docker.com/blog/scaling-docker-to-serve-millions-more-developers-network-egress/
This has the potential to be disastrous for, e.g., large k8s clusters in which almost every node will try to pull the same image.
Docker says it will rate limit

  • manifest requests
  • by IP address

We already have everything in place to actually cache manifest requests. See https://github.com/rpardini/docker-registry-proxy/blob/master/nginx.conf#L288 -- they're cached for 1 second only, and turning this up would IMHO avoid the rate limits completely. Unfortunately it would also cache mutable tags (eg, :latest) which could drive people crazy.

So maybe

  • an ENV var to enable longer caching of manifest requests and completely avoid ratelimiting
  • an ENV var to exclude some manifest names from caching

Issues running with podman

First of all thanks for this great project; it's a great solution, and has saved gigabytes already.

I wish to run this container with podman so that I can use it as a proxy for my local docker installation.

I ran into an issue when I run it as follows (converting the docker run to podman run):

sudo podman run --rm --name docker_registry_proxy -it -p 0.0.0.0:3128:3128 -v /media/data/kube-dev-docker-registry-proxy/docker_mirror_cache:/docker_mirror_cache -v /media/data/kube-dev-docker-registry-proxy/docker_mirror_certs:/ca -e REGISTRIES="k8s.gcr.io gcr.io quay.io" -e AUTH_REGISTRIES="" rpardini/docker-registry-proxy:0.2.4
Adding certificate for registry: docker.caching.proxy.internal
Adding certificate for registry: registry-1.docker.io
Adding certificate for registry: auth.docker.io
Adding certificate for registry: k8s.gcr.io
Adding certificate for registry: gcr.io
Adding certificate for registry: quay.io
INFO: Will create certificate with names DNS:docker.caching.proxy.internal,DNS:registry-1.docker.io,DNS:auth.docker.io,DNS:k8s.gcr.io,DNS:gcr.io,DNS:quay.io
INFO: CA already exists. Good. We'll reuse it.
INFO: Generate IA key
INFO: Create a signing request for the IA: d66094a91f46 2020.03.09 22:42
INFO: Sign the IA request with the CA cert and key, producing the IA cert
INFO: Initialize the serial number for signed certificates
INFO: Create the key (w/o passphrase..)
INFO: Create the signing request, using extensions
INFO: Sign the request, using the intermediate cert and key
INFO: Concatenating fullchain.pem...
INFO: Concatenating fullchain_with_key.pem
Upstream SSL certificate verification enabled.
Testing nginx config...
2020/03/09 22:42:43 [emerg] 56#56: invalid port in resolver "2001:4860:4860::8888" in /etc/nginx/resolvers.conf:1
nginx: [emerg] invalid port in resolver "2001:4860:4860::8888" in /etc/nginx/resolvers.conf:1
nginx: configuration file /etc/nginx/nginx.conf test failed

The contents of the /etc/resolv.conf when running podman is

bash-4.4# cat /etc/resolv.conf 
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 2001:4860:4860::8888
nameserver 2001:4860:4860::8844

This differs from my host /etc/resolv.conf

cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.

nameserver 127.0.0.1

So I guess podman is adding those IPv6 addresses.

I managed to make podman work by adding the --dns argument:

sudo podman run --rm --dns 127.0.0.11 --name docker_registry_proxy -it -p 0.0.0.0:3128:3128 -v /media/data/kube-dev-docker-registry-proxy/docker_mirror_cache:/docker_mirror_cache -v /media/data/kube-dev-docker-registry-proxy/docker_mirror_certs:/ca -e REGISTRIES="k8s.gcr.io gcr.io quay.io" -e AUTH_REGISTRIES="" rpardini/docker-registry-proxy:0.2.4
Adding certificate for registry: docker.caching.proxy.internal
Adding certificate for registry: registry-1.docker.io
Adding certificate for registry: auth.docker.io
Adding certificate for registry: k8s.gcr.io
Adding certificate for registry: gcr.io
Adding certificate for registry: quay.io
INFO: Will create certificate with names DNS:docker.caching.proxy.internal,DNS:registry-1.docker.io,DNS:auth.docker.io,DNS:k8s.gcr.io,DNS:gcr.io,DNS:quay.io
INFO: CA already exists. Good. We'll reuse it.
INFO: Generate IA key
INFO: Create a signing request for the IA: 645af140aa11 2020.03.09 22:44
INFO: Sign the IA request with the CA cert and key, producing the IA cert
INFO: Initialize the serial number for signed certificates
INFO: Create the key (w/o passphrase..)
INFO: Create the signing request, using extensions
INFO: Sign the request, using the intermediate cert and key
INFO: Concatenating fullchain.pem...
INFO: Concatenating fullchain_with_key.pem
Upstream SSL certificate verification enabled.
Testing nginx config...
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Starting nginx! Have a nice day.

Would it be possible to make a change in the entrypoint to filter out the 'invalid' nameservers when generating the nginx resolvers?

quay.io private repository and auth

HIT [02/Nov/2018:18:50:16 +0000] "/v2/" 401 40 "HOST:quay.io" "PROXY-HOST:quay.io" "UPSTREAM:-"
HIT [02/Nov/2018:18:50:17 +0000] "/v2/auth" 200 1063 "HOST:quay.io" "PROXY-HOST:quay.io" "UPSTREAM:-"
MISS [02/Nov/2018:18:50:18 +0000] "/v2/<repository-redacted>/<image-name-redacted>/manifests/4.2.0-8" 401 40 "HOST:quay.io" "PROXY-HOST:quay.io" "UPSTREAM:50.16.237.72:443"

You'll see that /v2 is a HIT (401), /v2/auth is a HIT (200) (is it supposed to cache it?), but the image itself is a MISS (401).

Docker will say: Error response from daemon: unauthorized: authentication required

Authentication was configured:

Adding Auth for registry 'quay.io' with user '<redacted>'.

Pulling from public registries obviously works just fine, as does the caching.

Originally posted by @outworlder in #1 (comment)

Unable to auth with Docker Hub and private registries

Using registries with public images (registry-1.docker.io, gcr.io, quay.io), everything is working great.

However, setting up auth with Docker Hub or my own private hub results in a 401. Using docker login from the box where HTTPS_PROXY is defined also fails.

I wonder if it works for you and it is just me.

http-proxy.conf content

[Service]
Environment="HTTPS_PROXY=http://172.27.44.181:3128"

Command run:

docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       -e REGISTRIES="k8s.gcr.io gcr.io quay.io" \
       -e AUTH_REGISTRIES="registry-1.docker.io:gsengun:THE_PASS" \
       rpardini/docker-registry-proxy:latest

And here is the logs:

HIT [21/Jul/2018:00:25:14 +0000] "/v2/" 401 87 "HOST:registry-1.docker.io" "PROXY-HOST:registry-1.docker.io" "UPSTREAM:-"
HIT [21/Jul/2018:00:25:14 +0000] "/token" 200 4322 "HOST:auth.docker.io" "PROXY-HOST:auth.docker.io" "UPSTREAM:-"
MISS [21/Jul/2018:00:25:14 +0000] "/v2/gsengun/spring-demo-app/manifests/latest" 401 166 "HOST:registry-1.docker.io" "PROXY-HOST:registry-1.docker.io" "UPSTREAM:52.54.155.177:443"

I have tested with docker versions >= 17.03.0-ce

Question - Does this work in an airgap environment?

Does docker-registry-proxy work in an airgap environment? Is a connection to the external network required even after the images are cached?

I configured docker-registry-proxy to cache Docker images while the public interface was up.
Then I brought down the public interface.

When requests come to docker-registry-proxy from clients, it looks like it still references the external Docker registry,
and docker pull on the clients fails.

One of the Docker docs pages has the following description:
https://docs.docker.com/registry/recipes/mirror/

What if the content changes on the Hub?

When a pull is attempted with a tag, the Registry checks the remote to ensure if it has the latest version of the requested content. Otherwise, it fetches and caches the latest content.

Can you please clarify?

What is next after docker run?

Hello!
I'm trying to run docker-registry-proxy.

Run docker:

docker run --rm --name docker_registry_proxy -it -p 0.0.0.0:3128:3128 -v $(pwd)/docker_mirror_cache:/docker_mirror_cache -v $(pwd)/docker_mirror_certs:/ca -e REGISTRIES="k8s.gcr.io gcr.io quay.io" rpardini/docker-registry-proxy:0.2.4
Adding certificate for registry: docker.caching.proxy.internal
Adding certificate for registry: registry-1.docker.io
Adding certificate for registry: auth.docker.io
Adding certificate for registry: k8s.gcr.io
Adding certificate for registry: gcr.io
Adding certificate for registry: quay.io
INFO: Will create certificate with names DNS:docker.caching.proxy.internal,DNS:registry-1.docker.io,DNS:auth.docker.io,DNS:k8s.gcr.io,DNS:gcr.io,DNS:quay.io
INFO: CA already exists. Good. We'll reuse it.
INFO: Generate IA key
INFO: Create a signing request for the IA: 6305aa7a7ef7 2019.09.16 07:38
INFO: Sign the IA request with the CA cert and key, producing the IA cert

What is next after docker run?

The port is not listening:

[root@docker-caching-proxy-multiple-private ~]# ss -tnlp | grep 3128
[root@docker-caching-proxy-multiple-private ~]# 

The container is not running:

[root@docker-caching-proxy-multiple-private ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Thanks!

[302 Found] Requests are cached with Cloudfront signed url

We discovered that Quay.io responses are cached with their signed CloudFront URLs.

KEY: https://quay.io/v2/calico/node/blobs/sha256:3b1a5d98aae992ef8a6ea77490d003095ed5cb99a33153fcbd48d0ae9f802c8b
HTTP/1.1 302 FOUND
Server: nginx/1.12.1
Date: Thu, 02 Jan 2020 15:18:31 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 1261
Connection: close
Location: https://d3uo42mtx6z2cr.cloudfront.net/sha256/3b/3b1a5d98aae992ef8a6ea77490d003095ed5cb99a33153fcbd48d0ae9f802c8b?Expires=1577978911&Signature=JqQxETYKFYbS1Dl4ysRQTW6BG5JHALCecY5gaN8XZAIgp99kMhpl~8U93EIhiSXGZYCqd0FAUjSE8DpafbABTCzeUp8y2Ixz9JVF-7OoTTcC3bRoaTw5ETdEM3bpE3NsQ2PQrHSgiOOZG2K2bsJMG8p4quX5DO7BadWe6Cr-qT6PJdpGDDhqdKDGrc9g~xn-pEBc7tO-45u1rWgqEbfFMuhGP4T40J3aUesyo1Byu2YKqHeZ4iQTAauSVCTCMIoCaHJ3VDawjyeMznamds7ZUj7gFF4E0k~yeQ~Q8kfXRfW~Vs4a2Ik9oOKhsiVi6ZuGUk4boHImryolCbicpzjPXA__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA
Accept-Ranges: bytes
Docker-Content-Digest: sha256:3b1a5d98aae992ef8a6ea77490d003095ed5cb99a33153fcbd48d0ae9f802c8b
Cache-Control: max-age=31536000
X-Frame-Options: DENY
Strict-Transport-Security: max-age=63072000; preload

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: <a href="https://d3uo42mtx6z2cr.cloudfront.net/sha256/3b/3b1a5d98aae992ef8a6ea77490d003095ed5cb99a33153fcbd48d0ae9f802c8b?Expires=1577978911&amp;Signature=JqQxETYKFYbS1Dl4ysRQTW6BG5JHALCecY5gaN8XZAIgp99kMhpl~8U93EIhiSXGZYCqd0FAUjSE8DpafbABTCzeUp8y2Ixz9JVF-7OoTTcC3bRoaTw5ETdEM3bpE3NsQ2PQrHSgiOOZG2K2bsJMG8p4quX5DO7BadWe6Cr-qT6PJdpGDDhqdKDGrc9g~xn-pEBc7tO-45u1rWgqEbfFMuhGP4T40J3aUesyo1Byu2YKqHeZ4iQTAauSVCTCMIoCaHJ3VDawjyeMznamds7ZUj7gFF4E0k~yeQ~Q8kfXRfW~Vs4a2Ik9oOKhsiVi6ZuGUk4boHImryolCbicpzjPXA__&amp;Key-Pair-Id=APKAJ67PQLWGCSP66DGA">https://d3uo42mtx6z2cr.cloudfront.net/sha256/3b/3b1a5d98aae992ef8a6ea77490d003095ed5cb99a33153fcbd48d0ae9f802c8b?Expires=1577978911&amp;Signature=JqQxETYKFYbS1Dl4ysRQTW6BG5JHALCecY5gaN8XZAIgp99kMhpl~xxx

Time out issue on any image

I'm running the registry proxy and it seems it's not reaching the remote registry URLs. I'm running this:

docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 \
       -p 8081:8081 \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       -e REGISTRIES="k8s.gcr.io gcr.io quay.io docker.io" \
       -e AUTH_REGISTRIES="auth.docker.io:userX:passX" \
       rpardini/docker-registry-proxy

And then, I'm getting the following error:

.
.
Starting nginx! Have a nice day.
Web server listening at http://0.0.0.0:8081/
Proxy server listening at http://*:443


127.0.0.1:43248: clientconnect
127.0.0.1:43248: Certificate verification error for 127.0.0.1: self signed certificate in certificate chain (errno: 19, depth: 2)
127.0.0.1:43248: Ignoring server verification error, continuing with connection


2018/12/07 17:25:08 [error] 85#85: *3 registry-1.docker.io could not be resolved (110: Operation timed out), client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", host: "registry-1.docker.io"
- [07/Dec/2018:17:25:08 +0000] "/v2/" 502 173 "HOST:registry-1.docker.io" "PROXY-HOST:registry-1.docker.io" "UPSTREAM:-"
127.0.0.1:43248: Error in HTTP connection: TcpDisconnect("(32, 'EPIPE')",)
127.0.0.1:43248: clientdisconnect

Any idea why this would be happening?

Access to multiple private registries on one IP does not work

Hi,

I need to configure your lovely project to access multiple registries living on one IP address.
Multiple means a lot in my use case, so it would be best if the docker pull URL could be interpreted.
Let me give you an example:

docker.local.foo.bar:50000/image1:latest
docker.local.foo.bar:50020/image1:latest
docker.local.foo.bar:50030/image1:latest
Authentication is however unique (at least for now) to the registry host (docker.local.foo.bar).

Unfortunately my nginx knowledge is limited, so any hint is highly appreciated.

Thanx
Peter

run as nginx user instead of root, plus don't open port 80.

./entrypoint.sh

2020/05/01 22:41:20 [warn] 510#510: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
2020/05/01 22:41:20 [emerg] 510#510: bind() to 0.0.0.0:80 failed (13: Permission denied)
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
nginx: configuration file /etc/nginx/nginx.conf test failed

@rpardini

Error while try access DNS

I'm getting error in output of docker log:

2019/06/27 12:00:40 [error] 105#105: recv() failed (111: Connection refused) while resolving, resolver: 10.254.1.193:53

When run a docker pull:

$ time docker pull k8s.gcr.io/kube-proxy-amd64:v1.10.4                                                                                                                            9:00:15
Error response from daemon: error unmarshalling content: invalid character 'd' looking for beginning of value
docker pull k8s.gcr.io/kube-proxy-amd64:v1.10.4  0.06s user 0.03s system 0% cpu 27.377 total

Did you have a tip for solve this issue?

Hostname not properly set :/

Somehow the nginx cache fails to set the hostname properly. This prevents the cache from functioning in a reverse-proxied docker registry scenario.

I will submit a PR with my fix.

Run `create_ca_cert.sh` as a single command

Hi, first of all, thank you for this!

I don't know much about certificates, and I am not sure if what I'm going to ask makes sense.

It could be nice if docker-registry-proxy could be run as docker run docker-registry-proxy create-ca-cert, restarting all the necessary things and then starting the proxy as usual.

What do you think? This is because I want to avoid curling the cert in a loop, stopping the service, refreshing the certs and rerunning the proxy again.

Thank you

proxy connection is very slow

Hi. Thank you for the project :)

I have an issue with download speed. I have a 1 Gbps network on my node, and the download speed between the proxy and the client node is fine (tested with iperf). But when I try to download a container (e.g. docker pull ubuntu) it's very slow (less than 1 Mbps).

I've tried docker-compose:

version: '3.8'
services:
  registry_proxy:
    image: rpardini/docker-registry-proxy:0.3.0
    restart: always
    ports:
      - 3128:3128
    environment:
      REGISTRIES: mcr.microsoft.com
      DEBUG: "true"
    volumes:
      - ./docker_mirror_cache:/docker_mirror_cache
      - ./docker_mirror_certs:/ca

and command:

docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       -e REGISTRIES="mcr.microsoft.com" \
       -e DEBUG="true" \
       rpardini/docker-registry-proxy:0.3.0

In both cases, I'm seeing these logs on the server:

HIT [05/May/2020:04:44:23 +0000] "/v2/library/ubuntu/blobs/sha256:d51af753c3d3a984351448ec0f85ddafc580680fd6dfce9f4b09fdb367ee1e3e" 200 28556247 "HOST:registry-1.docker.io" "PROXY-HOST:registry-1.docker.io" "UPSTREAM:-"
127.0.0.1:60242: clientconnect
HIT [05/May/2020:04:44:24 +0000] "/v2/library/ubuntu/blobs/sha256:fee5db0ff82f7aa5ace63497df4802bbadf8f2779ed3e1858605b791dc449425" 200 163 "HOST:registry-1.docker.io" "PROXY-HOST:registry-1.docker.io" "UPSTREAM:-"
127.0.0.1:60242: Certificate verification error for 127.0.0.1: self signed certificate in certificate chain (errno: 19, depth: 2)
127.0.0.1:60242: Ignoring server verification error, continuing with connection
127.0.0.1:60242: clientdisconnect

Use Docker Proxy without a certificate

Hi
I want to use this service to mirror Docker registries, but without importing the certificate, so that it's easy to download Kubernetes Docker images over HTTP.
I need to cache all images.
Is this possible?

update failed packages

Hello, so far this registry proxy suits my needs, with only one pitfall that I found a quick workaround for:
a pull failed because a host wasn't included in the "docker run" command, and I didn't want to lose my downloaded stuff.

At the point of this failure I'm using the proxy to cache Eclipse Che images, which are downloaded from many sources; the first time, I forgot to include Cloudflare in the list and got "invalid URL signature" when a Che workspace was being built from a factory devfile.

The workaround:
I stop the daemonized registry proxy I was running (using the -d flag in addition to the command suggested in the readme), add the missing repo URL to my docker run command, use a different folder for the image cache (a previously created empty folder), and use the same ca folder as before.

The empty cache folder will start getting filled with images after I do "docker pull some-registry.io/failing-package:latest" (from the client configured to use the registry proxy as its image source).

Once this process is done and the cache has all the newly downloaded images, copy the contents of the new cache folder to the old cache folder.

Finally, fix permissions: chown -R messagebus:root /the-cache-folder

Now stop the running registry-proxy again.

Run my docker-registry-proxy cache as always with -d to leave it running as a daemon, and bring back the original cache folder with the new content copied over it.

Now stuff works: I have my whole cache without needing to redownload it, and the new content has been provisioned the right way.

Cheers, and thanks for your effort; this is very helpful for slow bandwidth and big apps (like Che).

gcr.io - problem with service account

Have you ever tested connecting to a gcr.io private registry?
The content of the service account contains many ":" characters, and when you parse hostname, username and password you use cut -d ":" -f x.

Maybe you have a way to send the content of the service account in this ENV? I don't have any idea :( I think a better way would be sending the password in base64.

Otherwise everything works great with gcr.io, but I must set docker.auth.map manually. Thank you for the good stuff.

Add docker-compose example

Attempting to launch rpardini/docker-registry-proxy from docker-compose. It creates the certs, then throws an error and halts while testing /etc/nginx/nginx.conf.

Versions Tried

0.3.0
0.4.0
0.5.0
0.6.0
latest

docker-compose.yml

version: '3.7'

services:
  docker_registry_proxy:
    image: rpardini/docker-registry-proxy:0.6.0
    env_file:
      - ./secrets.env
    environment:
      - CACHE_MAX_SIZE=256g
      - ENABLE_MANIFEST_CACHE=true
    volumes:
      - /mnt/easystore/docker_mirror_cache:/docker_mirror_cache
      - ./docker-mirror-certs:/ca
    ports:
      - 3128:3128

secrets.env

# Used By Docker Hub Proxy
REGISTRIES="auth.docker.io"
AUTH_REGISTRIES="auth.docker.io:mmxca:********"

Generated Output

docker_registry_proxy_1  | Adding certificate for registry: docker.caching.proxy.internal
docker_registry_proxy_1  | Adding certificate for registry: registry-1.docker.io
docker_registry_proxy_1  | Adding certificate for registry: auth.docker.io
docker_registry_proxy_1  | Adding certificate for registry: auth.docker.io
docker_registry_proxy_1  | INFO: Will create certificate with names DNS:docker.caching.proxy.internal,DNS:registry-1.docker.io,DNS:auth.docker.io,DNS:auth.docker.io
docker_registry_proxy_1  | INFO: No CA was found. Generating one.
docker_registry_proxy_1  | INFO: *** Please *** make sure to mount /ca as a volume -- if not, everytime this container starts, it will regenerate the CA and nothing will work.
docker_registry_proxy_1  | Generating RSA private key, 4096 bit long modulus (2 primes)
docker_registry_proxy_1  | .................................................++++
docker_registry_proxy_1  | .....++++
docker_registry_proxy_1  | e is 65537 (0x010001)
docker_registry_proxy_1  | INFO: generate CA cert with key and self sign it: 4ff17d5c6e57 2020.12.01 18:06
docker_registry_proxy_1  | INFO: Generate IA key
docker_registry_proxy_1  | INFO: Create a signing request for the IA: 4ff17d5c6e57 2020.12.01 18:06
docker_registry_proxy_1  | INFO: Sign the IA request with the CA cert and key, producing the IA cert
docker_registry_proxy_1  | INFO: Initialize the serial number for signed certificates
docker_registry_proxy_1  | INFO: Create the key (w/o passphrase..)
docker_registry_proxy_1  | INFO: Create the signing request, using extensions
docker_registry_proxy_1  | INFO: Sign the request, using the intermediate cert and key
docker_registry_proxy_1  | INFO: Concatenating fullchain.pem...
docker_registry_proxy_1  | INFO: Concatenating fullchain_with_key.pem
docker_registry_proxy_1  | Adding Auth for registry '"auth.docker.io' with user 'mmxca'.
docker_registry_proxy_1  | Upstream SSL certificate verification enabled.
docker_registry_proxy_1  | Testing nginx config...
docker_registry_proxy_1  | 2020/12/01 18:06:29 [emerg] 57#57: unexpected "a" in /etc/nginx/docker.auth.map:1
docker_registry_proxy_1  | nginx: [emerg] unexpected "a" in /etc/nginx/docker.auth.map:1
docker_registry_proxy_1  | nginx: configuration file /etc/nginx/nginx.conf test failed

Let me know if there is anything I can do to help troubleshoot.
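
A likely cause, judging from the Adding Auth for registry '"auth.docker.io' with user 'mmxca'. line above: docker-compose's env_file passes values verbatim, without stripping quotes, so the literal double quotes from secrets.env end up inside REGISTRIES and AUTH_REGISTRIES and, from there, inside /etc/nginx/docker.auth.map, which nginx then refuses to parse. A sketch of the same secrets.env with the quotes removed:

# secrets.env -- env_file values are taken verbatim, so do not quote them
REGISTRIES=auth.docker.io
AUTH_REGISTRIES=auth.docker.io:mmxca:********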

do not make any upstream request when asking for a cached image by sha.

When the upstream registry is down, we want to remain available, and we also want to avoid making these requests. Is it possible to return cached replies without making upstream requests?

Reproduce

  1. On a node pulling through the proxy server:
docker rmi alpine@sha256:b276d875eeed9c7d3f1cfa7edb06b22ed22b14219a7d67c52c56612330348239; docker run --rm alpine@sha256:b276d875eeed9c7d3f1cfa7edb06b22ed22b14219a7d67c52c56612330348239 ls
  2. On the proxy, tail the registry cache container logs:
- [17/Apr/2020:21:50:36 +0000] "/v2/" 401 87 "HOST:registry-1.docker.io" "PROXY-HOST:registry-1.docker.io" "UPSTREAM:3.210.179.11:443"
- [17/Apr/2020:21:50:37 +0000] "/token" 200 4197 "HOST:auth.docker.io" "PROXY-HOST:auth.docker.io" "UPSTREAM:52.206.192.146:443"
- [17/Apr/2020:21:50:37 +0000] "/v2/library/alpine/manifests/sha256:b276d875eeed9c7d3f1cfa7edb06b22ed22b14219a7d67c52c56612330348239" 200 1638 "HOST:registry-1.docker.io" "PROXY-HOST:registry-1.docker.io" "UPSTREAM:3.209.173.81:443"
- [17/Apr/2020:21:50:37 +0000] "/v2/library/alpine/manifests/sha256:cb8a924afdf0229ef7515d9e5b3024e23b3eb03ddbba287f4a19c6ac90b8d221" 200 528 "HOST:registry-1.docker.io" "PROXY-HOST:registry-1.docker.io" "UPSTREAM:3.224.11.4:443"
HIT [17/Apr/2020:21:50:38 +0000] "/v2/library/alpine/blobs/sha256:a187dde48cd289ac374ad8539930628314bc581a481cdb41409c9289419ddb72" 200 1509 "HOST:registry-1.docker.io" "PROXY-HOST:registry-1.docker.io" "UPSTREAM:-"
HIT [17/Apr/2020:21:50:38 +0000] "/v2/library/alpine/blobs/sha256:aad63a9339440e7c3e1fff2b988991b9bfb81280042fa7f39a5e327023056819" 200 2803255 "HOST:registry-1.docker.io" "PROXY-HOST:registry-1.docker.io" "UPSTREAM:-"
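
For reference, a hedged nginx-level sketch of what "serve from cache without hitting the upstream" could look like for digest-addressed manifests, which are immutable by definition. This is not the project's shipped config; it would sit alongside the manifest-caching locations shown elsewhere on this page:

    # Manifests requested by sha256 digest never change, so cache them long
    # and serve the cached copy when the upstream errors out or times out.
    location ~ ^/v2/(.*)/manifests/sha256: {
        set $docker_proxy_request_type "manifest-by-digest";
        proxy_cache_valid 200 60d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    }

Note that the initial /v2/ and /token round-trips in the log above still hit the upstream; surviving a full outage would also require answering those locally.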

Handling requests for registry

Hi,

I would like to know how you handle the registry requests in your Docker container. For example, on the client side I just need to configure the below; could you please explain a bit more?

cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.66.72:3128/"
Environment="HTTPS_PROXY=http://192.168.66.72:3128/"
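
(For completeness: on a systemd host, that drop-in only takes effect after a daemon reload and a Docker restart.)

sudo systemctl daemon-reload
sudo systemctl restart docker.service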

@rpardini

SSL Handshake Upstream Issue

So, this appears to be the same issue as #18.

I've tried latest, 0.3.0-beta2 and 0.3.0-beta1, and I see the same issue each time.

Logs:

➜ docker run --rm --name docker_registry_proxy -it \
      -p 0.0.0.0:3128:3128 \
      -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
      -v $(pwd)/docker_mirror_certs:/ca \
      rpardini/docker-registry-proxy:latest
Adding certificate for registry: docker.caching.proxy.internal
Adding certificate for registry: registry-1.docker.io
Adding certificate for registry: auth.docker.io
Adding certificate for registry: k8s.gcr.io
Adding certificate for registry: gcr.io
Adding certificate for registry: quay.io
INFO: Will create certificate with names DNS:docker.caching.proxy.internal,DNS:registry-1.docker.io,DNS:auth.docker.io,DNS:k8s.gcr.io,DNS:gcr.io,DNS:quay.io
INFO: CA already exists. Good. We'll reuse it.
INFO: Generate IA key
INFO: Create a signing request for the IA: f64345f700bd 2020.09.09 21:32
INFO: Sign the IA request with the CA cert and key, producing the IA cert
INFO: Initialize the serial number for signed certificates
INFO: Create the key (w/o passphrase..)
INFO: Create the signing request, using extensions
INFO: Sign the request, using the intermediate cert and key
INFO: Concatenating fullchain.pem...
INFO: Concatenating fullchain_with_key.pem
Adding Auth for registry 'some.authenticated.registry' with user 'oneuser'.
Adding Auth for registry 'another.registry' with user 'user'.
Upstream SSL certificate verification enabled.
Testing nginx config...
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Starting nginx! Have a nice day.
2020/09/10 18:02:50 [alert] 67#67: *1128 1024 worker_connections are not enough while connecting to upstream, client: 172.17.0.1, server: _, request: "CONNECT 52.72.232.213:443 HTTP/1.1", host: "52.72.232.213:443"
2020/09/10 18:02:51 [error] 67#67: *3 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://52.72.232.213:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:02:51 [warn] 67#67: *3 upstream server temporarily disabled while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://52.72.232.213:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:02:52 [alert] 67#67: *2218 1024 worker_connections are not enough while connecting to upstream, client: 172.17.0.1, server: _, request: "CONNECT 3.218.162.19:443 HTTP/1.1", host: "3.218.162.19:443"
2020/09/10 18:02:52 [error] 67#67: *3 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://3.218.162.19:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:02:52 [warn] 67#67: *3 upstream server temporarily disabled while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://3.218.162.19:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:02:54 [alert] 67#67: *3326 1024 worker_connections are not enough while connecting to upstream, client: 172.17.0.1, server: _, request: "CONNECT 34.238.187.50:443 HTTP/1.1", host: "34.238.187.50:443"
2020/09/10 18:02:54 [error] 67#67: *3 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://34.238.187.50:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:02:54 [warn] 67#67: *3 upstream server temporarily disabled while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://34.238.187.50:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:02:56 [alert] 67#67: *4488 1024 worker_connections are not enough while connecting to upstream, client: 172.17.0.1, server: _, request: "CONNECT 52.1.121.53:443 HTTP/1.1", host: "52.1.121.53:443"
2020/09/10 18:02:57 [error] 67#67: *3 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://52.1.121.53:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:02:57 [warn] 67#67: *3 upstream server temporarily disabled while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://52.1.121.53:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:02:58 [alert] 67#67: *5574 1024 worker_connections are not enough while connecting to upstream, client: 172.17.0.1, server: _, request: "CONNECT 35.174.73.84:443 HTTP/1.1", host: "35.174.73.84:443"
2020/09/10 18:02:59 [error] 67#67: *3 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://35.174.73.84:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:02:59 [warn] 67#67: *3 upstream server temporarily disabled while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://35.174.73.84:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:03:00 [alert] 67#67: *6704 1024 worker_connections are not enough while connecting to upstream, client: 172.17.0.1, server: _, request: "CONNECT 52.20.56.50:443 HTTP/1.1", host: "52.20.56.50:443"
2020/09/10 18:03:01 [error] 67#67: *3 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://52.20.56.50:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:03:01 [warn] 67#67: *3 upstream server temporarily disabled while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://52.20.56.50:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:03:03 [alert] 67#67: *7786 1024 worker_connections are not enough while connecting to upstream, client: 172.17.0.1, server: _, request: "CONNECT 34.195.246.183:443 HTTP/1.1", host: "34.195.246.183:443"
2020/09/10 18:03:03 [error] 67#67: *3 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://34.195.246.183:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:03:03 [warn] 67#67: *3 upstream server temporarily disabled while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://34.195.246.183:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:03:06 [alert] 67#67: *8914 1024 worker_connections are not enough while connecting to upstream, client: 172.17.0.1, server: _, request: "CONNECT 54.236.131.166:443 HTTP/1.1", host: "54.236.131.166:443"
2020/09/10 18:03:06 [error] 67#67: *3 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://54.236.131.166:443/v2/", host: "registry-1.docker.io"
2020/09/10 18:03:06 [warn] 67#67: *3 upstream server temporarily disabled while SSL handshaking to upstream, client: 127.0.0.1, server: _, request: "GET /v2/ HTTP/1.1", upstream: "https://54.236.131.166:443/v2/", host: "registry-1.docker.io"
- [10/Sep/2020:18:03:06 +0000] "/v2/" 502 157 "HOST:registry-1.docker.io" "PROXY-HOST:registry-1.docker.io" "UPSTREAM:52.72.232.213:443, 3.218.162.19:443, 34.238.187.50:443, 52.1.121.53:443, 35.174.73.84:443, 52.20.56.50:443, 34.195.246.183:443, 54.236.131.166:443"

Output from docker pull:

➜ docker pull alpine
Using default tag: latest
Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

Further details:

I'm on a Mac, using Docker for Mac. I've configured the proxy settings in Docker to route through the localhost:3128 proxy.

If there's any other information I can help with, please let me know. From what I can tell from digging around the internet, it seems this might be an nginx config issue?
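
One thing the log does show, independent of the handshake failures: the flood of CONNECT requests exhausts nginx's default connection limit ("1024 worker_connections are not enough"). A sketch of raising it, assuming you can mount or rebuild a tweaked nginx.conf (the exact file layout inside the image may differ):

    events {
        # the default is 1024; the CONNECT churn in the log above exhausts it
        worker_connections 4096;
    }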

how can we use multiple gcr.io service accounts

Hi.
I need to use this with different gcr.io service accounts/project_ids ... I have successfully used a single private gcr.io repo with _json_key as the user and the contents of the service account JSON as the password.
This is how I have the user/password configured. I changed some private info:

gcr.io:::_json_key:::{
"type": "service_account",
"project_id": "NTEST",
"private_key_id": "NTEST",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCbF7ebG7fHIoqQ\nP37H3MIn6HMIBpyWMl3\n-----END PRIVATE KEY-----\n",
"client_email": "[email protected]",
"client_id": "NTEST",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/moc-NTEST%40NTEST.iam.gserviceaccount.com"
}

When I added a second gcr.io project_id, I received a duplicate auth repo error from nginx.

I tried to add a second repo with the ;;; delimiter, and the delimiter itself works, but then I get a conflicting parameter error; the example is below. I'm trying to figure out the correct way to add these... :)

Adding Auth for registry 'gcr.io' with user '_json_key'.
Adding Auth for registry 'gcr.io' with user '_json_key'.
Upstream SSL certificate verification enabled.
Testing nginx config...
2020/05/10 16:05:25 [emerg] 64#64: conflicting parameter "gcr.io" in /etc/nginx/docker.auth.map:2
nginx: [emerg] conflicting parameter "gcr.io" in /etc/nginx/docker.auth.map:2
nginx: configuration file /etc/nginx/nginx.conf test failed

Is there a better, more correct way to format this for the different project_ids?

gcr.io:::_json_key:::{
"type": "service_account",
"project_id": "NTEST",
"private_key_id": "378dbac43a397754dde3fd61f61f4d7d21cdf128",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCbF7ebG7fHIoqQ\nP37H3MIn6HMIBpyWMl3\n-----END PRIVATE KEY-----\n",
"client_email": "[email protected]",
"client_id": "NTEST",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/moc-NTEST%40NTEST.iam.gserviceaccount.com"
};;;gcr.io:::_json_key:::{
"type": "service_account",
"project_id": "BTEST",
"private_key_id": "BTEST",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC8DR6q8UgEafb7\n3vQ32EReu+lBPnGEpnyr6z1590PQ=\n-----END PRIVATE KEY-----\n",
"client_email": "[email protected]",
"client_id": "BTEST",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/mc-BTEST%40mc-BTEST.iam.gserviceaccount.com"
}
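
For what it's worth, the conflicting parameter error comes from nginx itself: docker.auth.map is an nginx map keyed by registry hostname, and map keys must be unique, so two credentials for the same gcr.io host can only collide. Roughly what the generated file ends up containing (a sketch; the exact value format is whatever entrypoint.sh writes):

    # /etc/nginx/docker.auth.map (sketch)
    "gcr.io" "<credential for project NTEST>";
    "gcr.io" "<credential for project BTEST>";   # duplicate key: [emerg] conflicting parameter

Since authentication is keyed purely by host, one credential per registry is the model; a single service account granted pull access to both projects might be the simplest way around this.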

Proxy not caching layers

So I ran this:

❯ docker run --rm --name docker_registry_proxy -it \
      -e DEBUG_NGINX=true -e DEBUG=true -e DEBUG_HUB=true \
      -e ENABLE_MANIFEST_CACHE=true \
      -p 0.0.0.0:8081:8081 -p 0.0.0.0:8082:8082 -p 0.0.0.0:3128:3128 \
      -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
      -v $(pwd)/docker_mirror_certs:/ca \
      rpardini/docker-registry-proxy:0.6.0-debug
Adding certificate for registry: docker.caching.proxy.internal
Adding certificate for registry: registry-1.docker.io
Adding certificate for registry: auth.docker.io
Adding certificate for registry: k8s.gcr.io
Adding certificate for registry: gcr.io
Adding certificate for registry: quay.io
INFO: Will create certificate with names DNS:docker.caching.proxy.internal,DNS:registry-1.docker.io,DNS:auth.docker.io,DNS:k8s.gcr.io,DNS:gcr.io,DNS:quay.io
INFO: CA already exists. Good. We'll reuse it.
INFO: Generate IA key
INFO: Create a signing request for the IA: 703dc421b8eb 2020.11.06 18:54
INFO: Sign the IA request with the CA cert and key, producing the IA cert
INFO: Initialize the serial number for signed certificates
INFO: Create the key (w/o passphrase..)
INFO: Create the signing request, using extensions
INFO: Sign the request, using the intermediate cert and key
INFO: Concatenating fullchain.pem...
INFO: Concatenating fullchain_with_key.pem
Adding Auth for registry 'some.authenticated.registry' with user 'oneuser'.
Adding Auth for registry 'another.registry' with user 'user'.
Manifest caching config: ---
    # First tier caching of manifests; configure via MANIFEST_CACHE_PRIMARY_REGEX and MANIFEST_CACHE_PRIMARY_TIME
    location ~ ^/v2/(.*)/manifests/(stable|nightly|production|test) {
        set $docker_proxy_request_type "manifest-primary";
        proxy_cache_valid 10m;
        include "/etc/nginx/nginx.manifest.stale.conf";
    }
    # Secondary tier caching of manifests; configure via MANIFEST_CACHE_SECONDARY_REGEX and MANIFEST_CACHE_SECONDARY_TIME
    location ~ ^/v2/(.*)/manifests/(.*)(\d|\.)+(.*)(\d|\.)+(.*)(\d|\.)+ {
        set $docker_proxy_request_type "manifest-secondary";
        proxy_cache_valid 60d;
        include "/etc/nginx/nginx.manifest.stale.conf";
    }
    # Default tier caching for manifests. Caches for 1h (from MANIFEST_CACHE_DEFAULT_TIME)
    location ~ ^/v2/(.*)/manifests/ {
        set $docker_proxy_request_type "manifest-default";
        proxy_cache_valid 1h;
        include "/etc/nginx/nginx.manifest.stale.conf";
    }
---
Starting in DEBUG MODE (mitmproxy).
Run mitmproxy with reverse pointing to the same certs...
Access mitmweb via http://127.0.0.1:8081/
Debugging outgoing DockerHub connections via mitmproxy on 8082.
Warning, DockerHub outgoing debugging disables upstream SSL verification for all upstreams.
Access mitmweb for outgoing DockerHub requests via http://127.0.0.1:8082/
Starting in DEBUG MODE (nginx).
Upstream SSL certificate verification is DISABLED.
Testing nginx config...
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
2020/11/06 18:54:33 [debug] 71#71: bind() 0.0.0.0:3128 #6
2020/11/06 18:54:33 [debug] 71#71: bind() 0.0.0.0:80 #7
2020/11/06 18:54:33 [debug] 71#71: bind() 0.0.0.0:444 #8
2020/11/06 18:54:33 [debug] 71#71: counter: 00007F622D637080, 1
nginx: configuration file /etc/nginx/nginx.conf test is successful
Starting nginx! Have a nice day.
2020/11/06 18:54:33 [debug] 72#72: bind() 0.0.0.0:3128 #6
2020/11/06 18:54:33 [debug] 72#72: bind() 0.0.0.0:80 #7
2020/11/06 18:54:33 [debug] 72#72: bind() 0.0.0.0:444 #8
2020/11/06 18:54:33 [notice] 72#72: using the "epoll" event method
2020/11/06 18:54:33 [debug] 72#72: counter: 00007FC65B088080, 1
2020/11/06 18:54:33 [notice] 72#72: nginx/1.18.0
2020/11/06 18:54:33 [notice] 72#72: built by gcc 9.3.0 (Alpine 9.3.0)
2020/11/06 18:54:33 [notice] 72#72: OS: Linux 5.4.39-linuxkit
2020/11/06 18:54:33 [notice] 72#72: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2020/11/06 18:54:33 [debug] 72#72: write: 9, 00007FFD0EC223C0, 3, 0
2020/11/06 18:54:33 [debug] 72#72: setproctitle: "nginx: master process /usr/sbin/nginx-debug -g daemon off;"
2020/11/06 18:54:33 [notice] 72#72: start worker processes
2020/11/06 18:54:33 [debug] 72#72: channel 3:9
2020/11/06 18:54:33 [notice] 72#72: start worker process 73
2020/11/06 18:54:33 [debug] 72#72: channel 10:11
2020/11/06 18:54:33 [debug] 73#73: add cleanup: 0000562A14D54FD0
2020/11/06 18:54:33 [debug] 73#73: malloc: 0000562A14D4EB80:8
2020/11/06 18:54:33 [notice] 72#72: start worker process 74
2020/11/06 18:54:33 [debug] 72#72: pass channel s:1 pid:74 fd:10 to s:0 pid:73 fd:3
2020/11/06 18:54:33 [debug] 72#72: channel 12:13
2020/11/06 18:54:33 [debug] 73#73: notify eventfd: 11
2020/11/06 18:54:33 [notice] 72#72: start worker process 75
2020/11/06 18:54:33 [debug] 73#73: eventfd: 12
2020/11/06 18:54:33 [debug] 72#72: pass channel s:2 pid:75 fd:12 to s:0 pid:73 fd:3
2020/11/06 18:54:33 [debug] 74#74: add cleanup: 0000562A14D54FD0
2020/11/06 18:54:33 [debug] 72#72: pass channel s:2 pid:75 fd:12 to s:1 pid:74 fd:10
2020/11/06 18:54:33 [debug] 74#74: malloc: 0000562A14D4EB80:8
2020/11/06 18:54:33 [debug] 72#72: channel 14:15
2020/11/06 18:54:33 [debug] 73#73: testing the EPOLLRDHUP flag: success
2020/11/06 18:54:33 [debug] 73#73: malloc: 0000562A14CF1E80:6144
2020/11/06 18:54:33 [debug] 73#73: malloc: 00007FC65A246020:237568
2020/11/06 18:54:33 [notice] 72#72: start worker process 76
2020/11/06 18:54:33 [debug] 73#73: malloc: 0000562A14D60DE0:98304
2020/11/06 18:54:33 [debug] 72#72: pass channel s:3 pid:76 fd:14 to s:0 pid:73 fd:3
2020/11/06 18:54:33 [debug] 72#72: pass channel s:3 pid:76 fd:14 to s:1 pid:74 fd:10
2020/11/06 18:54:33 [debug] 72#72: pass channel s:3 pid:76 fd:14 to s:2 pid:75 fd:12
2020/11/06 18:54:33 [debug] 72#72: channel 16:17
2020/11/06 18:54:33 [debug] 74#74: notify eventfd: 13
2020/11/06 18:54:33 [debug] 73#73: malloc: 0000562A14D78E00:98304
2020/11/06 18:54:33 [debug] 76#76: add cleanup: 0000562A14D54FD0
2020/11/06 18:54:33 [debug] 74#74: eventfd: 14
2020/11/06 18:54:33 [debug] 76#76: malloc: 0000562A14D4EB80:8
2020/11/06 18:54:33 [debug] 74#74: testing the EPOLLRDHUP flag: success
2020/11/06 18:54:33 [debug] 74#74: malloc: 0000562A14CF1E80:6144
2020/11/06 18:54:33 [notice] 72#72: start worker process 77
2020/11/06 18:54:33 [debug] 72#72: pass channel s:4 pid:77 fd:16 to s:0 pid:73 fd:3
2020/11/06 18:54:33 [debug] 72#72: pass channel s:4 pid:77 fd:16 to s:1 pid:74 fd:10
2020/11/06 18:54:33 [debug] 72#72: pass channel s:4 pid:77 fd:16 to s:2 pid:75 fd:12
2020/11/06 18:54:33 [debug] 72#72: pass channel s:4 pid:77 fd:16 to s:3 pid:76 fd:14
2020/11/06 18:54:33 [debug] 72#72: channel 18:19
2020/11/06 18:54:33 [debug] 73#73: epoll add event: fd:6 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 73#73: epoll add event: fd:7 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 73#73: epoll add event: fd:8 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 74#74: malloc: 00007FC65A246020:237568
2020/11/06 18:54:33 [debug] 76#76: notify eventfd: 17
2020/11/06 18:54:33 [debug] 73#73: epoll add event: fd:9 op:1 ev:00002001
2020/11/06 18:54:33 [debug] 73#73: setproctitle: "nginx: worker process"
2020/11/06 18:54:33 [debug] 74#74: malloc: 0000562A14D60DE0:98304
2020/11/06 18:54:33 [debug] 73#73: worker cycle
2020/11/06 18:54:33 [debug] 76#76: eventfd: 18
2020/11/06 18:54:33 [notice] 72#72: start worker process 78
2020/11/06 18:54:33 [debug] 73#73: epoll timer: -1
2020/11/06 18:54:33 [debug] 73#73: epoll: fd:9 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 72#72: pass channel s:5 pid:78 fd:18 to s:0 pid:73 fd:3
2020/11/06 18:54:33 [debug] 73#73: channel handler
2020/11/06 18:54:33 [debug] 72#72: pass channel s:5 pid:78 fd:18 to s:1 pid:74 fd:10
2020/11/06 18:54:33 [debug] 72#72: pass channel s:5 pid:78 fd:18 to s:2 pid:75 fd:12
2020/11/06 18:54:33 [debug] 73#73: channel: 32
2020/11/06 18:54:33 [debug] 76#76: testing the EPOLLRDHUP flag: success
2020/11/06 18:54:33 [debug] 72#72: pass channel s:5 pid:78 fd:18 to s:3 pid:76 fd:14
2020/11/06 18:54:33 [debug] 73#73: channel command: 1
2020/11/06 18:54:33 [debug] 73#73: get channel s:1 pid:74 fd:3
2020/11/06 18:54:33 [debug] 72#72: pass channel s:5 pid:78 fd:18 to s:4 pid:77 fd:16
2020/11/06 18:54:33 [debug] 73#73: channel: 32
2020/11/06 18:54:33 [debug] 73#73: channel command: 1
2020/11/06 18:54:33 [debug] 73#73: get channel s:2 pid:75 fd:13
2020/11/06 18:54:33 [debug] 72#72: channel 20:21
2020/11/06 18:54:33 [debug] 73#73: channel: 32
2020/11/06 18:54:33 [debug] 76#76: malloc: 0000562A14CF1E80:6144
2020/11/06 18:54:33 [debug] 74#74: malloc: 0000562A14D78E00:98304
2020/11/06 18:54:33 [debug] 73#73: channel command: 1
2020/11/06 18:54:33 [debug] 73#73: get channel s:3 pid:76 fd:14
2020/11/06 18:54:33 [debug] 73#73: channel: 32
2020/11/06 18:54:33 [debug] 73#73: channel command: 1
2020/11/06 18:54:33 [debug] 73#73: get channel s:4 pid:77 fd:15
2020/11/06 18:54:33 [debug] 73#73: channel: 32
2020/11/06 18:54:33 [debug] 76#76: malloc: 00007FC65A246020:237568
2020/11/06 18:54:33 [notice] 72#72: start worker process 79
2020/11/06 18:54:33 [debug] 73#73: channel command: 1
2020/11/06 18:54:33 [debug] 73#73: get channel s:5 pid:78 fd:16
2020/11/06 18:54:33 [debug] 73#73: channel: -2
2020/11/06 18:54:33 [debug] 72#72: pass channel s:6 pid:79 fd:20 to s:0 pid:73 fd:3
2020/11/06 18:54:33 [debug] 73#73: timer delta: 11
2020/11/06 18:54:33 [debug] 73#73: worker cycle
2020/11/06 18:54:33 [debug] 73#73: epoll timer: -1
2020/11/06 18:54:33 [debug] 72#72: pass channel s:6 pid:79 fd:20 to s:1 pid:74 fd:10
2020/11/06 18:54:33 [debug] 76#76: malloc: 0000562A14D60DE0:98304
2020/11/06 18:54:33 [debug] 72#72: pass channel s:6 pid:79 fd:20 to s:2 pid:75 fd:12
2020/11/06 18:54:33 [debug] 73#73: epoll: fd:9 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 72#72: pass channel s:6 pid:79 fd:20 to s:3 pid:76 fd:14
2020/11/06 18:54:33 [debug] 73#73: channel handler
2020/11/06 18:54:33 [debug] 72#72: pass channel s:6 pid:79 fd:20 to s:4 pid:77 fd:16
2020/11/06 18:54:33 [debug] 73#73: channel: 32
2020/11/06 18:54:33 [debug] 73#73: channel command: 1
2020/11/06 18:54:33 [debug] 72#72: pass channel s:6 pid:79 fd:20 to s:5 pid:78 fd:18
2020/11/06 18:54:33 [debug] 73#73: get channel s:6 pid:79 fd:17
2020/11/06 18:54:33 [debug] 73#73: channel: -2
2020/11/06 18:54:33 [debug] 73#73: timer delta: 1
2020/11/06 18:54:33 [debug] 72#72: channel 22:23
2020/11/06 18:54:33 [debug] 73#73: worker cycle
2020/11/06 18:54:33 [debug] 73#73: epoll timer: -1
2020/11/06 18:54:33 [debug] 78#78: add cleanup: 0000562A14D54FD0
2020/11/06 18:54:33 [debug] 78#78: malloc: 0000562A14D4EB80:8
2020/11/06 18:54:33 [debug] 74#74: epoll add event: fd:6 op:1 ev:10000001
2020/11/06 18:54:33 [notice] 72#72: start worker process 80
2020/11/06 18:54:33 [debug] 74#74: epoll add event: fd:7 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 76#76: malloc: 0000562A14D78E00:98304
2020/11/06 18:54:33 [debug] 72#72: pass channel s:7 pid:80 fd:22 to s:0 pid:73 fd:3
2020/11/06 18:54:33 [debug] 72#72: pass channel s:7 pid:80 fd:22 to s:1 pid:74 fd:10
2020/11/06 18:54:33 [debug] 74#74: epoll add event: fd:8 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 73#73: epoll: fd:9 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 73#73: channel handler
2020/11/06 18:54:33 [debug] 72#72: pass channel s:7 pid:80 fd:22 to s:2 pid:75 fd:12
2020/11/06 18:54:33 [debug] 73#73: channel: 32
2020/11/06 18:54:33 [debug] 73#73: channel command: 1
2020/11/06 18:54:33 [debug] 73#73: get channel s:7 pid:80 fd:18
2020/11/06 18:54:33 [debug] 73#73: channel: -2
2020/11/06 18:54:33 [debug] 74#74: epoll add event: fd:11 op:1 ev:00002001
2020/11/06 18:54:33 [debug] 73#73: timer delta: 0
2020/11/06 18:54:33 [debug] 72#72: pass channel s:7 pid:80 fd:22 to s:3 pid:76 fd:14
2020/11/06 18:54:33 [debug] 73#73: worker cycle
2020/11/06 18:54:33 [debug] 73#73: epoll timer: -1
2020/11/06 18:54:33 [debug] 72#72: pass channel s:7 pid:80 fd:22 to s:4 pid:77 fd:16
2020/11/06 18:54:33 [debug] 72#72: pass channel s:7 pid:80 fd:22 to s:5 pid:78 fd:18
2020/11/06 18:54:33 [debug] 74#74: setproctitle: "nginx: worker process"
2020/11/06 18:54:33 [debug] 78#78: notify eventfd: 21
2020/11/06 18:54:33 [debug] 74#74: worker cycle
2020/11/06 18:54:33 [debug] 72#72: pass channel s:7 pid:80 fd:22 to s:6 pid:79 fd:20
2020/11/06 18:54:33 [debug] 74#74: epoll timer: -1
2020/11/06 18:54:33 [debug] 74#74: epoll: fd:11 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 78#78: eventfd: 22
2020/11/06 18:54:33 [debug] 72#72: channel 24:25
2020/11/06 18:54:33 [debug] 74#74: channel handler
2020/11/06 18:54:33 [debug] 74#74: channel: 32
2020/11/06 18:54:33 [debug] 74#74: channel command: 1
2020/11/06 18:54:33 [debug] 74#74: get channel s:2 pid:75 fd:9
2020/11/06 18:54:33 [debug] 74#74: channel: 32
2020/11/06 18:54:33 [debug] 74#74: channel command: 1
2020/11/06 18:54:33 [debug] 74#74: get channel s:3 pid:76 fd:10
2020/11/06 18:54:33 [debug] 74#74: channel: 32
2020/11/06 18:54:33 [notice] 72#72: start cache manager process 81
2020/11/06 18:54:33 [debug] 74#74: channel command: 1
2020/11/06 18:54:33 [debug] 74#74: get channel s:4 pid:77 fd:15
2020/11/06 18:54:33 [debug] 72#72: pass channel s:8 pid:81 fd:24 to s:0 pid:73 fd:3
2020/11/06 18:54:33 [debug] 77#77: add cleanup: 0000562A14D54FD0
2020/11/06 18:54:33 [debug] 78#78: testing the EPOLLRDHUP flag: success
2020/11/06 18:54:33 [debug] 72#72: pass channel s:8 pid:81 fd:24 to s:1 pid:74 fd:10
2020/11/06 18:54:33 [debug] 76#76: epoll add event: fd:6 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 76#76: epoll add event: fd:7 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 76#76: epoll add event: fd:8 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 73#73: epoll: fd:9 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 76#76: epoll add event: fd:15 op:1 ev:00002001
2020/11/06 18:54:33 [debug] 73#73: channel handler
2020/11/06 18:54:33 [debug] 76#76: setproctitle: "nginx: worker process"
2020/11/06 18:54:33 [debug] 76#76: worker cycle
2020/11/06 18:54:33 [debug] 76#76: epoll timer: -1
2020/11/06 18:54:33 [debug] 79#79: add cleanup: 0000562A14D54FD0
2020/11/06 18:54:33 [debug] 76#76: epoll: fd:15 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 74#74: channel: 32
2020/11/06 18:54:33 [debug] 79#79: malloc: 0000562A14D4EB80:8
2020/11/06 18:54:33 [debug] 72#72: pass channel s:8 pid:81 fd:24 to s:2 pid:75 fd:12
2020/11/06 18:54:33 [debug] 72#72: pass channel s:8 pid:81 fd:24 to s:3 pid:76 fd:14
2020/11/06 18:54:33 [debug] 72#72: pass channel s:8 pid:81 fd:24 to s:4 pid:77 fd:16
2020/11/06 18:54:33 [debug] 78#78: malloc: 0000562A14CF1E80:6144
2020/11/06 18:54:33 [debug] 81#81: close listening 0.0.0.0:3128 #6
2020/11/06 18:54:33 [debug] 81#81: close listening 0.0.0.0:80 #7
2020/11/06 18:54:33 [debug] 81#81: close listening 0.0.0.0:444 #8
2020/11/06 18:54:33 [debug] 74#74: channel command: 1
2020/11/06 18:54:33 [debug] 81#81: add cleanup: 0000562A14D54FD0
2020/11/06 18:54:33 [debug] 74#74: get channel s:5 pid:78 fd:16
2020/11/06 18:54:33 [debug] 78#78: malloc: 00007FC65A246020:237568
2020/11/06 18:54:33 [debug] 81#81: malloc: 0000562A14D4EB80:8
2020/11/06 18:54:33 [debug] 74#74: channel: 32
2020/11/06 18:54:33 [debug] 74#74: channel command: 1
2020/11/06 18:54:33 [debug] 74#74: get channel s:6 pid:79 fd:17
2020/11/06 18:54:33 [debug] 74#74: channel: 32
2020/11/06 18:54:33 [debug] 78#78: malloc: 0000562A14D60DE0:98304
2020/11/06 18:54:33 [debug] 74#74: channel command: 1
2020/11/06 18:54:33 [debug] 74#74: get channel s:7 pid:80 fd:18
2020/11/06 18:54:33 [debug] 74#74: channel: 32
2020/11/06 18:54:33 [debug] 81#81: notify eventfd: 7
2020/11/06 18:54:33 [debug] 74#74: channel command: 1
2020/11/06 18:54:33 [debug] 74#74: get channel s:8 pid:81 fd:19
2020/11/06 18:54:33 [debug] 81#81: eventfd: 8
2020/11/06 18:54:33 [debug] 78#78: malloc: 0000562A14D78E00:98304
2020/11/06 18:54:33 [debug] 74#74: channel: -2
2020/11/06 18:54:33 [debug] 74#74: timer delta: 13
2020/11/06 18:54:33 [debug] 74#74: worker cycle
2020/11/06 18:54:33 [debug] 74#74: epoll timer: -1
2020/11/06 18:54:33 [debug] 80#80: add cleanup: 0000562A14D54FD0
2020/11/06 18:54:33 [debug] 81#81: testing the EPOLLRDHUP flag: success
2020/11/06 18:54:33 [debug] 73#73: channel: 32
2020/11/06 18:54:33 [debug] 73#73: channel command: 1
2020/11/06 18:54:33 [debug] 73#73: get channel s:8 pid:81 fd:19
2020/11/06 18:54:33 [debug] 73#73: channel: -2
2020/11/06 18:54:33 [debug] 73#73: timer delta: 1
2020/11/06 18:54:33 [debug] 81#81: malloc: 0000562A14CF1E80:6144
2020/11/06 18:54:33 [debug] 73#73: worker cycle
2020/11/06 18:54:33 [debug] 73#73: epoll timer: -1
2020/11/06 18:54:33 [debug] 80#80: malloc: 0000562A14D4EB80:8
2020/11/06 18:54:33 [debug] 76#76: channel handler
2020/11/06 18:54:33 [debug] 72#72: pass channel s:8 pid:81 fd:24 to s:5 pid:78 fd:18
2020/11/06 18:54:33 [debug] 72#72: pass channel s:8 pid:81 fd:24 to s:6 pid:79 fd:20
2020/11/06 18:54:33 [debug] 77#77: malloc: 0000562A14D4EB80:8
2020/11/06 18:54:33 [debug] 79#79: notify eventfd: 23
2020/11/06 18:54:33 [debug] 79#79: eventfd: 24
2020/11/06 18:54:33 [debug] 79#79: testing the EPOLLRDHUP flag: success
2020/11/06 18:54:33 [debug] 79#79: malloc: 0000562A14CF1E80:6144
2020/11/06 18:54:33 [debug] 79#79: malloc: 00007FC65A246020:237568
2020/11/06 18:54:33 [debug] 79#79: malloc: 0000562A14D60DE0:98304
2020/11/06 18:54:33 [debug] 79#79: malloc: 0000562A14D78E00:98304
2020/11/06 18:54:33 [debug] 80#80: notify eventfd: 25
2020/11/06 18:54:33 [debug] 77#77: notify eventfd: 19
2020/11/06 18:54:33 [debug] 75#75: add cleanup: 0000562A14D54FD0
2020/11/06 18:54:33 [debug] 80#80: eventfd: 26
2020/11/06 18:54:33 [debug] 81#81: malloc: 0000562A14D60DE0:118784
2020/11/06 18:54:33 [debug] 81#81: malloc: 0000562A14D7DE00:49152
2020/11/06 18:54:33 [debug] 76#76: channel: 32
2020/11/06 18:54:33 [debug] 81#81: malloc: 0000562A14D89E20:49152
2020/11/06 18:54:33 [debug] 72#72: pass channel s:8 pid:81 fd:24 to s:7 pid:80 fd:22
2020/11/06 18:54:33 [debug] 76#76: channel command: 1
2020/11/06 18:54:33 [debug] 80#80: testing the EPOLLRDHUP flag: success
2020/11/06 18:54:33 [debug] 76#76: get channel s:4 pid:77 fd:9
2020/11/06 18:54:33 [debug] 76#76: channel: 32
2020/11/06 18:54:33 [debug] 72#72: channel 26:27
2020/11/06 18:54:33 [debug] 79#79: epoll add event: fd:6 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 80#80: malloc: 0000562A14CF1E80:6144
2020/11/06 18:54:33 [debug] 79#79: epoll add event: fd:7 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 79#79: epoll add event: fd:8 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 79#79: epoll add event: fd:21 op:1 ev:00002001
2020/11/06 18:54:33 [debug] 79#79: setproctitle: "nginx: worker process"
2020/11/06 18:54:33 [debug] 79#79: worker cycle
2020/11/06 18:54:33 [debug] 79#79: epoll timer: -1
2020/11/06 18:54:33 [debug] 79#79: epoll: fd:21 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 79#79: channel handler
2020/11/06 18:54:33 [notice] 72#72: start cache loader process 82
2020/11/06 18:54:33 [debug] 79#79: channel: 32
2020/11/06 18:54:33 [debug] 72#72: pass channel s:9 pid:82 fd:26 to s:0 pid:73 fd:3
2020/11/06 18:54:33 [debug] 79#79: channel command: 1
2020/11/06 18:54:33 [debug] 79#79: get channel s:7 pid:80 fd:9
2020/11/06 18:54:33 [debug] 80#80: malloc: 00007FC65A246020:237568
2020/11/06 18:54:33 [debug] 79#79: channel: 32
2020/11/06 18:54:33 [debug] 79#79: channel command: 1
2020/11/06 18:54:33 [debug] 72#72: pass channel s:9 pid:82 fd:26 to s:1 pid:74 fd:10
2020/11/06 18:54:33 [debug] 73#73: epoll: fd:9 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 73#73: channel handler
2020/11/06 18:54:33 [debug] 72#72: pass channel s:9 pid:82 fd:26 to s:2 pid:75 fd:12
2020/11/06 18:54:33 [debug] 73#73: channel: 32
2020/11/06 18:54:33 [debug] 73#73: channel command: 1
2020/11/06 18:54:33 [debug] 72#72: pass channel s:9 pid:82 fd:26 to s:3 pid:76 fd:14
2020/11/06 18:54:33 [debug] 73#73: get channel s:9 pid:82 fd:20
2020/11/06 18:54:33 [debug] 74#74: epoll: fd:11 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 73#73: channel: -2
2020/11/06 18:54:33 [debug] 72#72: pass channel s:9 pid:82 fd:26 to s:4 pid:77 fd:16
2020/11/06 18:54:33 [debug] 74#74: channel handler
2020/11/06 18:54:33 [debug] 73#73: timer delta: 3
2020/11/06 18:54:33 [debug] 73#73: worker cycle
2020/11/06 18:54:33 [debug] 72#72: pass channel s:9 pid:82 fd:26 to s:5 pid:78 fd:18
2020/11/06 18:54:33 [debug] 74#74: channel: 32
2020/11/06 18:54:33 [debug] 75#75: malloc: 0000562A14D4EB80:8
2020/11/06 18:54:33 [debug] 74#74: channel command: 1
2020/11/06 18:54:33 [debug] 77#77: eventfd: 20
2020/11/06 18:54:33 [debug] 75#75: notify eventfd: 15
2020/11/06 18:54:33 [debug] 75#75: eventfd: 16
2020/11/06 18:54:33 [debug] 76#76: channel command: 1
2020/11/06 18:54:33 [debug] 76#76: get channel s:5 pid:78 fd:11
2020/11/06 18:54:33 [debug] 73#73: epoll timer: -1
2020/11/06 18:54:33 [debug] 77#77: testing the EPOLLRDHUP flag: success
2020/11/06 18:54:33 [debug] 72#72: pass channel s:9 pid:82 fd:26 to s:6 pid:79 fd:20
2020/11/06 18:54:33 [debug] 72#72: pass channel s:9 pid:82 fd:26 to s:7 pid:80 fd:22
2020/11/06 18:54:33 [debug] 77#77: malloc: 0000562A14CF1E80:6144
2020/11/06 18:54:33 [debug] 72#72: pass channel s:9 pid:82 fd:26 to s:8 pid:81 fd:24
2020/11/06 18:54:33 [debug] 78#78: epoll add event: fd:6 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 77#77: malloc: 00007FC65A246020:237568
2020/11/06 18:54:33 [debug] 78#78: epoll add event: fd:7 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 72#72: sigsuspend
2020/11/06 18:54:33 [debug] 78#78: epoll add event: fd:8 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 77#77: malloc: 0000562A14D60DE0:98304
2020/11/06 18:54:33 [debug] 78#78: epoll add event: fd:19 op:1 ev:00002001
2020/11/06 18:54:33 [debug] 78#78: setproctitle: "nginx: worker process"
2020/11/06 18:54:33 [debug] 78#78: worker cycle
2020/11/06 18:54:33 [debug] 78#78: epoll timer: -1
2020/11/06 18:54:33 [debug] 78#78: epoll: fd:19 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 78#78: channel handler
2020/11/06 18:54:33 [debug] 78#78: channel: 32
2020/11/06 18:54:33 [debug] 78#78: channel command: 1
2020/11/06 18:54:33 [debug] 78#78: get channel s:6 pid:79 fd:9
2020/11/06 18:54:33 [debug] 78#78: channel: 32
2020/11/06 18:54:33 [debug] 78#78: channel command: 1
2020/11/06 18:54:33 [debug] 78#78: get channel s:7 pid:80 fd:11
2020/11/06 18:54:33 [debug] 78#78: channel: 32
2020/11/06 18:54:33 [debug] 78#78: channel command: 1
2020/11/06 18:54:33 [debug] 78#78: get channel s:8 pid:81 fd:13
2020/11/06 18:54:33 [debug] 78#78: channel: 32
2020/11/06 18:54:33 [debug] 78#78: channel command: 1
2020/11/06 18:54:33 [debug] 78#78: get channel s:9 pid:82 fd:15
2020/11/06 18:54:33 [debug] 78#78: channel: -2
2020/11/06 18:54:33 [debug] 78#78: timer delta: 17
2020/11/06 18:54:33 [debug] 78#78: worker cycle
2020/11/06 18:54:33 [debug] 79#79: get channel s:8 pid:81 fd:11
2020/11/06 18:54:33 [debug] 78#78: epoll timer: -1
2020/11/06 18:54:33 [debug] 77#77: malloc: 0000562A14D78E00:98304
2020/11/06 18:54:33 [debug] 82#82: close listening 0.0.0.0:3128 #6
2020/11/06 18:54:33 [debug] 81#81: epoll add event: fd:25 op:1 ev:00002001
2020/11/06 18:54:33 [debug] 81#81: setproctitle: "nginx: cache manager process"
2020/11/06 18:54:33 [debug] 82#82: close listening 0.0.0.0:80 #7
2020/11/06 18:54:33 [debug] 82#82: close listening 0.0.0.0:444 #8
2020/11/06 18:54:33 [debug] 75#75: testing the EPOLLRDHUP flag: success
2020/11/06 18:54:33 [debug] 82#82: add cleanup: 0000562A14D54FD0
2020/11/06 18:54:33 [debug] 75#75: malloc: 0000562A14CF1E80:6144
2020/11/06 18:54:33 [debug] 82#82: malloc: 0000562A14D4EB80:8
2020/11/06 18:54:33 [debug] 75#75: malloc: 00007FC65A246020:237568
2020/11/06 18:54:33 [debug] 75#75: malloc: 0000562A14D60DE0:98304
2020/11/06 18:54:33 [debug] 77#77: epoll add event: fd:6 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 77#77: epoll add event: fd:7 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 77#77: epoll add event: fd:8 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 77#77: epoll add event: fd:17 op:1 ev:00002001
2020/11/06 18:54:33 [debug] 77#77: setproctitle: "nginx: worker process"
2020/11/06 18:54:33 [debug] 77#77: worker cycle
2020/11/06 18:54:33 [debug] 77#77: epoll timer: -1
2020/11/06 18:54:33 [debug] 77#77: epoll: fd:17 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 77#77: channel handler
2020/11/06 18:54:33 [debug] 77#77: channel: 32
2020/11/06 18:54:33 [debug] 77#77: channel command: 1
2020/11/06 18:54:33 [debug] 77#77: get channel s:5 pid:78 fd:9
2020/11/06 18:54:33 [debug] 77#77: channel: 32
2020/11/06 18:54:33 [debug] 77#77: channel command: 1
2020/11/06 18:54:33 [debug] 77#77: get channel s:6 pid:79 fd:11
2020/11/06 18:54:33 [debug] 77#77: channel: 32
2020/11/06 18:54:33 [debug] 77#77: channel command: 1
2020/11/06 18:54:33 [debug] 77#77: get channel s:7 pid:80 fd:13
2020/11/06 18:54:33 [debug] 77#77: channel: 32
2020/11/06 18:54:33 [debug] 77#77: channel command: 1
2020/11/06 18:54:33 [debug] 77#77: get channel s:8 pid:81 fd:15
2020/11/06 18:54:33 [debug] 77#77: channel: 32
2020/11/06 18:54:33 [debug] 77#77: channel command: 1
2020/11/06 18:54:33 [debug] 77#77: get channel s:9 pid:82 fd:16
2020/11/06 18:54:33 [debug] 77#77: channel: -2
2020/11/06 18:54:33 [debug] 77#77: timer delta: 19
2020/11/06 18:54:33 [debug] 77#77: worker cycle
2020/11/06 18:54:33 [debug] 77#77: epoll timer: -1
2020/11/06 18:54:33 [debug] 74#74: get channel s:9 pid:82 fd:20
2020/11/06 18:54:33 [debug] 76#76: channel: 32
2020/11/06 18:54:33 [debug] 80#80: malloc: 0000562A14D60DE0:98304
2020/11/06 18:54:33 [debug] 79#79: channel: 32
2020/11/06 18:54:33 [debug] 79#79: channel command: 1
2020/11/06 18:54:33 [debug] 81#81: event timer add: -1: 0:64597
2020/11/06 18:54:33 [debug] 81#81: epoll timer: 0
2020/11/06 18:54:33 [debug] 74#74: channel: -2
2020/11/06 18:54:33 [debug] 75#75: malloc: 0000562A14D78E00:98304
2020/11/06 18:54:33 [debug] 82#82: notify eventfd: 7
2020/11/06 18:54:33 [debug] 82#82: eventfd: 8
2020/11/06 18:54:33 [debug] 76#76: channel command: 1
2020/11/06 18:54:33 [debug] 76#76: get channel s:6 pid:79 fd:13
2020/11/06 18:54:33 [debug] 82#82: testing the EPOLLRDHUP flag: success
2020/11/06 18:54:33 [debug] 76#76: channel: 32
2020/11/06 18:54:33 [debug] 80#80: malloc: 0000562A14D78E00:98304
2020/11/06 18:54:33 [debug] 76#76: channel command: 1
2020/11/06 18:54:33 [debug] 79#79: get channel s:9 pid:82 fd:13
2020/11/06 18:54:33 [debug] 79#79: channel: -2
2020/11/06 18:54:33 [debug] 81#81: epoll: fd:25 ev:0001 d:0000562A14D60DE0
2020/11/06 18:54:33 [debug] 79#79: timer delta: 16
2020/11/06 18:54:33 [debug] 80#80: epoll add event: fd:6 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 81#81: channel handler
2020/11/06 18:54:33 [debug] 79#79: worker cycle
2020/11/06 18:54:33 [debug] 79#79: epoll timer: -1
2020/11/06 18:54:33 [debug] 80#80: epoll add event: fd:7 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 81#81: channel: 32
2020/11/06 18:54:33 [debug] 81#81: channel command: 1
2020/11/06 18:54:33 [debug] 76#76: get channel s:7 pid:80 fd:14
2020/11/06 18:54:33 [debug] 80#80: epoll add event: fd:8 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 81#81: get channel s:9 pid:82 fd:9
2020/11/06 18:54:33 [debug] 81#81: channel: -2
2020/11/06 18:54:33 [debug] 76#76: channel: 32
2020/11/06 18:54:33 [debug] 81#81: timer delta: 20
2020/11/06 18:54:33 [debug] 76#76: channel command: 1
2020/11/06 18:54:33 [debug] 81#81: event timer del: -1: 64597
2020/11/06 18:54:33 [debug] 76#76: get channel s:8 pid:81 fd:19
2020/11/06 18:54:33 [debug] 76#76: channel: 32
2020/11/06 18:54:33 [debug] 76#76: channel command: 1
2020/11/06 18:54:33 [debug] 76#76: get channel s:9 pid:82 fd:20
2020/11/06 18:54:33 [debug] 76#76: channel: -2
2020/11/06 18:54:33 [debug] 81#81: http file cache expire
2020/11/06 18:54:33 [debug] 76#76: timer delta: 13
2020/11/06 18:54:33 [debug] 81#81: malloc: 0000562A14D4EBA0:59
2020/11/06 18:54:33 [debug] 76#76: worker cycle
2020/11/06 18:54:33 [debug] 80#80: epoll add event: fd:23 op:1 ev:00002001
2020/11/06 18:54:33 [debug] 80#80: setproctitle: "nginx: worker process"
2020/11/06 18:54:33 [debug] 80#80: worker cycle
2020/11/06 18:54:33 [debug] 80#80: epoll timer: -1
2020/11/06 18:54:33 [debug] 80#80: epoll: fd:23 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 80#80: channel handler
2020/11/06 18:54:33 [debug] 80#80: channel: 32
2020/11/06 18:54:33 [debug] 80#80: channel command: 1
2020/11/06 18:54:33 [debug] 80#80: get channel s:8 pid:81 fd:9
2020/11/06 18:54:33 [debug] 80#80: channel: 32
2020/11/06 18:54:33 [debug] 80#80: channel command: 1
2020/11/06 18:54:33 [debug] 80#80: get channel s:9 pid:82 fd:11
2020/11/06 18:54:33 [debug] 80#80: channel: -2
2020/11/06 18:54:33 [debug] 80#80: timer delta: 21
2020/11/06 18:54:33 [debug] 80#80: worker cycle
2020/11/06 18:54:33 [debug] 80#80: epoll timer: -1
2020/11/06 18:54:33 [debug] 82#82: malloc: 0000562A14CF1E80:6144
2020/11/06 18:54:33 [debug] 74#74: timer delta: 3
2020/11/06 18:54:33 [debug] 75#75: epoll add event: fd:6 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 75#75: epoll add event: fd:7 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 75#75: epoll add event: fd:8 op:1 ev:10000001
2020/11/06 18:54:33 [debug] 75#75: epoll add event: fd:13 op:1 ev:00002001
2020/11/06 18:54:33 [debug] 75#75: setproctitle: "nginx: worker process"
2020/11/06 18:54:33 [debug] 75#75: worker cycle
2020/11/06 18:54:33 [debug] 75#75: epoll timer: -1
2020/11/06 18:54:33 [debug] 75#75: epoll: fd:13 ev:0001 d:00007FC65A2462D8
2020/11/06 18:54:33 [debug] 75#75: channel handler
2020/11/06 18:54:33 [debug] 75#75: channel: 32
2020/11/06 18:54:33 [debug] 75#75: channel command: 1
2020/11/06 18:54:33 [debug] 75#75: get channel s:3 pid:76 fd:9
2020/11/06 18:54:33 [debug] 75#75: channel: 32
2020/11/06 18:54:33 [debug] 75#75: channel command: 1
2020/11/06 18:54:33 [debug] 75#75: get channel s:4 pid:77 fd:11
2020/11/06 18:54:33 [debug] 75#75: channel: 32
2020/11/06 18:54:33 [debug] 75#75: channel command: 1
2020/11/06 18:54:33 [debug] 74#74: worker cycle
2020/11/06 18:54:33 [debug] 75#75: get channel s:5 pid:78 fd:12
2020/11/06 18:54:33 [debug] 74#74: epoll timer: -1
2020/11/06 18:54:33 [debug] 75#75: channel: 32
2020/11/06 18:54:33 [debug] 75#75: channel command: 1
2020/11/06 18:54:33 [debug] 75#75: get channel s:6 pid:79 fd:17
2020/11/06 18:54:33 [debug] 75#75: channel: 32
2020/11/06 18:54:33 [debug] 75#75: channel command: 1
2020/11/06 18:54:33 [debug] 75#75: get channel s:7 pid:80 fd:18
2020/11/06 18:54:33 [debug] 75#75: channel: 32
2020/11/06 18:54:33 [debug] 75#75: channel command: 1
2020/11/06 18:54:33 [debug] 75#75: get channel s:8 pid:81 fd:19
2020/11/06 18:54:33 [debug] 75#75: channel: 32
2020/11/06 18:54:33 [debug] 75#75: channel command: 1
2020/11/06 18:54:33 [debug] 75#75: get channel s:9 pid:82 fd:20
2020/11/06 18:54:33 [debug] 75#75: channel: -2
2020/11/06 18:54:33 [debug] 75#75: timer delta: 21
2020/11/06 18:54:33 [debug] 75#75: worker cycle
2020/11/06 18:54:33 [debug] 75#75: epoll timer: -1
2020/11/06 18:54:33 [debug] 81#81: shmtx lock
2020/11/06 18:54:33 [debug] 81#81: shmtx unlock
2020/11/06 18:54:33 [debug] 81#81: shmtx lock
2020/11/06 18:54:33 [debug] 81#81: shmtx unlock
2020/11/06 18:54:33 [debug] 81#81: http file cache size: 0 c:0 w:-1
2020/11/06 18:54:33 [debug] 81#81: http file cache manager: 0 e:0 n:10000
2020/11/06 18:54:33 [debug] 81#81: event timer add: -1: 10000:74619
2020/11/06 18:54:33 [debug] 81#81: epoll timer: 10000
2020/11/06 18:54:33 [debug] 76#76: epoll timer: -1
2020/11/06 18:54:33 [debug] 82#82: malloc: 0000562A14D60DE0:118784
2020/11/06 18:54:33 [debug] 82#82: malloc: 0000562A14D7DE00:49152
2020/11/06 18:54:33 [debug] 82#82: malloc: 0000562A14D89E20:49152
2020/11/06 18:54:33 [debug] 82#82: epoll add event: fd:27 op:1 ev:00002001
2020/11/06 18:54:33 [debug] 82#82: setproctitle: "nginx: cache loader process"
2020/11/06 18:54:33 [debug] 82#82: event timer add: -1: 60000:124597
2020/11/06 18:54:33 [debug] 82#82: epoll timer: 60000
2020/11/06 18:54:43 [debug] 81#81: timer delta: 10002
2020/11/06 18:54:43 [debug] 81#81: event timer del: -1: 74619
2020/11/06 18:54:43 [debug] 81#81: http file cache expire
2020/11/06 18:54:43 [debug] 81#81: malloc: 0000562A14D4EBA0:59
2020/11/06 18:54:43 [debug] 81#81: shmtx lock
2020/11/06 18:54:43 [debug] 81#81: shmtx unlock
2020/11/06 18:54:43 [debug] 81#81: shmtx lock
2020/11/06 18:54:43 [debug] 81#81: shmtx unlock
2020/11/06 18:54:43 [debug] 81#81: http file cache size: 0 c:0 w:-1
2020/11/06 18:54:43 [debug] 81#81: http file cache manager: 0 e:0 n:10000
2020/11/06 18:54:43 [debug] 81#81: event timer add: -1: 10000:84621
2020/11/06 18:54:43 [debug] 81#81: epoll timer: 10000
2020/11/06 18:54:53 [debug] 81#81: timer delta: 10002
2020/11/06 18:54:53 [debug] 81#81: event timer del: -1: 84621
2020/11/06 18:54:53 [debug] 81#81: http file cache expire
2020/11/06 18:54:53 [debug] 81#81: malloc: 0000562A14D4EBA0:59
2020/11/06 18:54:53 [debug] 81#81: shmtx lock
2020/11/06 18:54:53 [debug] 81#81: shmtx unlock
2020/11/06 18:54:53 [debug] 81#81: shmtx lock
2020/11/06 18:54:53 [debug] 81#81: shmtx unlock
2020/11/06 18:54:53 [debug] 81#81: http file cache size: 0 c:0 w:-1
2020/11/06 18:54:53 [debug] 81#81: http file cache manager: 0 e:0 n:10000
2020/11/06 18:54:53 [debug] 81#81: event timer add: -1: 10000:94623
2020/11/06 18:54:53 [debug] 81#81: epoll timer: 10000
2020/11/06 18:55:02 [debug] 73#73: epoll: fd:6 ev:0001 d:00007FC65A246020
2020/11/06 18:55:02 [debug] 73#73: accept on 0.0.0.0:3128, ready: 0
2020/11/06 18:55:02 [debug] 73#73: posix_memalign: 0000562A14D4EBA0:512 @16
2020/11/06 18:55:02 [debug] 73#73: *1 accept: 172.17.0.1:46266 fd:21
2020/11/06 18:55:02 [debug] 73#73: *1 event timer add: 21: 60000:153490
2020/11/06 18:55:02 [debug] 73#73: *1 reusable connection: 1
2020/11/06 18:55:02 [debug] 73#73: *1 epoll add event: fd:21 op:1 ev:80002001
2020/11/06 18:55:02 [debug] 73#73: timer delta: 28877
2020/11/06 18:55:02 [debug] 73#73: worker cycle
2020/11/06 18:55:02 [debug] 73#73: epoll timer: 60000
2020/11/06 18:55:02 [debug] 73#73: epoll: fd:21 ev:0001 d:00007FC65A2463C0
2020/11/06 18:55:02 [debug] 73#73: *1 http wait request handler
2020/11/06 18:55:02 [debug] 73#73: *1 malloc: 0000562A14D4E020:1024
2020/11/06 18:55:02 [debug] 73#73: *1 recv: eof:0, avail:-1
2020/11/06 18:55:02 [debug] 73#73: *1 recv: fd:21 251 of 1024
2020/11/06 18:55:02 [debug] 73#73: *1 reusable connection: 0
2020/11/06 18:55:02 [debug] 73#73: *1 posix_memalign: 0000562A14CF36A0:4096 @16
2020/11/06 18:55:02 [debug] 73#73: *1 http process request line
2020/11/06 18:55:02 [debug] 73#73: *1 http request line: "GET /v2/ HTTP/1.1"
2020/11/06 18:55:02 [debug] 73#73: *1 http uri: "/v2/"
2020/11/06 18:55:02 [debug] 73#73: *1 http args: ""
2020/11/06 18:55:02 [debug] 73#73: *1 http exten: ""
2020/11/06 18:55:02 [debug] 73#73: *1 posix_memalign: 0000562A14D90E20:4096 @16
2020/11/06 18:55:02 [debug] 73#73: *1 http process request header line
2020/11/06 18:55:02 [debug] 73#73: *1 http header: "accept-encoding: gzip"
2020/11/06 18:55:02 [debug] 73#73: *1 http header: "connection: close"
2020/11/06 18:55:02 [debug] 73#73: *1 http header: "host: registry.me:3128"
2020/11/06 18:55:02 [debug] 73#73: *1 http header: "user-agent: docker/20.10.0-beta1 go/go1.13.15 git-commit/9c15e82 kernel/5.4.39-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.0-beta1 \(darwin\))"
2020/11/06 18:55:02 [debug] 73#73: *1 http header done
2020/11/06 18:55:02 [debug] 73#73: *1 event timer del: 21: 153490
2020/11/06 18:55:02 [debug] 73#73: *1 generic phase: 0
2020/11/06 18:55:02 [debug] 73#73: *1 generic phase: 1
2020/11/06 18:55:02 [debug] 73#73: *1 rewrite phase: 2
2020/11/06 18:55:02 [debug] 73#73: *1 http script value: "unknown-connect"
2020/11/06 18:55:02 [debug] 73#73: *1 http script set $docker_proxy_request_type
2020/11/06 18:55:02 [debug] 73#73: *1 test location: "/"
2020/11/06 18:55:02 [debug] 73#73: *1 test location: "setup/systemd"
2020/11/06 18:55:02 [debug] 73#73: *1 using configuration "/"
2020/11/06 18:55:02 [debug] 73#73: *1 http cl:-1 max:1048576
2020/11/06 18:55:02 [debug] 73#73: *1 rewrite phase: 4
2020/11/06 18:55:02 [debug] 73#73: *1 http set discard body
2020/11/06 18:55:02 [debug] 73#73: *1 HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Fri, 06 Nov 2020 18:55:02 GMT
Content-Type: application/octet-stream
Content-Length: 59
Connection: close
Content-type: text/plain

2020/11/06 18:55:02 [debug] 73#73: *1 write new buf t:1 f:0 0000562A14D913E0, pos 0000562A14D913E0, size: 183 file: 0, size: 0
2020/11/06 18:55:02 [debug] 73#73: *1 http write filter: l:0 f:0 s:183
2020/11/06 18:55:02 [debug] 73#73: *1 http output filter "/v2/?"
2020/11/06 18:55:02 [debug] 73#73: *1 http copy filter: "/v2/?"
2020/11/06 18:55:02 [debug] 73#73: *1 http postpone filter "/v2/?" 00007FFD0EC21F90
2020/11/06 18:55:02 [debug] 73#73: *1 write old buf t:1 f:0 0000562A14D913E0, pos 0000562A14D913E0, size: 183 file: 0, size: 0
2020/11/06 18:55:02 [debug] 73#73: *1 write new buf t:0 f:0 0000000000000000, pos 0000562A14D0C9BF, size: 59 file: 0, size: 0
2020/11/06 18:55:02 [debug] 73#73: *1 http write filter: l:1 f:0 s:242
2020/11/06 18:55:02 [debug] 73#73: *1 http write filter limit 0
2020/11/06 18:55:02 [debug] 73#73: *1 writev: 242 of 242
2020/11/06 18:55:02 [debug] 73#73: *1 http write filter 0000000000000000
2020/11/06 18:55:02 [debug] 73#73: *1 http copy filter: 0 "/v2/?"
2020/11/06 18:55:02 [debug] 73#73: *1 http finalize request: 0, "/v2/?" a:1, c:1
2020/11/06 18:55:02 [debug] 73#73: *1 http request count:1 blk:0
2020/11/06 18:55:02 [debug] 73#73: *1 http close request
2020/11/06 18:55:02 [debug] 73#73: *1 http log handler
2020/11/06 18:55:02 [debug] 73#73: *1 free: 0000562A14CF36A0, unused: 32
2020/11/06 18:55:02 [debug] 73#73: *1 free: 0000562A14D90E20, unused: 2280
2020/11/06 18:55:02 [debug] 73#73: *1 close http connection: 21
2020/11/06 18:55:02 [debug] 73#73: *1 reusable connection: 0
2020/11/06 18:55:02 [debug] 73#73: *1 free: 0000562A14D4E020
2020/11/06 18:55:02 [debug] 73#73: *1 free: 0000562A14D4EBA0, unused: 136
2020/11/06 18:55:02 [debug] 73#73: timer delta: 0
2020/11/06 18:55:02 [debug] 73#73: worker cycle
2020/11/06 18:55:02 [debug] 73#73: epoll timer: -1
2020/11/06 18:55:02 [debug] 73#73: epoll: fd:6 ev:0001 d:00007FC65A246020
2020/11/06 18:55:02 [debug] 73#73: accept on 0.0.0.0:3128, ready: 0
2020/11/06 18:55:02 [debug] 73#73: posix_memalign: 0000562A14D4EBA0:512 @16
2020/11/06 18:55:02 [debug] 73#73: *2 accept: 172.17.0.1:46272 fd:21
2020/11/06 18:55:02 [debug] 73#73: *2 event timer add: 21: 60000:153498
2020/11/06 18:55:02 [debug] 73#73: *2 reusable connection: 1
2020/11/06 18:55:02 [debug] 73#73: *2 epoll add event: fd:21 op:1 ev:80002001
2020/11/06 18:55:02 [debug] 73#73: timer delta: 8
2020/11/06 18:55:02 [debug] 73#73: worker cycle
2020/11/06 18:55:02 [debug] 73#73: epoll timer: 60000
2020/11/06 18:55:02 [debug] 73#73: epoll: fd:21 ev:0001 d:00007FC65A2463C1
2020/11/06 18:55:02 [debug] 73#73: *2 http wait request handler
2020/11/06 18:55:02 [debug] 73#73: *2 malloc: 0000562A14CE1220:1024
2020/11/06 18:55:02 [debug] 73#73: *2 recv: eof:0, avail:-1
2020/11/06 18:55:02 [debug] 73#73: *2 recv: fd:21 607 of 1024
2020/11/06 18:55:02 [debug] 73#73: *2 reusable connection: 0
2020/11/06 18:55:02 [debug] 73#73: *2 posix_memalign: 0000562A14CF36A0:4096 @16
2020/11/06 18:55:02 [debug] 73#73: *2 http process request line
2020/11/06 18:55:02 [debug] 73#73: *2 http request line: "GET /v2/library/rabbitmq/manifests/latest HTTP/1.1"
2020/11/06 18:55:02 [debug] 73#73: *2 http uri: "/v2/library/rabbitmq/manifests/latest"
2020/11/06 18:55:02 [debug] 73#73: *2 http args: ""
2020/11/06 18:55:02 [debug] 73#73: *2 http exten: ""
2020/11/06 18:55:02 [debug] 73#73: *2 posix_memalign: 0000562A14D90E20:4096 @16
2020/11/06 18:55:02 [debug] 73#73: *2 http process request header line
2020/11/06 18:55:02 [debug] 73#73: *2 http header: "accept: application/json"
2020/11/06 18:55:02 [debug] 73#73: *2 http header: "accept: application/vnd.docker.distribution.manifest.v2+json"
2020/11/06 18:55:02 [debug] 73#73: *2 http header: "accept: application/vnd.docker.distribution.manifest.list.v2+json"
2020/11/06 18:55:02 [debug] 73#73: *2 http header: "accept: application/vnd.oci.image.index.v1+json"
2020/11/06 18:55:02 [debug] 73#73: *2 http header: "accept: application/vnd.oci.image.manifest.v1+json"
2020/11/06 18:55:02 [debug] 73#73: *2 http header: "accept: application/vnd.docker.distribution.manifest.v1+prettyjws"
2020/11/06 18:55:02 [debug] 73#73: *2 http header: "accept-encoding: gzip"
2020/11/06 18:55:02 [debug] 73#73: *2 http header: "connection: close"
2020/11/06 18:55:02 [debug] 73#73: *2 http header: "host: registry.me:3128"
2020/11/06 18:55:02 [debug] 73#73: *2 http header: "user-agent: docker/20.10.0-beta1 go/go1.13.15 git-commit/9c15e82 kernel/5.4.39-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.0-beta1 \(darwin\))"
2020/11/06 18:55:02 [debug] 73#73: *2 http header done
2020/11/06 18:55:02 [debug] 73#73: *2 event timer del: 21: 153498
2020/11/06 18:55:02 [debug] 73#73: *2 generic phase: 0
2020/11/06 18:55:02 [debug] 73#73: *2 generic phase: 1
2020/11/06 18:55:02 [debug] 73#73: *2 rewrite phase: 2
2020/11/06 18:55:02 [debug] 73#73: *2 http script value: "unknown-connect"
2020/11/06 18:55:02 [debug] 73#73: *2 http script set $docker_proxy_request_type
2020/11/06 18:55:02 [debug] 73#73: *2 test location: "/"
2020/11/06 18:55:02 [debug] 73#73: *2 test location: "setup/systemd"
2020/11/06 18:55:02 [debug] 73#73: *2 using configuration "/"
2020/11/06 18:55:02 [debug] 73#73: *2 http cl:-1 max:1048576
2020/11/06 18:55:02 [debug] 73#73: *2 rewrite phase: 4
2020/11/06 18:55:02 [debug] 73#73: *2 http set discard body
2020/11/06 18:55:02 [debug] 73#73: *2 HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Fri, 06 Nov 2020 18:55:02 GMT
Content-Type: application/octet-stream
Content-Length: 59
Connection: close
Content-type: text/plain

2020/11/06 18:55:02 [debug] 73#73: *2 write new buf t:1 f:0 0000562A14D913E0, pos 0000562A14D913E0, size: 183 file: 0, size: 0
2020/11/06 18:55:02 [debug] 73#73: *2 http write filter: l:0 f:0 s:183
2020/11/06 18:55:02 [debug] 73#73: *2 http output filter "/v2/library/rabbitmq/manifests/latest?"
2020/11/06 18:55:02 [debug] 73#73: *2 http copy filter: "/v2/library/rabbitmq/manifests/latest?"
2020/11/06 18:55:02 [debug] 73#73: *2 http postpone filter "/v2/library/rabbitmq/manifests/latest?" 00007FFD0EC21F90
2020/11/06 18:55:02 [debug] 73#73: *2 write old buf t:1 f:0 0000562A14D913E0, pos 0000562A14D913E0, size: 183 file: 0, size: 0
2020/11/06 18:55:02 [debug] 73#73: *2 write new buf t:0 f:0 0000000000000000, pos 0000562A14D0C9BF, size: 59 file: 0, size: 0
2020/11/06 18:55:02 [debug] 73#73: *2 http write filter: l:1 f:0 s:242
2020/11/06 18:55:02 [debug] 73#73: *2 http write filter limit 0
2020/11/06 18:55:02 [debug] 73#73: *2 writev: 242 of 242
2020/11/06 18:55:02 [debug] 73#73: *2 http write filter 0000000000000000
2020/11/06 18:55:02 [debug] 73#73: *2 http copy filter: 0 "/v2/library/rabbitmq/manifests/latest?"
2020/11/06 18:55:02 [debug] 73#73: *2 http finalize request: 0, "/v2/library/rabbitmq/manifests/latest?" a:1, c:1
2020/11/06 18:55:02 [debug] 73#73: *2 http request count:1 blk:0
2020/11/06 18:55:02 [debug] 73#73: *2 http close request
2020/11/06 18:55:02 [debug] 73#73: *2 http log handler
2020/11/06 18:55:02 [debug] 73#73: *2 free: 0000562A14CF36A0, unused: 8
2020/11/06 18:55:02 [debug] 73#73: *2 free: 0000562A14D90E20, unused: 2264
2020/11/06 18:55:02 [debug] 73#73: *2 close http connection: 21
2020/11/06 18:55:02 [debug] 73#73: *2 reusable connection: 0
2020/11/06 18:55:02 [debug] 73#73: *2 free: 0000562A14CE1220
2020/11/06 18:55:02 [debug] 73#73: *2 free: 0000562A14D4EBA0, unused: 136
2020/11/06 18:55:02 [debug] 73#73: timer delta: 0
2020/11/06 18:55:02 [debug] 73#73: worker cycle
2020/11/06 18:55:02 [debug] 73#73: epoll timer: -1
2020/11/06 18:55:03 [debug] 81#81: timer delta: 10005
2020/11/06 18:55:03 [debug] 81#81: event timer del: -1: 94623
2020/11/06 18:55:03 [debug] 81#81: http file cache expire
2020/11/06 18:55:03 [debug] 81#81: malloc: 0000562A14D4EBA0:59
2020/11/06 18:55:03 [debug] 81#81: shmtx lock
2020/11/06 18:55:03 [debug] 81#81: shmtx unlock
2020/11/06 18:55:03 [debug] 81#81: shmtx lock
2020/11/06 18:55:03 [debug] 81#81: shmtx unlock
2020/11/06 18:55:03 [debug] 81#81: http file cache size: 0 c:0 w:-1
2020/11/06 18:55:03 [debug] 81#81: http file cache manager: 0 e:0 n:10000
2020/11/06 18:55:03 [debug] 81#81: event timer add: -1: 10000:104629
2020/11/06 18:55:03 [debug] 81#81: epoll timer: 10000
2020/11/06 18:55:13 [debug] 81#81: timer delta: 10002
2020/11/06 18:55:13 [debug] 81#81: event timer del: -1: 104629
2020/11/06 18:55:13 [debug] 81#81: http file cache expire
2020/11/06 18:55:13 [debug] 81#81: malloc: 0000562A14D4EBA0:59
2020/11/06 18:55:13 [debug] 81#81: shmtx lock
2020/11/06 18:55:13 [debug] 81#81: shmtx unlock
2020/11/06 18:55:13 [debug] 81#81: shmtx lock
2020/11/06 18:55:13 [debug] 81#81: shmtx unlock
2020/11/06 18:55:13 [debug] 81#81: http file cache size: 0 c:0 w:-1
2020/11/06 18:55:13 [debug] 81#81: http file cache manager: 0 e:0 n:10000
2020/11/06 18:55:13 [debug] 81#81: event timer add: -1: 10000:114631
2020/11/06 18:55:13 [debug] 81#81: epoll timer: 10000
2020/11/06 18:55:20 [debug] 73#73: epoll: fd:6 ev:0001 d:00007FC65A246020
2020/11/06 18:55:20 [debug] 73#73: accept on 0.0.0.0:3128, ready: 0
2020/11/06 18:55:20 [debug] 73#73: posix_memalign: 0000562A14D4EBA0:512 @16
2020/11/06 18:55:20 [debug] 73#73: *3 accept: 172.17.0.1:46330 fd:21
2020/11/06 18:55:20 [debug] 73#73: *3 event timer add: 21: 60000:171950
2020/11/06 18:55:20 [debug] 73#73: *3 reusable connection: 1
2020/11/06 18:55:20 [debug] 73#73: *3 epoll add event: fd:21 op:1 ev:80002001
2020/11/06 18:55:20 [debug] 73#73: timer delta: 18452
2020/11/06 18:55:20 [debug] 73#73: worker cycle
2020/11/06 18:55:20 [debug] 73#73: epoll timer: 60000
2020/11/06 18:55:20 [debug] 73#73: epoll: fd:21 ev:0001 d:00007FC65A2463C0
2020/11/06 18:55:20 [debug] 73#73: *3 http wait request handler
2020/11/06 18:55:20 [debug] 73#73: *3 malloc: 0000562A14D4E020:1024
2020/11/06 18:55:20 [debug] 73#73: *3 recv: eof:0, avail:-1
2020/11/06 18:55:20 [debug] 73#73: *3 recv: fd:21 251 of 1024
2020/11/06 18:55:20 [debug] 73#73: *3 reusable connection: 0
2020/11/06 18:55:20 [debug] 73#73: *3 posix_memalign: 0000562A14CF36A0:4096 @16
2020/11/06 18:55:20 [debug] 73#73: *3 http process request line
2020/11/06 18:55:20 [debug] 73#73: *3 http request line: "GET /v2/ HTTP/1.1"
2020/11/06 18:55:20 [debug] 73#73: *3 http uri: "/v2/"
2020/11/06 18:55:20 [debug] 73#73: *3 http args: ""
2020/11/06 18:55:20 [debug] 73#73: *3 http exten: ""
2020/11/06 18:55:20 [debug] 73#73: *3 posix_memalign: 0000562A14D90E20:4096 @16
2020/11/06 18:55:20 [debug] 73#73: *3 http process request header line
2020/11/06 18:55:20 [debug] 73#73: *3 http header: "accept-encoding: gzip"
2020/11/06 18:55:20 [debug] 73#73: *3 http header: "connection: close"
2020/11/06 18:55:20 [debug] 73#73: *3 http header: "host: registry.me:3128"
2020/11/06 18:55:20 [debug] 73#73: *3 http header: "user-agent: docker/20.10.0-beta1 go/go1.13.15 git-commit/9c15e82 kernel/5.4.39-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.0-beta1 \(darwin\))"
2020/11/06 18:55:20 [debug] 73#73: *3 http header done
2020/11/06 18:55:20 [debug] 73#73: *3 event timer del: 21: 171950
2020/11/06 18:55:20 [debug] 73#73: *3 generic phase: 0
2020/11/06 18:55:20 [debug] 73#73: *3 generic phase: 1
2020/11/06 18:55:20 [debug] 73#73: *3 rewrite phase: 2
2020/11/06 18:55:20 [debug] 73#73: *3 http script value: "unknown-connect"
2020/11/06 18:55:20 [debug] 73#73: *3 http script set $docker_proxy_request_type
2020/11/06 18:55:20 [debug] 73#73: *3 test location: "/"
2020/11/06 18:55:20 [debug] 73#73: *3 test location: "setup/systemd"
2020/11/06 18:55:20 [debug] 73#73: *3 using configuration "/"
2020/11/06 18:55:20 [debug] 73#73: *3 http cl:-1 max:1048576
2020/11/06 18:55:20 [debug] 73#73: *3 rewrite phase: 4
2020/11/06 18:55:20 [debug] 73#73: *3 http set discard body
2020/11/06 18:55:20 [debug] 73#73: *3 HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Fri, 06 Nov 2020 18:55:20 GMT
Content-Type: application/octet-stream
Content-Length: 59
Connection: close
Content-type: text/plain

2020/11/06 18:55:20 [debug] 73#73: *3 write new buf t:1 f:0 0000562A14D913E0, pos 0000562A14D913E0, size: 183 file: 0, size: 0
2020/11/06 18:55:20 [debug] 73#73: *3 http write filter: l:0 f:0 s:183
2020/11/06 18:55:20 [debug] 73#73: *3 http output filter "/v2/?"
2020/11/06 18:55:20 [debug] 73#73: *3 http copy filter: "/v2/?"
2020/11/06 18:55:20 [debug] 73#73: *3 http postpone filter "/v2/?" 00007FFD0EC21F90
2020/11/06 18:55:20 [debug] 73#73: *3 write old buf t:1 f:0 0000562A14D913E0, pos 0000562A14D913E0, size: 183 file: 0, size: 0
2020/11/06 18:55:20 [debug] 73#73: *3 write new buf t:0 f:0 0000000000000000, pos 0000562A14D0C9BF, size: 59 file: 0, size: 0
2020/11/06 18:55:20 [debug] 73#73: *3 http write filter: l:1 f:0 s:242
2020/11/06 18:55:20 [debug] 73#73: *3 http write filter limit 0
2020/11/06 18:55:20 [debug] 73#73: *3 writev: 242 of 242
2020/11/06 18:55:20 [debug] 73#73: *3 http write filter 0000000000000000
2020/11/06 18:55:20 [debug] 73#73: *3 http copy filter: 0 "/v2/?"
2020/11/06 18:55:20 [debug] 73#73: *3 http finalize request: 0, "/v2/?" a:1, c:1
2020/11/06 18:55:20 [debug] 73#73: *3 http request count:1 blk:0
2020/11/06 18:55:20 [debug] 73#73: *3 http close request
2020/11/06 18:55:20 [debug] 73#73: *3 http log handler
2020/11/06 18:55:20 [debug] 73#73: *3 free: 0000562A14CF36A0, unused: 32
2020/11/06 18:55:20 [debug] 73#73: *3 free: 0000562A14D90E20, unused: 2280
2020/11/06 18:55:20 [debug] 73#73: *3 close http connection: 21
2020/11/06 18:55:20 [debug] 73#73: *3 reusable connection: 0
2020/11/06 18:55:20 [debug] 73#73: *3 free: 0000562A14D4E020
2020/11/06 18:55:20 [debug] 73#73: *3 free: 0000562A14D4EBA0, unused: 136
2020/11/06 18:55:20 [debug] 73#73: timer delta: 0
2020/11/06 18:55:20 [debug] 73#73: worker cycle
2020/11/06 18:55:20 [debug] 73#73: epoll timer: -1
2020/11/06 18:55:20 [debug] 73#73: epoll: fd:6 ev:0001 d:00007FC65A246020
2020/11/06 18:55:20 [debug] 73#73: accept on 0.0.0.0:3128, ready: 0
2020/11/06 18:55:20 [debug] 73#73: posix_memalign: 0000562A14D4EBA0:512 @16
2020/11/06 18:55:20 [debug] 73#73: *4 accept: 172.17.0.1:46336 fd:21
2020/11/06 18:55:20 [debug] 73#73: *4 event timer add: 21: 60000:171956
2020/11/06 18:55:20 [debug] 73#73: *4 reusable connection: 1
2020/11/06 18:55:20 [debug] 73#73: *4 epoll add event: fd:21 op:1 ev:80002001
2020/11/06 18:55:20 [debug] 73#73: timer delta: 6
2020/11/06 18:55:20 [debug] 73#73: worker cycle
2020/11/06 18:55:20 [debug] 73#73: epoll timer: 60000
2020/11/06 18:55:20 [debug] 73#73: epoll: fd:21 ev:0001 d:00007FC65A2463C1
2020/11/06 18:55:20 [debug] 73#73: *4 http wait request handler
2020/11/06 18:55:20 [debug] 73#73: *4 malloc: 0000562A14CE1220:1024
2020/11/06 18:55:20 [debug] 73#73: *4 recv: eof:0, avail:-1
2020/11/06 18:55:20 [debug] 73#73: *4 recv: fd:21 607 of 1024
2020/11/06 18:55:20 [debug] 73#73: *4 reusable connection: 0
2020/11/06 18:55:20 [debug] 73#73: *4 posix_memalign: 0000562A14CF36A0:4096 @16
2020/11/06 18:55:20 [debug] 73#73: *4 http process request line
2020/11/06 18:55:20 [debug] 73#73: *4 http request line: "GET /v2/library/rabbitmq/manifests/latest HTTP/1.1"
2020/11/06 18:55:20 [debug] 73#73: *4 http uri: "/v2/library/rabbitmq/manifests/latest"
2020/11/06 18:55:20 [debug] 73#73: *4 http args: ""
2020/11/06 18:55:20 [debug] 73#73: *4 http exten: ""
2020/11/06 18:55:20 [debug] 73#73: *4 posix_memalign: 0000562A14D90E20:4096 @16
2020/11/06 18:55:20 [debug] 73#73: *4 http process request header line
2020/11/06 18:55:20 [debug] 73#73: *4 http header: "accept: application/vnd.docker.distribution.manifest.v1+prettyjws"
2020/11/06 18:55:20 [debug] 73#73: *4 http header: "accept: application/json"
2020/11/06 18:55:20 [debug] 73#73: *4 http header: "accept: application/vnd.docker.distribution.manifest.v2+json"
2020/11/06 18:55:20 [debug] 73#73: *4 http header: "accept: application/vnd.docker.distribution.manifest.list.v2+json"
2020/11/06 18:55:20 [debug] 73#73: *4 http header: "accept: application/vnd.oci.image.index.v1+json"
2020/11/06 18:55:20 [debug] 73#73: *4 http header: "accept: application/vnd.oci.image.manifest.v1+json"
2020/11/06 18:55:20 [debug] 73#73: *4 http header: "accept-encoding: gzip"
2020/11/06 18:55:20 [debug] 73#73: *4 http header: "connection: close"
2020/11/06 18:55:20 [debug] 73#73: *4 http header: "host: registry.me:3128"
2020/11/06 18:55:20 [debug] 73#73: *4 http header: "user-agent: docker/20.10.0-beta1 go/go1.13.15 git-commit/9c15e82 kernel/5.4.39-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.0-beta1 \(darwin\))"
2020/11/06 18:55:20 [debug] 73#73: *4 http header done
2020/11/06 18:55:20 [debug] 73#73: *4 event timer del: 21: 171956
2020/11/06 18:55:20 [debug] 73#73: *4 generic phase: 0
2020/11/06 18:55:20 [debug] 73#73: *4 generic phase: 1
2020/11/06 18:55:20 [debug] 73#73: *4 rewrite phase: 2
2020/11/06 18:55:20 [debug] 73#73: *4 http script value: "unknown-connect"
2020/11/06 18:55:20 [debug] 73#73: *4 http script set $docker_proxy_request_type
2020/11/06 18:55:20 [debug] 73#73: *4 test location: "/"
2020/11/06 18:55:20 [debug] 73#73: *4 test location: "setup/systemd"
2020/11/06 18:55:20 [debug] 73#73: *4 using configuration "/"
2020/11/06 18:55:20 [debug] 73#73: *4 http cl:-1 max:1048576
2020/11/06 18:55:20 [debug] 73#73: *4 rewrite phase: 4
2020/11/06 18:55:20 [debug] 73#73: *4 http set discard body
2020/11/06 18:55:20 [debug] 73#73: *4 HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Fri, 06 Nov 2020 18:55:20 GMT
Content-Type: application/octet-stream
Content-Length: 59
Connection: close
Content-type: text/plain

2020/11/06 18:55:20 [debug] 73#73: *4 write new buf t:1 f:0 0000562A14D913E0, pos 0000562A14D913E0, size: 183 file: 0, size: 0
2020/11/06 18:55:20 [debug] 73#73: *4 http write filter: l:0 f:0 s:183
2020/11/06 18:55:20 [debug] 73#73: *4 http output filter "/v2/library/rabbitmq/manifests/latest?"
2020/11/06 18:55:20 [debug] 73#73: *4 http copy filter: "/v2/library/rabbitmq/manifests/latest?"
2020/11/06 18:55:20 [debug] 73#73: *4 http postpone filter "/v2/library/rabbitmq/manifests/latest?" 00007FFD0EC21F90
2020/11/06 18:55:20 [debug] 73#73: *4 write old buf t:1 f:0 0000562A14D913E0, pos 0000562A14D913E0, size: 183 file: 0, size: 0
2020/11/06 18:55:20 [debug] 73#73: *4 write new buf t:0 f:0 0000000000000000, pos 0000562A14D0C9BF, size: 59 file: 0, size: 0
2020/11/06 18:55:20 [debug] 73#73: *4 http write filter: l:1 f:0 s:242
2020/11/06 18:55:20 [debug] 73#73: *4 http write filter limit 0
2020/11/06 18:55:20 [debug] 73#73: *4 writev: 242 of 242
2020/11/06 18:55:20 [debug] 73#73: *4 http write filter 0000000000000000
2020/11/06 18:55:20 [debug] 73#73: *4 http copy filter: 0 "/v2/library/rabbitmq/manifests/latest?"
2020/11/06 18:55:20 [debug] 73#73: *4 http finalize request: 0, "/v2/library/rabbitmq/manifests/latest?" a:1, c:1
2020/11/06 18:55:20 [debug] 73#73: *4 http request count:1 blk:0
2020/11/06 18:55:20 [debug] 73#73: *4 http close request
2020/11/06 18:55:20 [debug] 73#73: *4 http log handler
2020/11/06 18:55:20 [debug] 73#73: *4 free: 0000562A14CF36A0, unused: 8
2020/11/06 18:55:20 [debug] 73#73: *4 free: 0000562A14D90E20, unused: 2264
2020/11/06 18:55:20 [debug] 73#73: *4 close http connection: 21
2020/11/06 18:55:20 [debug] 73#73: *4 reusable connection: 0
2020/11/06 18:55:20 [debug] 73#73: *4 free: 0000562A14CE1220
2020/11/06 18:55:20 [debug] 73#73: *4 free: 0000562A14D4EBA0, unused: 136
2020/11/06 18:55:20 [debug] 73#73: timer delta: 0
2020/11/06 18:55:20 [debug] 73#73: worker cycle
2020/11/06 18:55:20 [debug] 73#73: epoll timer: -1
2020/11/06 18:55:23 [debug] 81#81: timer delta: 10001
2020/11/06 18:55:23 [debug] 81#81: event timer del: -1: 114631
2020/11/06 18:55:23 [debug] 81#81: http file cache expire
2020/11/06 18:55:23 [debug] 81#81: malloc: 0000562A14D4EBA0:59
2020/11/06 18:55:23 [debug] 81#81: shmtx lock
2020/11/06 18:55:23 [debug] 81#81: shmtx unlock
2020/11/06 18:55:23 [debug] 81#81: shmtx lock
2020/11/06 18:55:23 [debug] 81#81: shmtx unlock
2020/11/06 18:55:23 [debug] 81#81: http file cache size: 0 c:0 w:-1
2020/11/06 18:55:23 [debug] 81#81: http file cache manager: 0 e:0 n:10000
2020/11/06 18:55:23 [debug] 81#81: event timer add: -1: 10000:124632
2020/11/06 18:55:23 [debug] 81#81: epoll timer: 10000
2020/11/06 18:55:33 [debug] 81#81: timer delta: 10001
2020/11/06 18:55:33 [debug] 81#81: event timer del: -1: 124632
2020/11/06 18:55:33 [debug] 81#81: http file cache expire
2020/11/06 18:55:33 [debug] 81#81: malloc: 0000562A14D4EBA0:59
2020/11/06 18:55:33 [debug] 81#81: shmtx lock
2020/11/06 18:55:33 [debug] 81#81: shmtx unlock
2020/11/06 18:55:33 [debug] 81#81: shmtx lock
2020/11/06 18:55:33 [debug] 81#81: shmtx unlock
2020/11/06 18:55:33 [debug] 81#81: http file cache size: 0 c:0 w:-1
2020/11/06 18:55:33 [debug] 81#81: http file cache manager: 0 e:0 n:10000
2020/11/06 18:55:33 [debug] 81#81: event timer add: -1: 10000:134633
2020/11/06 18:55:33 [debug] 81#81: epoll timer: 10000
2020/11/06 18:55:33 [debug] 82#82: timer delta: 60043
2020/11/06 18:55:33 [debug] 82#82: event timer del: -1: 124597
2020/11/06 18:55:33 [debug] 82#82: http file cache loader
2020/11/06 18:55:33 [debug] 82#82: walk tree "/docker_mirror_cache"
2020/11/06 18:55:33 [debug] 82#82: tree name 1:"."
2020/11/06 18:55:33 [debug] 82#82: tree name 2:".."
2020/11/06 18:55:33 [notice] 82#82: http file cache: /docker_mirror_cache 0.000M, bsize: 1048576
2020/11/06 18:55:33 [notice] 72#72: signal 17 (SIGCHLD) received from 82
2020/11/06 18:55:33 [notice] 72#72: cache loader process 82 exited with code 0
2020/11/06 18:55:33 [debug] 72#72: shmtx forced unlock
2020/11/06 18:55:33 [debug] 72#72: shmtx forced unlock
2020/11/06 18:55:33 [debug] 72#72: wake up, sigio 0
2020/11/06 18:55:33 [debug] 72#72: reap children
2020/11/06 18:55:33 [debug] 72#72: child: 0 73 e:0 t:0 d:0 r:1 j:0
2020/11/06 18:55:33 [debug] 72#72: child: 1 74 e:0 t:0 d:0 r:1 j:0
2020/11/06 18:55:33 [debug] 72#72: child: 2 75 e:0 t:0 d:0 r:1 j:0
2020/11/06 18:55:33 [debug] 72#72: child: 3 76 e:0 t:0 d:0 r:1 j:0
2020/11/06 18:55:33 [debug] 72#72: child: 4 77 e:0 t:0 d:0 r:1 j:0
2020/11/06 18:55:33 [debug] 72#72: child: 5 78 e:0 t:0 d:0 r:1 j:0
2020/11/06 18:55:33 [debug] 72#72: child: 6 79 e:0 t:0 d:0 r:1 j:0
2020/11/06 18:55:33 [debug] 72#72: child: 7 80 e:0 t:0 d:0 r:1 j:0
2020/11/06 18:55:33 [debug] 72#72: child: 8 81 e:0 t:0 d:0 r:1 j:0
2020/11/06 18:55:33 [debug] 72#72: child: 9 82 e:0 t:1 d:0 r:0 j:0
2020/11/06 18:55:33 [debug] 72#72: pass close channel s:9 pid:82 to:73
2020/11/06 18:55:33 [debug] 72#72: pass close channel s:9 pid:82 to:74
2020/11/06 18:55:33 [debug] 73#73: epoll: fd:9 ev:0001 d:00007FC65A2462D8
2020/11/06 18:55:33 [debug] 72#72: pass close channel s:9 pid:82 to:75
2020/11/06 18:55:33 [debug] 73#73: channel handler
2020/11/06 18:55:33 [debug] 73#73: channel: 32
2020/11/06 18:55:33 [debug] 72#72: pass close channel s:9 pid:82 to:76
2020/11/06 18:55:33 [debug] 73#73: channel command: 2
2020/11/06 18:55:33 [debug] 73#73: close channel s:9 pid:82 our:82 fd:20
2020/11/06 18:55:33 [debug] 73#73: channel: -2
2020/11/06 18:55:33 [debug] 73#73: timer delta: 12700
2020/11/06 18:55:33 [debug] 73#73: worker cycle
2020/11/06 18:55:33 [debug] 72#72: pass close channel s:9 pid:82 to:77
2020/11/06 18:55:33 [debug] 73#73: epoll timer: -1
2020/11/06 18:55:33 [debug] 72#72: pass close channel s:9 pid:82 to:78
2020/11/06 18:55:33 [debug] 76#76: epoll: fd:15 ev:0001 d:00007FC65A2462D8
2020/11/06 18:55:33 [debug] 74#74: epoll: fd:11 ev:0001 d:00007FC65A2462D8
2020/11/06 18:55:33 [debug] 76#76: channel handler
2020/11/06 18:55:33 [debug] 77#77: epoll: fd:17 ev:0001 d:00007FC65A2462D8
2020/11/06 18:55:33 [debug] 74#74: channel handler
2020/11/06 18:55:33 [debug] 72#72: pass close channel s:9 pid:82 to:79
2020/11/06 18:55:33 [debug] 77#77: channel handler
2020/11/06 18:55:33 [debug] 76#76: channel: 32
2020/11/06 18:55:33 [debug] 76#76: channel command: 2
2020/11/06 18:55:33 [debug] 74#74: channel: 32
2020/11/06 18:55:33 [debug] 76#76: close channel s:9 pid:82 our:82 fd:20
2020/11/06 18:55:33 [debug] 77#77: channel: 32
2020/11/06 18:55:33 [debug] 74#74: channel command: 2
2020/11/06 18:55:33 [debug] 77#77: channel command: 2
2020/11/06 18:55:33 [debug] 76#76: channel: -2
2020/11/06 18:55:33 [debug] 74#74: close channel s:9 pid:82 our:82 fd:20
2020/11/06 18:55:33 [debug] 77#77: close channel s:9 pid:82 our:82 fd:16
2020/11/06 18:55:33 [debug] 76#76: timer delta: 60046
2020/11/06 18:55:33 [debug] 76#76: worker cycle
2020/11/06 18:55:33 [debug] 74#74: channel: -2
2020/11/06 18:55:33 [debug] 72#72: pass close channel s:9 pid:82 to:80
2020/11/06 18:55:33 [debug] 74#74: timer delta: 60043
2020/11/06 18:55:33 [debug] 77#77: channel: -2
2020/11/06 18:55:33 [debug] 76#76: epoll timer: -1
2020/11/06 18:55:33 [debug] 74#74: worker cycle
2020/11/06 18:55:33 [debug] 77#77: timer delta: 60040
2020/11/06 18:55:33 [debug] 72#72: pass close channel s:9 pid:82 to:81
2020/11/06 18:55:33 [debug] 74#74: epoll timer: -1
2020/11/06 18:55:33 [debug] 77#77: worker cycle
2020/11/06 18:55:33 [debug] 72#72: sigsuspend
2020/11/06 18:55:33 [debug] 77#77: epoll timer: -1
2020/11/06 18:55:33 [notice] 72#72: signal 29 (SIGIO) received
2020/11/06 18:55:33 [debug] 79#79: epoll: fd:21 ev:0001 d:00007FC65A2462D8
2020/11/06 18:55:33 [debug] 72#72: wake up, sigio 0
2020/11/06 18:55:33 [debug] 79#79: channel handler
2020/11/06 18:55:33 [debug] 72#72: sigsuspend
2020/11/06 18:55:33 [debug] 78#78: epoll: fd:19 ev:0001 d:00007FC65A2462D8
2020/11/06 18:55:33 [debug] 79#79: channel: 32
2020/11/06 18:55:33 [debug] 78#78: channel handler
2020/11/06 18:55:33 [debug] 78#78: channel: 32
2020/11/06 18:55:33 [debug] 80#80: epoll: fd:23 ev:0001 d:00007FC65A2462D8
2020/11/06 18:55:33 [debug] 79#79: channel command: 2
2020/11/06 18:55:33 [debug] 78#78: channel command: 2
2020/11/06 18:55:33 [debug] 81#81: epoll: fd:25 ev:0001 d:0000562A14D60DE0
2020/11/06 18:55:33 [debug] 79#79: close channel s:9 pid:82 our:82 fd:13
2020/11/06 18:55:33 [debug] 81#81: channel handler
2020/11/06 18:55:33 [debug] 79#79: channel: -2
2020/11/06 18:55:33 [debug] 80#80: channel handler
2020/11/06 18:55:33 [debug] 81#81: channel: 32
2020/11/06 18:55:33 [debug] 81#81: channel command: 2
2020/11/06 18:55:33 [debug] 79#79: timer delta: 60043
2020/11/06 18:55:33 [debug] 81#81: close channel s:9 pid:82 our:82 fd:9
2020/11/06 18:55:33 [debug] 79#79: worker cycle
2020/11/06 18:55:33 [debug] 79#79: epoll timer: -1
2020/11/06 18:55:33 [debug] 78#78: close channel s:9 pid:82 our:82 fd:15
2020/11/06 18:55:33 [debug] 75#75: epoll: fd:13 ev:0001 d:00007FC65A2462D8
2020/11/06 18:55:33 [debug] 80#80: channel: 32
2020/11/06 18:55:33 [debug] 75#75: channel handler
2020/11/06 18:55:33 [debug] 81#81: channel: -2
2020/11/06 18:55:33 [debug] 78#78: channel: -2
2020/11/06 18:55:33 [debug] 81#81: timer delta: 23
2020/11/06 18:55:33 [debug] 80#80: channel command: 2
2020/11/06 18:55:33 [debug] 81#81: epoll timer: 9977
2020/11/06 18:55:33 [debug] 78#78: timer delta: 60042
2020/11/06 18:55:33 [debug] 78#78: worker cycle
2020/11/06 18:55:33 [debug] 75#75: channel: 32
2020/11/06 18:55:33 [debug] 78#78: epoll timer: -1
2020/11/06 18:55:33 [debug] 75#75: channel command: 2
2020/11/06 18:55:33 [debug] 80#80: close channel s:9 pid:82 our:82 fd:11
2020/11/06 18:55:33 [debug] 75#75: close channel s:9 pid:82 our:82 fd:20
2020/11/06 18:55:33 [debug] 80#80: channel: -2
2020/11/06 18:55:33 [debug] 75#75: channel: -2
2020/11/06 18:55:33 [debug] 80#80: timer delta: 60038
2020/11/06 18:55:33 [debug] 75#75: timer delta: 60038
2020/11/06 18:55:33 [debug] 80#80: worker cycle
2020/11/06 18:55:33 [debug] 75#75: worker cycle
2020/11/06 18:55:33 [debug] 80#80: epoll timer: -1
2020/11/06 18:55:33 [debug] 75#75: epoll timer: -1
[... the identical 10-second cache-manager cycle from worker 81 ("http file cache expire" / "http file cache size: 0 c:0 w:-1") repeats unchanged until 19:03:02; nothing is ever added to the cache ...]

Then I do:

❯ docker pull rabbitmq
Using default tag: latest
latest: Pulling from library/rabbitmq
171857c49d0f: Pull complete
419640447d26: Pull complete
61e52f862619: Pull complete
856781f94405: Pull complete
fd5f3d3bac09: Pull complete
e526190d8f2c: Pull complete
bcaa754c1ece: Pull complete
41118e0c01b4: Pull complete
ac3f2ab39238: Pull complete
cd9ffc55132f: Pull complete
Digest: sha256:f2e00e45bf9c9456618486f54dc4906eedde3cf1a2824a3bc9a3193da55f323a
Status: Downloaded newer image for rabbitmq:latest
docker.io/library/rabbitmq:latest
❯ docker pull rabbitmq
Using default tag: latest
latest: Pulling from library/rabbitmq
Digest: sha256:f2e00e45bf9c9456618486f54dc4906eedde3cf1a2824a3bc9a3193da55f323a
Status: Image is up to date for rabbitmq:latest
docker.io/library/rabbitmq:latest
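
For reference, my understanding from the README is that the proxy only gets a chance to cache anything when the Docker daemon itself is sent through it via HTTP_PROXY/HTTPS_PROXY and trusts the proxy's CA. Roughly what I mean (a sketch, assuming a systemd-managed dockerd and Debian-style CA paths; registry.me:3128 is the proxy address from the logs above):

# point the daemon at the proxy via a systemd drop-in (paths/address are assumptions)
mkdir -p /etc/systemd/system/docker.service.d
cat << EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://registry.me:3128/"
Environment="HTTPS_PROXY=http://registry.me:3128/"
EOF

# trust the CA the proxy generates (it serves it at /ca.crt per the README)
curl http://registry.me:3128/ca.crt > /usr/share/ca-certificates/docker_registry_proxy.crt
echo "docker_registry_proxy.crt" >> /etc/ca-certificates.conf
update-ca-certificates --fresh

systemctl daemon-reload
systemctl restart docker.service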

but when I check the container, nothing is cached:

❯ docker ps
CONTAINER ID   IMAGE                                  COMMAND            CREATED         STATUS         PORTS                                           NAMES
3866c17ffc6a   rpardini/docker-registry-proxy:0.6.0   "/entrypoint.sh"   4 minutes ago   Up 4 minutes   80/tcp, 8081-8082/tcp, 0.0.0.0:3128->3128/tcp   docker_registry_proxy
❯ docker exec -it 3866c17ffc6a sh
/ #
/ #
/ # ls
bin  certs		dev		     entrypoint.sh  home  media  opt   root  sbin  sys	usr
ca   create_ca_cert.sh	docker_mirror_cache  etc	    lib   mnt	 proc  run   srv   tmp	var
/ # cd docker_mirror_cache
/docker_mirror_cache # ls
/docker_mirror_cache # ls

also, the total disk usage of the container is unchanged:

❯ docker exec -it 703dc421b8eb sh
/ # du -h --max-depth=1 /
108K	/var
8.0K	/run
0	/dev
68K	/root
0	/sys
4.1M	/lib
896K	/etc
248K	/sbin
4.0K	/tmp
4.0K	/home
4.0K	/srv
4.0K	/opt
16K	/media
du: cannot read directory '/proc/73/map_files': Permission denied
du: cannot read directory '/proc/74/map_files': Permission denied
du: cannot read directory '/proc/75/map_files': Permission denied
du: cannot read directory '/proc/76/map_files': Permission denied
du: cannot read directory '/proc/77/map_files': Permission denied
du: cannot read directory '/proc/78/map_files': Permission denied
du: cannot read directory '/proc/79/map_files': Permission denied
du: cannot read directory '/proc/80/map_files': Permission denied
du: cannot read directory '/proc/81/map_files': Permission denied
du: cannot access '/proc/96/task/96/fd/4': No such file or directory
du: cannot access '/proc/96/task/96/fdinfo/4': No such file or directory
du: cannot access '/proc/96/fd/3': No such file or directory
du: cannot access '/proc/96/fdinfo/3': No such file or directory
0	/proc
4.0K	/mnt
121M	/usr
1.6M	/bin
52K	/certs
0	/docker_mirror_cache
12K	/ca
128M	/
/ #
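
A quick way to watch for cache activity during a pull, using the container name from the docker ps output above:

# run on the host while a pull is in progress
docker logs -f docker_registry_proxy &                          # proxy output
docker exec docker_registry_proxy du -sh /docker_mirror_cache   # should grow if blobs get cached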

Am I missing something?
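
PS: one thing that stands out in the debug log above: every request is classified as "unknown-connect" by the $docker_proxy_request_type script, and arrives as a plain GET with host: registry.me:3128 rather than as a CONNECT tunnel, which suggests the pulls are not actually going through the HTTPS_PROXY mechanism. A quick count over the saved debug output (the filename is an assumption):

grep -c 'unknown-connect' nginx-debug.log   # matches every request in the log above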
