Comments (26)
Please feel free to reopen the ticket if you have further concerns.
from kong.
I run traditional mode here so no DP/CP setup at all.
That's strange. The log states that communicate() is being called, which should only happen on DPs.
Could you share the configuration (kong.conf, environment variables) that you are using?
@jeremyjpj0916 OK, I just figured out that worker.lua and communicate() are from a dependency of Kong, so it's totally normal.
I guess the reason is that loading a large DB / rebuilding routers consumes too many resources, especially CPU, so sockets time out.
@chronolaw Could you take a look?
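To illustrate that hypothesis, here is a minimal Python sketch (not Kong code; all names are illustrative) of how a peer stalled on CPU-bound work produces a read timeout on an otherwise healthy connection:

```python
import socket
import threading
import time

def busy_server(sock):
    # Simulate a worker stalled on CPU-heavy work (e.g. a router rebuild):
    # it accepts the connection but answers far too late.
    conn, _ = sock.accept()
    time.sleep(2)
    try:
        conn.sendall(b"late")
    finally:
        conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=busy_server, args=(srv,), daemon=True).start()

# Connecting succeeds instantly; it is the response that never arrives in time.
cli = socket.create_connection(srv.getsockname(), timeout=1)
cli.settimeout(0.2)  # short read timeout, like a cosocket timeout
try:
    cli.recv(4)
    result = "ok"
except socket.timeout:
    result = "timeout"
print(result)  # -> timeout
```

The connection itself never fails; only the response header read times out, which matches the "failed to receive response header: timeout" shape of the log lines.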
Maybe this plays a role?
- name: KONG_LUA_SOCKET_POOL_SIZE
  value: "30"
I noticed the default in Kong is 256. Not sure why I had it set so low, but I just bumped it to 256 to match the default.
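For context, Kong maps any KONG_-prefixed environment variable to the matching kong.conf key (prefix stripped, lowercased), so KONG_LUA_SOCKET_POOL_SIZE sets lua_socket_pool_size. A tiny sketch of that naming rule (the mapping is documented Kong behavior; the helper itself is illustrative):

```python
def env_to_conf_key(env_name: str) -> str:
    # Kong convention: strip the KONG_ prefix and lowercase the rest
    # to get the kong.conf property name.
    prefix = "KONG_"
    if not env_name.startswith(prefix):
        raise ValueError("not a Kong config variable: " + env_name)
    return env_name[len(prefix):].lower()

print(env_to_conf_key("KONG_LUA_SOCKET_POOL_SIZE"))  # -> lua_socket_pool_size
```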
Looks like that solves your issue. The log says that the DP fails to connect to the CP, and what you found could be the cause.
Not confirmed yet, but I'll let y'all know. Testing out that change now.
I run traditional mode here so no DP/CP setup at all.
@StarlightIbuki nope, it's still an issue even after bumping that to 256:
2024-06-20T23:44:08.208226437-05:00 nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:10
2024-06-20T23:44:08.208894511-05:00 nginx: [warn] load balancing method redefined in /usr/local/kong/nginx.conf:129
2024-06-20T23:44:10.571824051-05:00 2024/06/21 04:44:08 [notice] 30#0: [lua] init.lua:788: init(): [request-debug] token for request debugging: bf683cd8-4e7e-4f85-bea2-ccc53306be42
2024-06-20T23:44:10.719580663-05:00 2024/06/21 04:44:08 [notice] 30#0: using the "epoll" event method
2024-06-20T23:44:10.719615449-05:00 2024/06/21 04:44:08 [notice] 30#0: openresty/1.25.3.1
2024-06-20T23:44:10.719618512-05:00 2024/06/21 04:44:08 [notice] 30#0: OS: Linux 4.18.0-513.18.1.el8_9.x86_64
2024-06-20T23:44:10.719620827-05:00 2024/06/21 04:44:08 [notice] 30#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024-06-20T23:44:10.719649294-05:00 2024/06/21 04:44:08 [notice] 30#0: start worker processes
2024-06-20T23:44:10.724480258-05:00 2024/06/21 04:44:08 [notice] 30#0: start worker process 48
2024-06-20T23:44:10.726169587-05:00 2024/06/21 04:44:08 [notice] 30#0: start worker process 49
2024-06-20T23:44:10.727854741-05:00 2024/06/21 04:44:08 [notice] 30#0: start worker process 50
2024-06-20T23:44:10.729952978-05:00 2024/06/21 04:44:08 [notice] 30#0: start worker process 51
2024-06-20T23:44:10.814475714-05:00 2024/06/21 04:44:10 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'services' into the core_cache..., context: init_worker_by_lua*
2024-06-20T23:44:11.120375447-05:00 2024/06/21 04:44:11 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'services' into the core_cache (in 305ms), context: init_worker_by_lua*
2024-06-20T23:44:11.120393391-05:00 2024/06/21 04:44:11 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'consumers' into the cache..., context: init_worker_by_lua*
2024-06-20T23:44:11.178344739-05:00 2024/06/21 04:44:11 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'consumers' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-20T23:44:11.178359271-05:00 2024/06/21 04:44:11 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'acls' into the cache..., context: init_worker_by_lua*
2024-06-20T23:44:15.980237439-05:00 2024/06/21 04:44:15 [error] 49#0: *6 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:15.982404855-05:00 2024/06/21 04:44:15 [error] 51#0: *10 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:16.008796729-05:00 2024/06/21 04:44:16 [error] 50#0: *1301 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:21.014687655-05:00 2024/06/21 04:44:21 [error] 51#0: *1964 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:21.029456935-05:00 2024/06/21 04:44:21 [error] 49#0: *1966 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:21.043850043-05:00 2024/06/21 04:44:21 [error] 50#0: *1968 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:26.050057206-05:00 2024/06/21 04:44:26 [error] 51#0: *1970 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:26.059817019-05:00 2024/06/21 04:44:26 [error] 50#0: *1972 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:26.066641984-05:00 2024/06/21 04:44:26 [error] 49#0: *1974 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:31.060790842-05:00 2024/06/21 04:44:31 [error] 51#0: *1976 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:31.082047668-05:00 2024/06/21 04:44:31 [error] 49#0: *1978 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:31.105887063-05:00 2024/06/21 04:44:31 [error] 50#0: *1980 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:36.100812513-05:00 2024/06/21 04:44:36 [error] 49#0: *1982 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:36.108819551-05:00 2024/06/21 04:44:36 [error] 51#0: *1984 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:36.141029361-05:00 2024/06/21 04:44:36 [error] 50#0: *1986 [lua] worker.lua:138: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-20T23:44:38.223902407-05:00 2024/06/21 04:44:11 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'acls' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-20T23:44:38.223919390-05:00 2024/06/21 04:44:11 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'certificates' into the core_cache..., context: init_worker_by_lua*
2024-06-20T23:44:38.247494398-05:00 2024/06/21 04:44:11 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'certificates' into the core_cache (in 0ms), context: init_worker_by_lua*
2024-06-20T23:44:38.247507871-05:00 2024/06/21 04:44:11 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'jwt_secrets' into the cache..., context: init_worker_by_lua*
2024-06-20T23:44:38.322106912-05:00 2024/06/21 04:44:11 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'jwt_secrets' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-20T23:44:38.322147328-05:00 2024/06/21 04:44:11 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'oauth2_credentials' into the cache..., context: init_worker_by_lua*
2024-06-20T23:44:38.407124211-05:00 2024/06/21 04:44:11 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'oauth2_credentials' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-20T23:44:38.786897149-05:00 2024/06/21 04:44:38 [notice] 48#0: *2015 [lua] warmup.lua:47: warming up DNS entries ..., context: ngx.timer
2024-06-20T23:44:38.795954003-05:00 2024/06/21 04:44:38 [error] 48#0: *1994 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.795965848-05:00 2024/06/21 04:44:38 [error] 48#0: *1995 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.795971573-05:00 2024/06/21 04:44:38 [error] 48#0: *1996 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.795979052-05:00 2024/06/21 04:44:38 [error] 48#0: *1997 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.795987110-05:00 2024/06/21 04:44:38 [error] 48#0: *1998 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.796018705-05:00 2024/06/21 04:44:38 [error] 48#0: *1999 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.796038641-05:00 2024/06/21 04:44:38 [error] 48#0: *2000 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.796065213-05:00 2024/06/21 04:44:38 [error] 48#0: *2001 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.796087498-05:00 2024/06/21 04:44:38 [error] 48#0: *2002 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.796107384-05:00 2024/06/21 04:44:38 [error] 48#0: *2003 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.796131415-05:00 2024/06/21 04:44:38 [error] 48#0: *2004 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.796193402-05:00 2024/06/21 04:44:38 [error] 48#0: *2005 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.796240682-05:00 2024/06/21 04:44:38 [error] 48#0: *2006 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.796252091-05:00 2024/06/21 04:44:38 [error] 48#0: *2007 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-20T23:44:38.796287201-05:00 2024/06/21 04:44:38 [error] 48#0: *2008 [lua] broker.lua:111: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
Happy to show any Kong engineers the issue on a call if they're curious; it must be somewhat related to that new events server block y'all do with the kong_worker_events stuff.
Has to be some kind of resource-scale issue or warmup problem in my env, paired with the newer Kong version, causing the shaky startup. Do y'all have any OSS or enterprise customers running 3.x with 11,000+ proxies? I'm thinking they may see the same thing.
@StarlightIbuki can you re-open this, vs. me making a brand-new issue that would just clone the details I have given here?
Edit: remade it here since this was not reopened - #13274
Another idea for what might cause Kong to do this: what happens if an entity PUT hits the Admin API while Kong is still starting up? Could something like that be impacting this output? I have a healthcheck that constantly PUTs a dummy, unused upstream resource to verify correct DB write access. It's also the only resource in that table; we don't use upstreams regularly.
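For reference, a hedged sketch of that kind of write probe, assuming the Admin API on its default port 8001; the upstream name and helper are hypothetical, not taken from this thread. PUT /upstreams/{name} is create-or-update in the Admin API, so repeating it is safe:

```python
import json

# Hypothetical values for illustration only.
ADMIN_API = "http://localhost:8001"
PROBE_UPSTREAM = "healthcheck-dummy"

def build_probe_request():
    """Build an idempotent PUT against the dummy upstream.

    A 200-level response implies the gateway can write to its DB;
    a timeout or 5xx implies the Admin API (or the DB behind it) is unhealthy.
    """
    return {
        "method": "PUT",
        "url": f"{ADMIN_API}/upstreams/{PROBE_UPSTREAM}",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"name": PROBE_UPSTREAM}),
    }

req = build_probe_request()
print(req["method"], req["url"])  # -> PUT http://localhost:8001/upstreams/healthcheck-dummy
```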
I don't override the default conf file at all, but we do use env variables to dictate our runtime, as well as a custom NGINX template.
Current env variables in my K8s Deployment (XXXX = redacted):
- env:
- name: KONG_NGINX_MAIN_ENV
value: "SPLUNK_HOST; env KONG_SSL_CERT_KEY; env KONG_SSL_CERT_DER"
- name: KONG_ADMIN_GUI_LISTEN
value: "0.0.0.0:8002, 0.0.0.0:8445"
- name: KONG_NGINX_HTTP_KEEPALIVE_REQUESTS
value: "50000"
- name: KONG_NGINX_HTTP_KEEPALIVE_TIMEOUT
value: "120s"
- name: KONG_NGINX_HTTP_CLIENT_MAX_BODY_SIZE
value: "50m"
- name: KONG_NGINX_HTTP_CLIENT_BODY_BUFFER_SIZE
value: "50m"
- name: KONG_NGINX_HTTP_LARGE_CLIENT_HEADER_BUFFERS
value: "2 20k"
- name: KONG_NGINX_HTTP_TCP_NODELAY
value: "on"
- name: KONG_NGINX_HTTP_TCP_NOPUSH
value: "on"
- name: KONG_NGINX_HTTP_PROXY_BUFFER_SIZE
value: "32k"
- name: KONG_NGINX_HTTP_PROXY_BUFFERS
value: "10 2m"
- name: KONG_NGINX_HTTP_PROXY_BUSY_BUFFERS_SIZE
value: "4m"
- name: KONG_NGINX_HTTP_RESET_TIMEDOUT_CONNECTION
value: "on"
- name: KONG_NGINX_HTTP_HTTP2_RECV_BUFFER_SIZE
value: "20m"
- name: KONG_NGINX_HTTP_HTTP2_BODY_PREREAD_SIZE
value: "2m"
- name: KONG_NGINX_UPSTREAM_KEEPALIVE
value: "90"
- name: KONG_NGINX_UPSTREAM_KEEPALIVE_REQUESTS
value: "50000"
- name: KONG_NGINX_UPSTREAM_KEEPALIVE_TIMEOUT
value: "120"
- name: KONG_UPSTREAM_KEEPALIVE_POOL_SIZE
value: "1024"
- name: KONG_UPSTREAM_KEEPALIVE_MAX_REQUESTS
value: "50000"
- name: KONG_UPSTREAM_KEEPALIVE_IDLE_TIMEOUT
value: "120"
- name: KONG_UNTRUSTED_LUA
value: "on"
- name: CRL_DOWNLOAD_URL
value: XXXXXXXXX
- name: KONG_NGINX_MAIN_WORKER_PROCESSES
value: "3"
- name: KONG_LOG_LEVEL
value: notice
- name: KONG_PROXY_ACCESS_LOG
value: "off"
- name: KONG_ADMIN_ACCESS_LOG
value: "off"
- name: KONG_PROXY_ERROR_LOG
value: /dev/stderr
- name: KONG_ADMIN_ERROR_LOG
value: /dev/stderr
- name: KONG_ANONYMOUS_REPORTS
value: "off"
- name: KONG_PROXY_LISTEN
value: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2 deferred reuseport backlog=16384
- name: KONG_ADMIN_LISTEN
value: 0.0.0.0:8001 deferred reuseport backlog=16384
- name: KONG_MEM_CACHE_SIZE
value: 1024m
- name: KONG_SSL_CERT
value: /usr/local/kong/ssl/kongcert.crt
- name: KONG_SSL_CERT_DER
value: /usr/local/kong/ssl/kongcertder.der
- name: KONG_SSL_CERT_KEY
value: /usr/local/kong/ssl/kongprivatekey.key
- name: KONG_CLIENT_SSL
value: "off"
- name: KONG_ADMIN_SSL_ENABLED
value: "off"
- name: KONG_HEADERS
value: latency_tokens
- name: KONG_CLIENT_MAX_BODY_SIZE
value: 50m
- name: KONG_CLIENT_BODY_BUFFER_SIZE
value: 50m
- name: KONG_ERROR_DEFAULT_TYPE
value: text/plain
- name: KONG_DATABASE
value: postgres
- name: KONG_PG_MAX_CONCURRENT_QUERIES
value: "33"
- name: KONG_PG_SSL
value: "on"
- name: KONG_PG_SSL_REQUIRED
value: "on"
- name: KONG_PG_SSL_VERIFY
value: "on"
- name: KONG_PG_TLS_VERSION
value: "tlsv1_2"
- name: KONG_PG_HOST
value: XXXXXXXX
- name: KONG_PG_PORT
value: "5432"
- name: KONG_PG_USER
valueFrom:
secretKeyRef:
key: username
name: postgres-secret
- name: KONG_PG_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: postgres-secret
- name: KONG_PG_TIMEOUT
value: "8000"
- name: KONG_PG_DATABASE
value: XXXXXX
- name: KONG_PG_RO_SSL
value: "on"
- name: KONG_PG_RO_SSL_REQUIRED
value: "on"
- name: KONG_PG_RO_SSL_VERIFY
value: "on"
- name: KONG_PG_RO_TLS_VERSION
value: "tlsv1_2"
- name: KONG_PG_RO_HOST
value: XXXXXXXX
- name: KONG_PG_RO_PORT
value: "5432"
- name: KONG_PG_RO_USER
valueFrom:
secretKeyRef:
key: username
name: postgres-secret
- name: KONG_PG_RO_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: postgres-secret
- name: KONG_PG_RO_TIMEOUT
value: "8000"
- name: KONG_PG_RO_DATABASE
value: XXXXX
- name: KONG_DB_UPDATE_FREQUENCY
value: "5"
- name: KONG_DB_UPDATE_PROPAGATION
value: "0"
- name: KONG_DB_CACHE_TTL
value: "0"
- name: KONG_DNS_HOSTSFILE
value: /etc/hosts
- name: KONG_DNS_STALE_TTL
value: "4"
- name: KONG_DNS_NOT_FOUND_TTL
value: "10"
- name: KONG_DNS_RESOLVER
value: XXXXXX,XXXXXXXX
- name: KONG_DNS_ERROR_TTL
value: "1"
- name: KONG_DNS_NO_SYNC
value: "off"
- name: KONG_LUA_SSL_TRUSTED_CERTIFICATE
value: /usr/local/kong/ssl/kongcert.pem
- name: KONG_LUA_SSL_VERIFY_DEPTH
value: "3"
- name: KONG_LUA_SOCKET_POOL_SIZE
value: "256"
- name: SPLUNK_HOST
value: XXXXXXXXXXXXXXX
- name: LOCALDOMAIN
value: .
- name: RES_OPTIONS
value: ndots:1
- name: KONG_DNS_ORDER
value: LAST,A,SRV,CNAME
- name: KONG_PLUGINS
value: bundled,nonamesecurity,kong-plugin-oauth-proxy,kong-kafka-log,stargate-waf-error-log,mtls,kong-tx-debugger,kong-error-log,kong-oidc-implicit-token,kong-response-size-limiting,kong-service-virtualization,kong-cluster-drain,kong-upstream-jwt,kong-splunk-log,kong-spec-expose,kong-path-based-routing,kong-oidc-multi-idp
- name: KONG_TRUSTED_IPS
value: 0.0.0.0/0,::/0
- name: KONG_NGINX_PROXY_REAL_IP_HEADER
value: X-Forwarded-For
- name: KONG_NGINX_PROXY_REAL_IP_RECURSIVE
value: "on"
- name: KONG_WORKER_CONSISTENCY
value: eventual
- name: KONG_WORKER_STATE_UPDATE_FREQUENCY
value: "5"
- name: KONG_DB_CACHE_WARMUP_ENTITIES
value: services,consumers,acls,certificates,jwt_secrets,oauth2_credentials
- name: LUA_PATH
value: /usr/local/kong/luarocks/share/lua/5.1/?.lua;;/usr/local/kong/luarocks/share/lua/5.1/?/init.lua;
- name: KONG_NGINX_HTTP_SSL_PROTOCOLS
value: TLSv1.2 TLSv1.3
- name: KONG_SSL_CIPHER_SUITE
value: intermediate
@StarlightIbuki And the current nginx template file:
pid pids/nginx.pid;
> if wasm and wasm_dynamic_module then
load_module $(wasm_dynamic_module);
> end
error_log ${{PROXY_ERROR_LOG}} ${{LOG_LEVEL}};
> if database == "off" then
lmdb_environment_path ${{LMDB_ENVIRONMENT_PATH}};
lmdb_map_size ${{LMDB_MAP_SIZE}};
> if lmdb_validation_tag then
lmdb_validation_tag $(lmdb_validation_tag);
> end
> end
# injected nginx_main_* directives
> for _, el in ipairs(nginx_main_directives) do
$(el.name) $(el.value);
> end
events {
# injected nginx_events_* directives
> for _, el in ipairs(nginx_events_directives) do
$(el.name) $(el.value);
> end
}
> if wasm then
wasm {
> for _, el in ipairs(nginx_wasm_main_shm_kv_directives) do
shm_kv $(el.name) $(el.value);
> end
> for _, module in ipairs(wasm_modules_parsed) do
module $(module.name) $(module.path);
> end
> for _, el in ipairs(nginx_wasm_main_directives) do
> if el.name == "shm_kv" then
shm_kv * $(el.value);
> else
$(el.name) $(el.value);
> end
> end
> if #nginx_wasm_wasmtime_directives > 0 then
wasmtime {
> for _, el in ipairs(nginx_wasm_wasmtime_directives) do
flag $(el.name) $(el.value);
> end
}
> end -- wasmtime
> if #nginx_wasm_v8_directives > 0 then
v8 {
> for _, el in ipairs(nginx_wasm_v8_directives) do
flag $(el.name) $(el.value);
> end
}
> end -- v8
> if #nginx_wasm_wasmer_directives > 0 then
wasmer {
> for _, el in ipairs(nginx_wasm_wasmer_directives) do
flag $(el.name) $(el.value);
> end
}
> end -- wasmer
}
> end
http {
server_tokens off;
error_log ${{PROXY_ERROR_LOG}} ${{LOG_LEVEL}};
uninitialized_variable_warn off;
# enable capturing error logs
lua_capture_error_log 100k;
# exclusive oauth proxy shm caches for consumers and their cached token
lua_shared_dict kong_oauth_cache2 5m;
lua_shared_dict kong_oauth_cache_miss2 2m;
lua_shared_dict kong_oauth_cache_locks2 1m;
# exclusive oauth proxy token shm caches
lua_shared_dict kong_oauth_token_cache2 5m;
lua_shared_dict kong_oauth_token_cache_miss2 2m;
lua_shared_dict kong_oauth_token_cache_locks2 1m;
# exclusive oidc shm caches
lua_shared_dict kong_oidc_cache 10m;
lua_shared_dict kong_oidc_cache_miss 6m;
lua_shared_dict kong_oidc_cache_locks 3m;
lua_package_path '${{LUA_PACKAGE_PATH}};;';
lua_package_cpath '${{LUA_PACKAGE_CPATH}};;';
lua_socket_pool_size ${{LUA_SOCKET_POOL_SIZE}};
lua_socket_log_errors off;
lua_max_running_timers 4096;
lua_max_pending_timers 16384;
lua_ssl_verify_depth ${{LUA_SSL_VERIFY_DEPTH}};
> if lua_ssl_trusted_certificate_combined then
lua_ssl_trusted_certificate '${{LUA_SSL_TRUSTED_CERTIFICATE_COMBINED}}';
> end
lua_ssl_protocols ${{NGINX_HTTP_LUA_SSL_PROTOCOLS}};
lua_shared_dict kong 5m;
lua_shared_dict kong_locks 8m;
lua_shared_dict kong_healthchecks 5m;
lua_shared_dict kong_cluster_events 5m;
lua_shared_dict kong_rate_limiting_counters 12m;
lua_shared_dict kong_core_db_cache ${{MEM_CACHE_SIZE}};
lua_shared_dict kong_core_db_cache_miss 12m;
lua_shared_dict kong_db_cache ${{MEM_CACHE_SIZE}};
lua_shared_dict kong_db_cache_miss 12m;
lua_shared_dict kong_secrets 5m;
underscores_in_headers on;
> if ssl_cipher_suite == 'old' then
lua_ssl_conf_command CipherString DEFAULT:@SECLEVEL=0;
proxy_ssl_conf_command CipherString DEFAULT:@SECLEVEL=0;
ssl_conf_command CipherString DEFAULT:@SECLEVEL=0;
grpc_ssl_conf_command CipherString DEFAULT:@SECLEVEL=0;
> end
> if ssl_ciphers then
ssl_ciphers ${{SSL_CIPHERS}};
> end
# injected nginx_http_* directives
> for _, el in ipairs(nginx_http_directives) do
$(el.name) $(el.value);
> end
init_by_lua_block {
> if test and coverage then
require 'luacov'
jit.off()
> end -- test and coverage
Kong = require 'kong'
Kong.init()
}
init_worker_by_lua_block {
Kong.init_worker()
}
exit_worker_by_lua_block {
Kong.exit_worker()
}
> if (role == "traditional" or role == "data_plane") and #proxy_listeners > 0 then
log_format kong_log_format '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'kong_request_id: "$kong_request_id"';
# Load variable indexes
lua_kong_load_var_index default;
upstream kong_upstream {
server 0.0.0.1;
# injected nginx_upstream_* directives
> for _, el in ipairs(nginx_upstream_directives) do
$(el.name) $(el.value);
> end
balancer_by_lua_block {
Kong.balancer()
}
}
server {
server_name kong;
> for _, entry in ipairs(proxy_listeners) do
listen $(entry.listener);
> end
> for _, entry in ipairs(proxy_listeners) do
> if entry.http2 then
http2 on;
> break
> end
> end
error_page 400 404 405 408 411 412 413 414 417 /kong_error_handler;
error_page 494 =494 /kong_error_handler;
error_page 500 502 503 504 /kong_error_handler;
# Append the kong request id to the error log
# https://github.com/Kong/lua-kong-nginx-module#lua_kong_error_log_request_id
lua_kong_error_log_request_id $kong_request_id;
> if proxy_access_log_enabled then
> if custom_proxy_access_log then
access_log ${{PROXY_ACCESS_LOG}};
> else
access_log ${{PROXY_ACCESS_LOG}} kong_log_format;
> end
> else
access_log off;
> end
error_log ${{PROXY_ERROR_LOG}} ${{LOG_LEVEL}};
> if proxy_ssl_enabled then
> for i = 1, #ssl_cert do
ssl_certificate $(ssl_cert[i]);
ssl_certificate_key $(ssl_cert_key[i]);
> end
ssl_session_cache shared:SSL:${{SSL_SESSION_CACHE_SIZE}};
ssl_certificate_by_lua_block {
Kong.ssl_certificate()
}
> end
# injected nginx_proxy_* directives
> for _, el in ipairs(nginx_proxy_directives) do
$(el.name) $(el.value);
> end
> for _, ip in ipairs(trusted_ips) do
set_real_ip_from $(ip);
> end
rewrite_by_lua_block {
Kong.rewrite()
}
access_by_lua_block {
Kong.access()
}
header_filter_by_lua_block {
Kong.header_filter()
ngx.header["Server"] = nil
ngx.header["Via"] = nil
}
body_filter_by_lua_block {
Kong.body_filter()
}
log_by_lua_block {
Kong.log()
}
location / {
default_type '';
set $ctx_ref '';
set $upstream_te '';
set $upstream_host '';
set $upstream_upgrade '';
set $upstream_connection '';
set $upstream_scheme '';
set $upstream_uri '';
set $upstream_x_forwarded_for '';
set $upstream_x_forwarded_proto '';
set $upstream_x_forwarded_host '';
set $upstream_x_forwarded_port '';
set $upstream_x_forwarded_path '';
set $upstream_x_forwarded_prefix '';
set $kong_proxy_mode 'http';
proxy_http_version 1.1;
proxy_buffering on;
proxy_request_buffering on;
# injected nginx_location_* directives
> for _, el in ipairs(nginx_location_directives) do
$(el.name) $(el.value);
> end
proxy_set_header TE $upstream_te;
proxy_set_header Host $upstream_host;
proxy_set_header Upgrade $upstream_upgrade;
proxy_set_header Connection $upstream_connection;
proxy_set_header X-Forwarded-For $upstream_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
proxy_set_header X-Forwarded-Host $upstream_x_forwarded_host;
proxy_set_header X-Forwarded-Port $upstream_x_forwarded_port;
proxy_set_header X-Forwarded-Path $upstream_x_forwarded_path;
proxy_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
proxy_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
proxy_set_header X-Kong-Request-Id $kong_request_id;
> end
proxy_pass_header Server;
proxy_pass_header Date;
proxy_ssl_name $upstream_host;
proxy_ssl_server_name on;
> if client_ssl then
proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
proxy_pass $upstream_scheme://kong_upstream$upstream_uri;
}
location @unbuffered {
internal;
default_type '';
set $kong_proxy_mode 'unbuffered';
proxy_http_version 1.1;
proxy_buffering off;
proxy_request_buffering off;
proxy_set_header TE $upstream_te;
proxy_set_header Host $upstream_host;
proxy_set_header Upgrade $upstream_upgrade;
proxy_set_header Connection $upstream_connection;
proxy_set_header X-Forwarded-For $upstream_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
proxy_set_header X-Forwarded-Host $upstream_x_forwarded_host;
proxy_set_header X-Forwarded-Port $upstream_x_forwarded_port;
proxy_set_header X-Forwarded-Path $upstream_x_forwarded_path;
proxy_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
proxy_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
proxy_set_header X-Kong-Request-Id $kong_request_id;
> end
proxy_pass_header Server;
proxy_pass_header Date;
proxy_ssl_name $upstream_host;
proxy_ssl_server_name on;
> if client_ssl then
proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
proxy_pass $upstream_scheme://kong_upstream$upstream_uri;
}
location @unbuffered_request {
internal;
default_type '';
set $kong_proxy_mode 'unbuffered';
proxy_http_version 1.1;
proxy_buffering on;
proxy_request_buffering off;
proxy_set_header TE $upstream_te;
proxy_set_header Host $upstream_host;
proxy_set_header Upgrade $upstream_upgrade;
proxy_set_header Connection $upstream_connection;
proxy_set_header X-Forwarded-For $upstream_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
proxy_set_header X-Forwarded-Host $upstream_x_forwarded_host;
proxy_set_header X-Forwarded-Port $upstream_x_forwarded_port;
proxy_set_header X-Forwarded-Path $upstream_x_forwarded_path;
proxy_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
proxy_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
proxy_set_header X-Kong-Request-Id $kong_request_id;
> end
proxy_pass_header Server;
proxy_pass_header Date;
proxy_ssl_name $upstream_host;
proxy_ssl_server_name on;
> if client_ssl then
proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
proxy_pass $upstream_scheme://kong_upstream$upstream_uri;
}
location @unbuffered_response {
internal;
default_type '';
set $kong_proxy_mode 'unbuffered';
proxy_http_version 1.1;
proxy_buffering off;
proxy_request_buffering on;
proxy_set_header TE $upstream_te;
proxy_set_header Host $upstream_host;
proxy_set_header Upgrade $upstream_upgrade;
proxy_set_header Connection $upstream_connection;
proxy_set_header X-Forwarded-For $upstream_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
proxy_set_header X-Forwarded-Host $upstream_x_forwarded_host;
proxy_set_header X-Forwarded-Port $upstream_x_forwarded_port;
proxy_set_header X-Forwarded-Path $upstream_x_forwarded_path;
proxy_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
proxy_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
proxy_set_header X-Kong-Request-Id $kong_request_id;
> end
proxy_pass_header Server;
proxy_pass_header Date;
proxy_ssl_name $upstream_host;
proxy_ssl_server_name on;
> if client_ssl then
proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
proxy_pass $upstream_scheme://kong_upstream$upstream_uri;
}
location @grpc {
internal;
default_type '';
set $kong_proxy_mode 'grpc';
grpc_set_header TE $upstream_te;
grpc_set_header X-Forwarded-For $upstream_x_forwarded_for;
grpc_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
grpc_set_header X-Forwarded-Host $upstream_x_forwarded_host;
grpc_set_header X-Forwarded-Port $upstream_x_forwarded_port;
grpc_set_header X-Forwarded-Path $upstream_x_forwarded_path;
grpc_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
grpc_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
grpc_set_header X-Kong-Request-Id $kong_request_id;
> end
grpc_pass_header Server;
grpc_pass_header Date;
grpc_ssl_name $upstream_host;
grpc_ssl_server_name on;
> if client_ssl then
grpc_ssl_certificate ${{CLIENT_SSL_CERT}};
grpc_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
grpc_pass $upstream_scheme://kong_upstream;
}
location = /kong_buffered_http {
internal;
default_type '';
set $kong_proxy_mode 'http';
rewrite_by_lua_block {
-- ngx.location.capture will create a new nginx request,
-- so the upstream ssl-related info attached to the `r` gets lost.
-- we need to re-set them here to the new nginx request.
local ctx = ngx.ctx
local upstream_ssl = require("kong.runloop.upstream_ssl")
upstream_ssl.set_service_ssl(ctx)
upstream_ssl.fallback_upstream_client_cert(ctx)
}
access_by_lua_block {;}
header_filter_by_lua_block {;}
body_filter_by_lua_block {;}
log_by_lua_block {;}
proxy_http_version 1.1;
proxy_set_header TE $upstream_te;
proxy_set_header Host $upstream_host;
proxy_set_header Upgrade $upstream_upgrade;
proxy_set_header Connection $upstream_connection;
proxy_set_header X-Forwarded-For $upstream_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
proxy_set_header X-Forwarded-Host $upstream_x_forwarded_host;
proxy_set_header X-Forwarded-Port $upstream_x_forwarded_port;
proxy_set_header X-Forwarded-Path $upstream_x_forwarded_path;
proxy_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
proxy_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
proxy_set_header X-Kong-Request-Id $kong_request_id;
> end
proxy_pass_header Server;
proxy_pass_header Date;
proxy_ssl_name $upstream_host;
proxy_ssl_server_name on;
> if client_ssl then
proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
proxy_pass $upstream_scheme://kong_upstream$upstream_uri;
}
location = /kong_error_handler {
internal;
default_type '';
rewrite_by_lua_block {;}
access_by_lua_block {;}
content_by_lua_block {
Kong.handle_error()
}
}
}
server {
server_name kong_mtls;
listen 0.0.0.0:9443 ssl deferred reuseport backlog=16384;
http2 on;
modsecurity ${{WAF}};
modsecurity_rules_file /usr/local/modsec_includes.conf;
#WAF_DEBUG_LEVEL level 0 = off 9 = on
modsecurity_rules '
SecRuleEngine ${{WAF_MODE}}
SecRequestBodyLimit ${{WAF_REQUEST_FILE_SIZE_LIMIT}}
SecRequestBodyNoFilesLimit ${{WAF_REQUEST_NO_FILE_SIZE_LIMIT}}
SecPcreMatchLimit ${{WAF_PCRE_MATCH_LIMIT}}
SecPcreMatchLimitRecursion ${{WAF_PCRE_MATCH_LIMIT_RECURSION}}
SecDebugLogLevel ${{WAF_DEBUG_LEVEL}}
SecAuditEngine ${{WAF_AUDIT}}
SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.paranoia_level=${{WAF_PARANOIA_LEVEL}}"
';
# Mutual TLS logic, check if server port is 9443
ssl_verify_client on;
ssl_client_certificate /usr/local/kong/stargate-mutual-trust.ca;
ssl_crl /usr/local/kong/stargate-mutual-trust.crl.pem;
ssl_verify_depth 3;
error_page 400 404 405 408 411 412 413 414 417 /kong_error_handler;
error_page 494 =494 /kong_error_handler;
error_page 500 502 503 504 /kong_error_handler;
# Append the kong request id to the error log
# https://github.com/Kong/lua-kong-nginx-module#lua_kong_error_log_request_id
lua_kong_error_log_request_id $kong_request_id;
> if proxy_access_log_enabled then
> if custom_proxy_access_log then
access_log ${{PROXY_ACCESS_LOG}};
> else
access_log ${{PROXY_ACCESS_LOG}} kong_log_format;
> end
> else
access_log off;
> end
error_log ${{PROXY_ERROR_LOG}} ${{LOG_LEVEL}};
client_body_buffer_size ${{CLIENT_BODY_BUFFER_SIZE}};
client_header_buffer_size 8k;
large_client_header_buffers 2 20k;
> if proxy_ssl_enabled then
# Hardcode the certificate for MTLS we want Stargate to present to clients
ssl_certificate /usr/local/kong/ssl/kongmtlscert.crt;
ssl_certificate_key /usr/local/kong/ssl/kongmtlsprivatekey.key;
ssl_session_cache shared:SSL:10m;
ssl_certificate_by_lua_block {
Kong.ssl_certificate()
}
> end
# injected nginx_proxy_* directives
> for _, el in ipairs(nginx_proxy_directives) do
$(el.name) $(el.value);
> end
> for i = 1, #trusted_ips do
set_real_ip_from $(trusted_ips[i]);
> end
rewrite_by_lua_block {
Kong.rewrite()
}
access_by_lua_block {
Kong.access()
}
header_filter_by_lua_block {
Kong.header_filter()
ngx.header["Server"] = nil
ngx.header["Via"] = nil
}
body_filter_by_lua_block {
Kong.body_filter()
}
log_by_lua_block {
Kong.log()
}
location / {
default_type '';
set $ctx_ref '';
set $upstream_te '';
set $upstream_host '';
set $upstream_upgrade '';
set $upstream_connection '';
set $upstream_scheme '';
set $upstream_uri '';
set $upstream_x_forwarded_for '';
set $upstream_x_forwarded_proto '';
set $upstream_x_forwarded_host '';
set $upstream_x_forwarded_port '';
set $upstream_x_forwarded_path '';
set $upstream_x_forwarded_prefix '';
set $kong_proxy_mode 'http';
proxy_http_version 1.1;
proxy_buffering on;
proxy_request_buffering on;
# injected nginx_location_* directives
> for _, el in ipairs(nginx_location_directives) do
$(el.name) $(el.value);
> end
proxy_set_header TE $upstream_te;
proxy_set_header Host $upstream_host;
proxy_set_header Upgrade $upstream_upgrade;
proxy_set_header Connection $upstream_connection;
proxy_set_header X-Forwarded-For $upstream_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
proxy_set_header X-Forwarded-Host $upstream_x_forwarded_host;
proxy_set_header X-Forwarded-Port $upstream_x_forwarded_port;
proxy_set_header X-Forwarded-Path $upstream_x_forwarded_path;
proxy_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
proxy_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
proxy_set_header X-Kong-Request-Id $kong_request_id;
> end
proxy_pass_header Server;
proxy_pass_header Date;
proxy_ssl_name $upstream_host;
proxy_ssl_server_name on;
> if client_ssl then
proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
proxy_pass $upstream_scheme://kong_upstream$upstream_uri;
}
location @unbuffered {
internal;
default_type '';
set $kong_proxy_mode 'unbuffered';
proxy_http_version 1.1;
proxy_buffering off;
proxy_request_buffering off;
proxy_set_header TE $upstream_te;
proxy_set_header Host $upstream_host;
proxy_set_header Upgrade $upstream_upgrade;
proxy_set_header Connection $upstream_connection;
proxy_set_header X-Forwarded-For $upstream_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
proxy_set_header X-Forwarded-Host $upstream_x_forwarded_host;
proxy_set_header X-Forwarded-Port $upstream_x_forwarded_port;
proxy_set_header X-Forwarded-Path $upstream_x_forwarded_path;
proxy_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
proxy_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
proxy_set_header X-Kong-Request-Id $kong_request_id;
> end
proxy_pass_header Server;
proxy_pass_header Date;
proxy_ssl_name $upstream_host;
proxy_ssl_server_name on;
> if client_ssl then
proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
proxy_pass $upstream_scheme://kong_upstream$upstream_uri;
}
location @unbuffered_request {
internal;
default_type '';
set $kong_proxy_mode 'unbuffered';
proxy_http_version 1.1;
proxy_buffering on;
proxy_request_buffering off;
proxy_set_header TE $upstream_te;
proxy_set_header Host $upstream_host;
proxy_set_header Upgrade $upstream_upgrade;
proxy_set_header Connection $upstream_connection;
proxy_set_header X-Forwarded-For $upstream_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
proxy_set_header X-Forwarded-Host $upstream_x_forwarded_host;
proxy_set_header X-Forwarded-Port $upstream_x_forwarded_port;
proxy_set_header X-Forwarded-Path $upstream_x_forwarded_path;
proxy_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
proxy_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
proxy_set_header X-Kong-Request-Id $kong_request_id;
> end
proxy_pass_header Server;
proxy_pass_header Date;
proxy_ssl_name $upstream_host;
proxy_ssl_server_name on;
> if client_ssl then
proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
proxy_pass $upstream_scheme://kong_upstream$upstream_uri;
}
location @unbuffered_response {
internal;
default_type '';
set $kong_proxy_mode 'unbuffered';
proxy_http_version 1.1;
proxy_buffering off;
proxy_request_buffering on;
proxy_set_header TE $upstream_te;
proxy_set_header Host $upstream_host;
proxy_set_header Upgrade $upstream_upgrade;
proxy_set_header Connection $upstream_connection;
proxy_set_header X-Forwarded-For $upstream_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
proxy_set_header X-Forwarded-Host $upstream_x_forwarded_host;
proxy_set_header X-Forwarded-Port $upstream_x_forwarded_port;
proxy_set_header X-Forwarded-Path $upstream_x_forwarded_path;
proxy_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
proxy_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
proxy_set_header X-Kong-Request-Id $kong_request_id;
> end
proxy_pass_header Server;
proxy_pass_header Date;
proxy_ssl_name $upstream_host;
proxy_ssl_server_name on;
> if client_ssl then
proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
proxy_pass $upstream_scheme://kong_upstream$upstream_uri;
}
location @grpc {
internal;
default_type '';
set $kong_proxy_mode 'grpc';
grpc_set_header TE $upstream_te;
grpc_set_header X-Forwarded-For $upstream_x_forwarded_for;
grpc_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
grpc_set_header X-Forwarded-Host $upstream_x_forwarded_host;
grpc_set_header X-Forwarded-Port $upstream_x_forwarded_port;
grpc_set_header X-Forwarded-Path $upstream_x_forwarded_path;
grpc_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
grpc_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
grpc_set_header X-Kong-Request-Id $kong_request_id;
> end
grpc_pass_header Server;
grpc_pass_header Date;
grpc_ssl_name $upstream_host;
grpc_ssl_server_name on;
> if client_ssl then
grpc_ssl_certificate ${{CLIENT_SSL_CERT}};
grpc_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
grpc_pass $upstream_scheme://kong_upstream;
}
location = /kong_buffered_http {
internal;
default_type '';
set $kong_proxy_mode 'http';
rewrite_by_lua_block {
-- ngx.location.capture will create a new nginx request,
-- so the upstream ssl-related info attached to the `r` gets lost.
-- we need to re-set them here to the new nginx request.
local ctx = ngx.ctx
local upstream_ssl = require("kong.runloop.upstream_ssl")
upstream_ssl.set_service_ssl(ctx)
upstream_ssl.fallback_upstream_client_cert(ctx)
}
access_by_lua_block {;}
header_filter_by_lua_block {;}
body_filter_by_lua_block {;}
log_by_lua_block {;}
proxy_http_version 1.1;
proxy_set_header TE $upstream_te;
proxy_set_header Host $upstream_host;
proxy_set_header Upgrade $upstream_upgrade;
proxy_set_header Connection $upstream_connection;
proxy_set_header X-Forwarded-For $upstream_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $upstream_x_forwarded_proto;
proxy_set_header X-Forwarded-Host $upstream_x_forwarded_host;
proxy_set_header X-Forwarded-Port $upstream_x_forwarded_port;
proxy_set_header X-Forwarded-Path $upstream_x_forwarded_path;
proxy_set_header X-Forwarded-Prefix $upstream_x_forwarded_prefix;
proxy_set_header X-Real-IP $remote_addr;
> if enabled_headers_upstream["X-Kong-Request-Id"] then
proxy_set_header X-Kong-Request-Id $kong_request_id;
> end
proxy_pass_header Server;
proxy_pass_header Date;
proxy_ssl_name $upstream_host;
proxy_ssl_server_name on;
> if client_ssl then
proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
proxy_pass $upstream_scheme://kong_upstream$upstream_uri;
}
location = /kong_error_handler {
internal;
default_type '';
rewrite_by_lua_block {;}
access_by_lua_block {;}
content_by_lua_block {
Kong.handle_error()
}
}
}
> end -- (role == "traditional" or role == "data_plane") and #proxy_listeners > 0
> if (role == "control_plane" or role == "traditional") and #admin_listeners > 0 then
server {
server_name kong_admin;
> for _, entry in ipairs(admin_listeners) do
listen $(entry.listener);
> end
> for _, entry in ipairs(admin_listeners) do
> if entry.http2 then
http2 on;
> break
> end
> end
access_log ${{ADMIN_ACCESS_LOG}};
error_log ${{ADMIN_ERROR_LOG}} ${{LOG_LEVEL}};
> if admin_ssl_enabled then
> for i = 1, #admin_ssl_cert do
ssl_certificate $(admin_ssl_cert[i]);
ssl_certificate_key $(admin_ssl_cert_key[i]);
> end
ssl_session_cache shared:AdminSSL:10m;
> end
# injected nginx_admin_* directives
> for _, el in ipairs(nginx_admin_directives) do
$(el.name) $(el.value);
> end
location / {
default_type application/json;
content_by_lua_block {
Kong.admin_content()
}
header_filter_by_lua_block {
Kong.admin_header_filter()
}
}
location /robots.txt {
return 200 'User-agent: *\nDisallow: /';
}
}
> end -- (role == "control_plane" or role == "traditional") and #admin_listeners > 0
server {
charset UTF-8;
server_name kong_worker_events;
listen unix:${{PREFIX}}/worker_events.sock;
access_log off;
location / {
content_by_lua_block {
require("resty.events.compat").run()
}
}
}
}
from kong.
I don't think it's ENV-variable or template specific though, because I can point at a smaller dev db and none of the error logs come up, but as soon as I point to the larger DB (just changing postgres hosts and tables to start up and read from) with more resources in it, the errors show up. I am thinking to prove it out more by adding a ton of dummy data to my dev db environment and seeing if, when I bloat that pg instance with tons of route/service/plugin/consumer resources, the startup output then mirrors the errors I get when pointing at our much larger stage postgres database instance. 50 proxies vs 11,000 proxies.
If I can reproduce it by injecting a ton of dummy data, maybe I can drop a postgres dump SQL file y'all can import and see it behave the same in your own sandbox/lab testing.
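A rough sketch of the dummy-data idea: a loop that emits the curl commands to create N throwaway services and routes through the Admin API. Everything here is an assumption for illustration (the `dummy-svc` names, the `localhost:8001` Admin listener, the httpbin upstream); it prints the commands as a dry run, so you can review them and pipe the output to `sh` against a sandbox Kong.

```shell
# Hypothetical bulk-seed sketch: print the Admin API calls that would
# create N dummy services, each with one route. Pipe to `sh` to execute.
seed_cmds() {
  n=$1
  i=1
  while [ "$i" -le "$n" ]; do
    echo "curl -s -X POST http://localhost:8001/services -d name=dummy-svc-$i -d url=http://httpbin.org/anything"
    echo "curl -s -X POST http://localhost:8001/services/dummy-svc-$i/routes -d 'paths[]=/dummy-$i'"
    i=$((i + 1))
  done
}

seed_cmds 3   # dry run: prints two curl commands per service
```

Bumping the argument to a few thousand should get a dev db into the same ballpark as the 11,000-proxy stage instance.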
from kong.
@StarlightIbuki @chronolaw To give more context on available resources: I have k8s pods starting up with as much as 4 CPUs and 10Gi (and nginx set to spin up 4 worker processes):
resources:
limits:
cpu: "4"
memory: 10Gi
requests:
cpu: "4"
memory: 10Gi
Which is no small potatoes in terms of compute or memory.
Size of Dev DB (no errors on Kong startup):
postgres=# \l+
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
-----------+-------------+----------+---------+-------+------------------------------+---------+------------+--------------------------------------------
kong_dev2 | XXXXX | UTF8 | C | C | =Tc/XXXXX +| 14 MB | pg_default |
Size of Stage DB (errors on Kong startup; 82 MB is still a fairly small DB size IMO):
postgres=# \l+
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
-------------+-------------+----------+---------+-------+------------------------------+---------+------------+--------------------------------------------
kong_stage2 | XXXXX | UTF8 | C | C | =Tc/XXXXX +| 82 MB | pg_default |
from kong.
schema_meta looks correct too for the stage db (all elements are up to 360 for the latest Kong version):
kong_stage2=# select * from schema_meta;
key | subsystem | last_executed | executed | pending
-------------+-----------------------+----------------+----------+---------
schema_meta | acl | 004_212_to_213 | {000_base_acl,002_130_to_140,003_200_to_210,004_212_to_213} | {}
schema_meta | jwt | 003_200_to_210 | {000_base_jwt,002_130_to_140,003_200_to_210} | {}
schema_meta | core | 016_280_to_300 | {000_base,003_100_to_110,004_110_to_120,005_120_to_130,006_130_to_140,007_140_to_150,008_150_to_200,009_200_to_210,010_210_to_211,011_212_to_213,012_213_to_220,013_220_to_230,014_230_to_270,015_270_to_280,017_300_to_310,018_310_to_320,019_320_to_330,020_330_to_340,021_340_to_350,022_350_to_360,023_360_to_370,016_280_to_300} | {}
schema_meta | acme | 003_350_to_360 | {000_base_acme,001_280_to_300,002_320_to_330,003_350_to_360} | {}
schema_meta | ai-proxy | 001_360_to_370 | {001_360_to_370} | {}
schema_meta | basic-auth | 003_200_to_210 | {000_base_basic_auth,002_130_to_140,003_200_to_210} | {}
schema_meta | bot-detection | 001_200_to_210 | {001_200_to_210} | {}
schema_meta | hmac-auth | 003_200_to_210 | {000_base_hmac_auth,002_130_to_140,003_200_to_210} | {}
schema_meta | http-log | 001_280_to_300 | {001_280_to_300} | {}
schema_meta | ip-restriction | 001_200_to_210 | {001_200_to_210} | {}
schema_meta | key-auth | 003_200_to_210 | {000_base_key_auth,002_130_to_140,004_320_to_330,003_200_to_210} | {}
schema_meta | opentelemetry | 001_331_to_332 | {001_331_to_332} | {}
schema_meta | post-function | 001_280_to_300 | {001_280_to_300} | {}
schema_meta | pre-function | 001_280_to_300 | {001_280_to_300} | {}
schema_meta | oauth2 | 007_320_to_330 | {000_base_oauth2,003_130_to_140,004_200_to_210,005_210_to_211,006_320_to_330,007_320_to_330} | {}
schema_meta | session | 002_320_to_330 | {000_base_session,001_add_ttl_index,002_320_to_330} |
schema_meta | rate-limiting | 006_350_to_360 | {000_base_rate_limiting,003_10_to_112,004_200_to_210,005_320_to_330,006_350_to_360} | {}
schema_meta | response-ratelimiting | 001_350_to_360 | {000_base_response_rate_limiting,001_350_to_360} | {}
(18 rows)
from kong.
Plugins takes the cake for largest table, as expected:
kong_stage2=# \dt+
List of relations
Schema | Name | Type | Owner | Persistence | Access method | Size | Description
--------+-------------------------------+-------+-------------+-------------+---------------+------------+-------------
public | acls | table | XXXX | permanent | heap | 9232 kB |
public | acme_storage | table | XXXX | permanent | heap | 8192 bytes |
public | basicauth_credentials | table | XXXX | permanent | heap | 8192 bytes |
public | ca_certificates | table | XXXX | permanent | heap | 8192 bytes |
public | certificates | table | XXXX | permanent | heap | 96 kB |
public | cluster_events | table | XXXX | permanent | heap | 1136 kB |
public | clustering_data_planes | table | XXXX | permanent | heap | 8192 bytes |
public | clustering_rpc_requests | table | XXXX | permanent | heap | 8192 bytes |
public | consumers | table | XXXX | permanent | heap | 272 kB |
public | filter_chains | table | XXXX | permanent | heap | 8192 bytes |
public | hmacauth_credentials | table | XXXX | permanent | heap | 8192 bytes |
public | jwt_secrets | table | XXXX | permanent | heap | 488 kB |
public | key_sets | table | XXXX | permanent | heap | 8192 bytes |
public | keyauth_credentials | table | XXXX | permanent | heap | 8192 bytes |
public | keys | table | XXXX | permanent | heap | 8192 bytes |
public | locks | table | XXXX | permanent | heap | 48 kB |
public | oauth2_authorization_codes | table | XXXX | permanent | heap | 8192 bytes |
public | oauth2_credentials | table | XXXX | permanent | heap | 664 kB |
public | oauth2_tokens | table | XXXX | permanent | heap | 8192 bytes |
public | parameters | table | XXXX | permanent | heap | 48 kB |
public | plugins | table | XXXX | permanent | heap | 19 MB |
public | ratelimiting_metrics | table | XXXX | permanent | heap | 8192 bytes |
public | response_ratelimiting_metrics | table | XXXX | permanent | heap | 8192 bytes |
public | routes | table | XXXX | permanent | heap | 2760 kB |
public | schema_meta | table | XXXX | permanent | heap | 56 kB |
public | services | table | XXXX | permanent | heap | 2832 kB |
public | sessions | table | XXXX | permanent | heap | 8192 bytes |
public | sm_vaults | table | XXXX | permanent | heap | 8192 bytes |
public | snis | table | XXXX | permanent | heap | 48 kB |
public | tags | table | XXXX | permanent | heap | 5464 kB |
public | targets | table | XXXX | permanent | heap | 8192 bytes |
public | upstreams | table | XXXX | permanent | heap | 256 kB |
public | workspaces | table | XXXX | permanent | heap | 48 kB |
(33 rows)
kong_stage2=# select count(*) from plugins;
count
-------
28818
Maybe as a next step I will take the kong_stage2 database, clone it, reduce the resources in the tables by about half (cut services/routes/plugins etc.), and launch Kong against that to see if the startup errors go away. If they persist, keep halving the resource footprint until they stop, if they stop at all. If I remove all resources and it still errors out, that kills my hypothesis that it has to do with the amount of Kong resources in the db tables.
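The clone-and-halve plan could look something like the following. It only prints the commands (review them before piping to `sh` on the DB host); the db names are the ones from this thread, and it assumes Kong's foreign keys cascade service deletes down to routes/plugins. If they don't in your schema, delete routes and plugins first.

```shell
# Hypothetical sketch: clone the stage db, then delete a random half of the
# services (and, assuming ON DELETE CASCADE, their routes/plugins) in the clone.
# createdb -T requires no active connections on the template database.
halve_plan() {
  cat <<'EOF'
createdb -T kong_stage2 kong_stage2_half
psql -d kong_stage2_half -c "DELETE FROM services WHERE id IN (SELECT id FROM services ORDER BY random() LIMIT (SELECT count(*) / 2 FROM services))"
EOF
}

halve_plan
```

Re-running the psql step against successive clones gives the halving sequence without ever touching the real stage database.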
from kong.
Recently we upgraded the lua-resty-events library to 0.3.0 (https://github.com/Kong/lua-resty-events/releases/tag/0.3.0), but it is not yet included in 3.7.x. This version fixed some potential bugs; could you try it with the master branch? Thanks.
from kong.
@chronolaw Is the latest on luarocks (so I could install the newer version after the Kong build with an extra luarocks install line)? Or can I force a sed command during the Kong bazel build steps to pull the newer version, and would it work on top of 3.7.1 out of the box, without further changes needed?
from kong.
I think this issue has no relationship with db or memory size; it seems that the events broker (lua-resty-events) is not ready when the events worker tries to connect.
lua-resty-events is not a luarocks library, so perhaps we should build the latest Kong with bazel.
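For the sed approach mentioned above, one option might be bumping the pinned lua-resty-events revision before the bazel build. The `.requirements` file and the `LUA_RESTY_EVENTS` key here are assumptions from my reading of the Kong source tree (verify both in your checkout), and `sha_of_0.3.0_tag` is a placeholder for the commit the 0.3.0 tag points at. The demo below runs against a stand-in file; on a real build you would point it at the repo's `.requirements`.

```shell
# Hypothetical pin-bump sketch. The key name and file are assumptions;
# demo uses a stand-in file instead of a real Kong checkout.
printf 'RESTY_OPENSSL_VERSION=3.x\nLUA_RESTY_EVENTS=0123abc\n' > /tmp/requirements.demo

# Rewrite the pin (placeholder sha), writing to a new file to stay portable.
sed 's/^LUA_RESTY_EVENTS=.*/LUA_RESTY_EVENTS=sha_of_0.3.0_tag/' \
  /tmp/requirements.demo > /tmp/requirements.patched

# Verify the pin before kicking off the bazel build.
grep LUA_RESTY_EVENTS /tmp/requirements.patched
```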
from kong.
@chronolaw Looks like the source ends up in:
/usr/local/openresty/lualib/resty/events/*
That's the path in a traditional Kong bazel-built image I have.
I can just add a step to the image that drops in the code from tag https://github.com/Kong/lua-resty-events/tree/0.3.0, overwriting the files in that path before Kong startup, and report back to ya with my startup logs.
Edit - I put all the events files into a lua-resty-events patches folder and copy them over into the events dir after cleaning it:
# Patch the events library to 0.3.0, delete original src files.
RUN rm -rf /usr/local/openresty/lualib/resty/events
COPY patches/lua-resty-events /usr/local/openresty/lualib/resty/events
from kong.
Patch files added to dev with the smaller db; initial runtime looks good here:
2024-06-26T16:11:48.650551282-05:00 nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:10
2024-06-26T16:11:48.651024522-05:00 nginx: [warn] load balancing method redefined in /usr/local/kong/nginx.conf:129
2024-06-26T16:11:49.360159772-05:00 2024/06/26 21:11:48 [notice] 30#0: [lua] init.lua:788: init(): [request-debug] token for request debugging: 22fce1a3-7021-4054-bc31-283ab226694a
2024-06-26T16:11:49.394999758-05:00 2024/06/26 21:11:48 [notice] 30#0: using the "epoll" event method
2024-06-26T16:11:49.395065492-05:00 2024/06/26 21:11:48 [notice] 30#0: openresty/1.25.3.1
2024-06-26T16:11:49.395069540-05:00 2024/06/26 21:11:48 [notice] 30#0: OS: Linux 4.18.0-553.5.1.el8_10.x86_64
2024-06-26T16:11:49.395073507-05:00 2024/06/26 21:11:48 [notice] 30#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024-06-26T16:11:49.395097222-05:00 2024/06/26 21:11:48 [notice] 30#0: start worker processes
2024-06-26T16:11:49.397169630-05:00 2024/06/26 21:11:48 [notice] 30#0: start worker process 49
2024-06-26T16:11:49.398037252-05:00 2024/06/26 21:11:48 [notice] 30#0: start worker process 50
2024-06-26T16:11:49.399031642-05:00 2024/06/26 21:11:48 [notice] 30#0: start worker process 51
2024-06-26T16:11:49.420188119-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] broker.lua:218: init(): event broker is ready to accept connections on worker #0, context: init_worker_by_lua*
2024-06-26T16:11:49.447407811-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'services' into the core_cache..., context: init_worker_by_lua*
2024-06-26T16:11:49.484335314-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'services' into the core_cache (in 36ms), context: init_worker_by_lua*
2024-06-26T16:11:49.484370089-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'consumers' into the cache..., context: init_worker_by_lua*
2024-06-26T16:11:49.500464404-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'consumers' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-26T16:11:49.500484572-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'acls' into the cache..., context: init_worker_by_lua*
2024-06-26T16:11:49.763717951-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'acls' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-26T16:11:49.763739081-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'certificates' into the core_cache..., context: init_worker_by_lua*
2024-06-26T16:11:49.777581239-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'certificates' into the core_cache (in 0ms), context: init_worker_by_lua*
2024-06-26T16:11:49.777612247-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'jwt_secrets' into the cache..., context: init_worker_by_lua*
2024-06-26T16:11:49.790418587-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'jwt_secrets' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-26T16:11:49.790455116-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'oauth2_credentials' into the cache..., context: init_worker_by_lua*
2024-06-26T16:11:49.804057693-05:00 2024/06/26 21:11:49 [notice] 49#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'oauth2_credentials' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-26T16:11:49.830575595-05:00 2024/06/26 21:11:49 [notice] 49#0: *1316 [lua] warmup.lua:47: warming up DNS entries ..., context: ngx.timer
2024-06-26T16:11:49.837250912-05:00 2024/06/26 21:11:49 [notice] 51#0: *5 [lua] worker.lua:286: communicate(): worker #2 is ready to accept events from unix:/usr/local/kong/worker_events.sock, context: ngx.timer
2024-06-26T16:11:49.837290336-05:00 2024/06/26 21:11:49 [notice] 50#0: *658 [lua] worker.lua:286: communicate(): worker #1 is ready to accept events from unix:/usr/local/kong/worker_events.sock, context: ngx.timer
2024-06-26T16:11:49.837608975-05:00 2024/06/26 21:11:49 [notice] 49#0: *1310 [lua] broker.lua:270: run(): worker #2 connected to events broker (worker pid: 51), client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:11:49.837620176-05:00 2024/06/26 21:11:49 [notice] 49#0: *1311 [lua] broker.lua:270: run(): worker #1 connected to events broker (worker pid: 50), client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:11:49.837696791-05:00 2024/06/26 21:11:49 [notice] 49#0: *1313 [lua] worker.lua:286: communicate(): worker #0 is ready to accept events from unix:/usr/local/kong/worker_events.sock, context: ngx.timer
2024-06-26T16:11:49.837853305-05:00 2024/06/26 21:11:49 [notice] 49#0: *1958 [lua] broker.lua:270: run(): worker #0 connected to events broker (worker pid: 49), client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:12:14.444972971-05:00 2024/06/26 21:12:14 [notice] 49#0: *1316 [lua] warmup.lua:81: finished warming up DNS entries' into the cache (in 24615ms), context: ngx.timer
Now let me try it against the bigger database with more resources, where I saw the problem in the first place.
from kong.
The same issues persist even with the 0.3.0 patch files dropped in, against the bigger DB (stage database) and resources:
2024-06-26T16:33:36.307545718-05:00 nginx: [warn] load balancing method redefined in /usr/local/kong/nginx.conf:129
2024-06-26T16:33:38.088352910-05:00 2024/06/26 21:33:36 [notice] 30#0: [lua] init.lua:788: init(): [request-debug] token for request debugging: 54ee0f5d-3690-492d-8749-0437d9d3da7e
2024-06-26T16:33:38.226529883-05:00 2024/06/26 21:33:36 [notice] 30#0: using the "epoll" event method
2024-06-26T16:33:38.226549148-05:00 2024/06/26 21:33:36 [notice] 30#0: openresty/1.25.3.1
2024-06-26T16:33:38.226551984-05:00 2024/06/26 21:33:36 [notice] 30#0: OS: Linux 4.18.0-553.5.1.el8_10.x86_64
2024-06-26T16:33:38.226554719-05:00 2024/06/26 21:33:36 [notice] 30#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024-06-26T16:33:38.226575297-05:00 2024/06/26 21:33:36 [notice] 30#0: start worker processes
2024-06-26T16:33:38.229627113-05:00 2024/06/26 21:33:36 [notice] 30#0: start worker process 48
2024-06-26T16:33:38.231224469-05:00 2024/06/26 21:33:36 [notice] 30#0: start worker process 49
2024-06-26T16:33:38.232770851-05:00 2024/06/26 21:33:36 [notice] 30#0: start worker process 50
2024-06-26T16:33:38.234460179-05:00 2024/06/26 21:33:36 [notice] 30#0: start worker process 51
2024-06-26T16:33:38.268040005-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] broker.lua:218: init(): event broker is ready to accept connections on worker #0, context: init_worker_by_lua*
2024-06-26T16:33:38.344812017-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'services' into the core_cache..., context: init_worker_by_lua*
2024-06-26T16:33:38.638481995-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'services' into the core_cache (in 293ms), context: init_worker_by_lua*
2024-06-26T16:33:38.638498075-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'consumers' into the cache..., context: init_worker_by_lua*
2024-06-26T16:33:38.685365008-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'consumers' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-26T16:33:38.685381779-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'acls' into the cache..., context: init_worker_by_lua*
2024-06-26T16:33:43.446588073-05:00 2024/06/26 21:33:43 [error] 50#0: *6 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:33:43.452284416-05:00 2024/06/26 21:33:43 [error] 49#0: *276 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:33:43.476356347-05:00 2024/06/26 21:33:43 [error] 51#0: *1301 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:33:48.468408174-05:00 2024/06/26 21:33:48 [error] 49#0: *1964 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:33:48.494021256-05:00 2024/06/26 21:33:48 [error] 51#0: *1966 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:33:48.497310485-05:00 2024/06/26 21:33:48 [error] 50#0: *1968 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:33:53.485180350-05:00 2024/06/26 21:33:53 [error] 49#0: *1970 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:33:53.525941146-05:00 2024/06/26 21:33:53 [error] 51#0: *1972 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:33:53.537179625-05:00 2024/06/26 21:33:53 [error] 50#0: *1974 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:33:58.509082661-05:00 2024/06/26 21:33:58 [error] 49#0: *1976 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:33:58.539892536-05:00 2024/06/26 21:33:58 [error] 51#0: *1978 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:33:58.549879163-05:00 2024/06/26 21:33:58 [error] 50#0: *1980 [lua] worker.lua:270: communicate(): failed to connect: failed to receive response header: timeout, context: ngx.timer
2024-06-26T16:34:02.243400980-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'acls' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-26T16:34:02.243434763-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'certificates' into the core_cache..., context: init_worker_by_lua*
2024-06-26T16:34:02.258343491-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'certificates' into the core_cache (in 0ms), context: init_worker_by_lua*
2024-06-26T16:34:02.258370020-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'jwt_secrets' into the cache..., context: init_worker_by_lua*
2024-06-26T16:34:02.336967554-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'jwt_secrets' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-26T16:34:02.336997960-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:114: single_dao(): Preloading 'oauth2_credentials' into the cache..., context: init_worker_by_lua*
2024-06-26T16:34:02.427902024-05:00 2024/06/26 21:33:38 [notice] 48#0: *1 [lua] warmup.lua:163: single_dao(): finished preloading 'oauth2_credentials' into the cache (in 0ms), context: init_worker_by_lua*
2024-06-26T16:34:02.801755430-05:00 2024/06/26 21:34:02 [notice] 48#0: *2007 [lua] warmup.lua:47: warming up DNS entries ..., context: ngx.timer
2024-06-26T16:34:02.808827066-05:00 2024/06/26 21:34:02 [error] 48#0: *1991 [lua] broker.lua:238: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.808906674-05:00 2024/06/26 21:34:02 [error] 48#0: *1992 [lua] broker.lua:238: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.808939996-05:00 2024/06/26 21:34:02 [error] 48#0: *1993 [lua] broker.lua:238: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.808944024-05:00 2024/06/26 21:34:02 [error] 48#0: *1994 [lua] broker.lua:238: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.808958511-05:00 2024/06/26 21:34:02 [error] 48#0: *1995 [lua] broker.lua:238: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.808964943-05:00 2024/06/26 21:34:02 [error] 48#0: *1996 [lua] broker.lua:238: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.809024363-05:00 2024/06/26 21:34:02 [error] 48#0: *1997 [lua] broker.lua:238: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.809032008-05:00 2024/06/26 21:34:02 [error] 48#0: *1998 [lua] broker.lua:238: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.809046935-05:00 2024/06/26 21:34:02 [error] 48#0: *1999 [lua] broker.lua:238: run(): failed to init socket: failed to flush response header: nginx output filter error, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.809491594-05:00 2024/06/26 21:34:02 [notice] 51#0: *1984 [lua] worker.lua:286: communicate(): worker #3 is ready to accept events from unix:/usr/local/kong/worker_events.sock, context: ngx.timer
2024-06-26T16:34:02.809524215-05:00 2024/06/26 21:34:02 [notice] 49#0: *1982 [lua] worker.lua:286: communicate(): worker #1 is ready to accept events from unix:/usr/local/kong/worker_events.sock, context: ngx.timer
2024-06-26T16:34:02.809529154-05:00 2024/06/26 21:34:02 [notice] 50#0: *1986 [lua] worker.lua:286: communicate(): worker #2 is ready to accept events from unix:/usr/local/kong/worker_events.sock, context: ngx.timer
2024-06-26T16:34:02.809826257-05:00 2024/06/26 21:34:02 [notice] 48#0: *2001 [lua] broker.lua:270: run(): worker #3 connected to events broker (worker pid: 51), client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.810070032-05:00 2024/06/26 21:34:02 [notice] 48#0: *2002 [lua] broker.lua:270: run(): worker #2 connected to events broker (worker pid: 50), client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.810621239-05:00 2024/06/26 21:34:02 [notice] 48#0: *2000 [lua] broker.lua:270: run(): worker #1 connected to events broker (worker pid: 49), client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:02.810637489-05:00 2024/06/26 21:34:02 [notice] 48#0: *2004 [lua] worker.lua:286: communicate(): worker #0 is ready to accept events from unix:/usr/local/kong/worker_events.sock, context: ngx.timer
2024-06-26T16:34:02.810644612-05:00 2024/06/26 21:34:02 [notice] 48#0: *2649 [lua] broker.lua:270: run(): worker #0 connected to events broker (worker pid: 48), client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:07.808752272-05:00 2024/06/26 21:34:07 [error] 48#0: *1988 [lua] broker.lua:238: run(): failed to init socket: failed to read worker info: failed to receive the header bytes: timeout, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:07.808826561-05:00 2024/06/26 21:34:07 [error] 48#0: *1989 [lua] broker.lua:238: run(): failed to init socket: failed to read worker info: failed to receive the header bytes: timeout, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
2024-06-26T16:34:07.808854232-05:00 2024/06/26 21:34:07 [error] 48#0: *1990 [lua] broker.lua:238: run(): failed to init socket: failed to read worker info: failed to receive the header bytes: timeout, client: unix:, server: kong_worker_events, request: "GET / HTTP/1.1", host: "localhost"
After those error logs, things seem to run normally.
from kong.
Also, if y'all want me to hack any debug statements into the code to get a better understanding of what's happening, I can. Just give me the lines you're curious about and where to drop them in.
I'm also curious whether the consensus is that this is mostly harmless on startup and I'm OK to take these errors into a production environment, or whether I should hold off. Normally, when I see [error] logs and miscommunications early on in something like an event distribution library managing all of Kong's intra-worker comms, I'd consider that a big issue. But if it's just an early timing thing, and Kong is healthy right after those logs stop appearing, then I suppose there's no harm taking it to prod if it's only a minor startup comms issue of things not being ready when called. Would like y'all's opinion there, though.
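One rough way to sanity-check that claim (my own sketch, not Kong tooling): count how many socket errors appear *after* the last "connected to events broker" notice in the logs. The pod name in the usage comment is a placeholder.

```shell
# errors_after_startup: reads Kong logs on stdin and prints how many
# socket errors occur after the last "connected to events broker" notice.
# A count of zero suggests the errors are confined to the startup window.
errors_after_startup() {
  awk '/connected to events broker/ { last = NR }
       /failed to init socket|failed to connect/ { err_lines[NR] = 1 }
       END { after = 0
             for (n in err_lines) if (n + 0 > last) after++
             print after }'
}

# Example usage (pod name is a placeholder):
#   kubectl logs <kong-pod> | errors_after_startup
```

If that count stays at zero on a pod that has been up for a while, the errors only occurred before the workers finished connecting to the events broker.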
A few other ideas I may try to see if they help stop the error logs:
- Stop doing health checks on the container that force a DB PUT to an `upstreams` resource early on in my liveness/readiness probes. Maybe something about Admin API calls early on in a larger Kong deployment can cause issues there.
- ... any other ideas I'll add here to this list.
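For the first idea, a sketch of what that probe change might look like in a Kubernetes pod spec. This is an assumption about the setup, not the actual chart values: the port and delays are placeholders, and it swaps the DB-writing PUT for a read-only `GET /status` against the Admin API so probes don't mutate state during warmup.

```yaml
# Hypothetical probe tweak: give Kong time to finish DB preload and
# router rebuild before any probe traffic hits the Admin API, and use
# a read-only status check instead of a PUT to an upstreams resource.
readinessProbe:
  httpGet:
    path: /status
    port: 8001          # Admin API port (placeholder)
  initialDelaySeconds: 60
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /status
    port: 8001
  initialDelaySeconds: 120
  periodSeconds: 30
```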
from kong.