
router's Introduction

openshift-router

This repository contains the OpenShift routers for NGINX, HAProxy, and F5. They read Route objects out of the OpenShift API and allow ingress to services. HAProxy is currently the reference implementation. See the details in each router image.

These images are managed by the cluster-ingress-operator in an OpenShift 4.0+ cluster.

The template router code (openshift-router) is generic and creates config files on disk based on the state of the cluster. The process launches proxies as children and triggers reloads as necessary after new config has been written. The standard logic for handling conflicting routes, supporting wildcards, reporting status back to the Route object, and exposing metrics lives in this process.
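The write-then-reload cycle described above can be sketched in shell. Everything here is illustrative, not the router's actual implementation: render_config stands in for the Go template evaluation, and the two-iteration loop stands in for the cluster watch.

```shell
#!/bin/sh
# Minimal sketch of the write-then-reload cycle (all names/paths illustrative).
render_config() { echo "backend be_http:demo:route"; }  # stand-in for template output

CONFIG=$(mktemp)
OLD_SUM=""
RELOADS=0
for event in 1 2; do                      # two watch events, identical cluster state
  render_config > "$CONFIG.next"
  NEW_SUM=$(sha256sum "$CONFIG.next" | cut -d' ' -f1)
  if [ "$NEW_SUM" != "$OLD_SUM" ]; then   # only reload when the config changed
    mv "$CONFIG.next" "$CONFIG"
    OLD_SUM="$NEW_SUM"
    RELOADS=$((RELOADS + 1))              # the real router execs its reload script here
  fi
done
echo "reloads: $RELOADS"
```

Because the second event renders identical config, only one reload fires; this checksum-and-skip pattern is why frequent no-op events do not thrash the proxy.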

Deploying to Kubernetes

The OpenShift router can be run against a vanilla Kubernetes cluster, although some of the security protections present in the API are not possible with CRDs.

To deploy, clone this repository and then run:

$ kubectl create -f deploy/

You will then be able to create a Route that points to a service on your cluster and the router pod will forward your traffic from port 80 to your service endpoints. You can run the example like:

$ kubectl create -f example/

You can then access the router via the node it is scheduled on. If you're running locally on minikube or a similar solution, just run:

$ curl http://localhost -H "Host: example.local"

to see your route and:

$ kubectl get routes

to see details of your routes.
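For reference, a minimal Route of the kind the example creates might look like the following. The names (example) and port are assumptions for illustration, not necessarily what example/ contains; only the host matches the curl command above.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example          # illustrative name
spec:
  host: example.local    # matches the Host header used with curl
  to:
    kind: Service
    name: example        # assumed service name; adjust to your cluster
  port:
    targetPort: 8080     # assumed container port
```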


router's Issues

The vulnerability CVE-2022-1677 has been fixed, but no specific tag denotes the patched version.

Hello, we are a team researching the dependency management mechanism of Golang. During our analysis, we came across your project and noticed that you have fixed a vulnerability (Snyk references, CVE: CVE-2022-1677, CWE: CWE-287, fix commit id: 383d0f6). However, we observed that you have not tagged the fixing commit or its subsequent commits. As a result, users are unable to obtain the patched version through the Go tool `go list`.

We kindly request your assistance in addressing this issue. Tagging the fixing commit or its subsequent commits will greatly benefit users who rely on your project and are seeking the patched version to address the vulnerability.

We greatly appreciate your attention to this matter and collaboration in resolving it. Thank you for your time and for your valuable contributions to our research.
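The fix being requested is just a tag on (or after) the fixing commit, which Go module tooling can then resolve as a version. A self-contained sketch follows; the temporary repo, commit message, and the tag name v0.0.1 are illustrative, not the project's real versioning.

```shell
#!/bin/sh
set -e
# Create a throwaway repo standing in for the project (illustrative only).
REPO=$(mktemp -d)
cd "$REPO"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "fix: patch vulnerability"
# Tag the fixing commit so module tooling can discover a patched version.
git tag v0.0.1
TAGGED=$(git tag --points-at HEAD)
echo "tagged: $TAGGED"
```

After pushing such a tag (git push origin v0.0.1), consumers can pin the patched version in their go.mod instead of a pseudo-version.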

Router working on Docker Desktop

Hello,

I have tried to create the router object on my local Kubernetes cluster (deployed with Docker Desktop).
Everything seems to work: the pod started without errors and the route object was created successfully.

But I can't access it with the curl command curl http://localhost -H "Host: example.local".

Has anyone succeeded to use router object with Kubernetes on Docker Desktop?

configure a permanent redirect (301) from one host to another via a route

Hunting for ways to configure a route (OCP 4.10, HAProxy) to simply 301-redirect to a new hostname.

HAProxy seems to support this with something like:

http-request redirect prefix http://api.domain.com code 301 if { hdr(host) -i 123.domain.com }
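In a raw HAProxy config, that directive would sit inside a frontend section, roughly as below. The section and host names are illustrative, and the stock router template offers no supported hook to inject this:

```
frontend public
  bind :80
  # 301-redirect one host to another, preserving the request path
  http-request redirect prefix http://api.domain.com code 301 if { hdr(host) -i 123.domain.com }
```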

We do this with ingress-nginx via an annotation like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-redirect
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/permanent-redirect: https://new-domain.dev/foo/bar
spec:
  ingressClassName: external
  rules:
    - host: dev.domain.dev
      http:
        paths:
          - backend:
              service:
                name: dev
                port:
                  number: 80
            pathType: ImplementationSpecific

Is there really no way to achieve the same thing with an OpenShift Route?

As far as I can tell, there also isn't a way to pass a server snippet similar to what the NGINX ingress controller allows.

Container fails health checks when deploying an OpenShift cluster

Description

The service router fails to come up properly when deploying a fresh OpenShift 3.11 cluster.

Steps To Reproduce
  1. Run through Ansible installation playbooks manually in the order specified in table 1 of this section: https://docs.openshift.com/container-platform/3.11/install/running_install.html#advanced-retrying-installation
  2. Keep an eye on oc get all after running playbooks/openshift-hosted/config.yml
  3. Note that the router pod does not start up correctly and enters an error state.
Expected Results

The router pod comes up without issue.

Observed Results

The router pod consistently fails the health check and eventually enters an error state.

oc logs on the crashing pod only shows this:

I0212 18:53:59.815075       1 template.go:297] Starting template router (v3.11.0+d0c29df-98)
I0212 18:53:59.817021       1 metrics.go:147] Router health and metrics port listening at 0.0.0.0:1936 on HTTP and HTTPS
E0212 18:53:59.823534       1 haproxy.go:392] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory
I0212 18:54:29.279496       1 router.go:252] Router is including routes in all namespaces
Additional Information

I was able to get a shell into the container briefly before OpenShift restarted it, and I noticed that the haproxy process was running happily, so it may be that only the health check is failing.

Operating system:

# cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core)

Router docker image:

docker.io/openshift/origin-haproxy-router    v3.11.0             c5b0420ab52a        3 days ago          407 MB

Dynamic config manager ignores route targetPort

Consider the following scenario with router 3.11:

  1. a service defines multiple ports (in the example below: 8080 and 8443)
  2. a route for that service sets targetPort so one of the service ports is preferred (in the example below: 8443)
  3. dynamic configuration manager is enabled

The initial HAProxy configuration after a full configuration reload is correct: it only includes the one port identified by targetPort. This can be verified using the HAProxy admin socket (starting with a single pod backing the service):

> show servers state be_tcp:test-haproxy-router:passthrough
# be_id be_name srv_id srv_name srv_addr srv_op_state srv_admin_state srv_uweight srv_iweight srv_time_since_last_change srv_check_status srv_check_result srv_check_health srv_check_state srv_agent_state bk_f_forced_id srv_f_forced_id srv_fqdn srv_port
44 be_tcp:test-haproxy-router:passthrough 1 pod:server-ssl-1-5m5n8:server-ssl:10.76.32.172:8443 10.76.32.172 2 0 256 256 27 6 3 4 6 0 0 0 - 8443
44 be_tcp:test-haproxy-router:passthrough 2 _dynamic-pod-1 172.4.0.4 0 5 1 1 27 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 3 _dynamic-pod-2 172.4.0.4 0 5 1 1 27 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 4 _dynamic-pod-3 172.4.0.4 0 5 1 1 27 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 5 _dynamic-pod-4 172.4.0.4 0 5 1 1 27 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 6 _dynamic-pod-5 172.4.0.4 0 5 1 1 27 1 0 0 14 0 0 0 - 8765

Let's now scale the DeploymentConfig backing the multi-port service to 2 pods.

Expected behavior

The dynamic list of HAProxy backend servers is updated with a single new endpoint pointing to the new pod and the route's targetPort.

Actual behavior

The dynamic list of HAProxy backend servers is updated with servers for each port of the service, ignoring targetPort.

> show servers state be_tcp:test-haproxy-router:passthrough
# be_id be_name srv_id srv_name srv_addr srv_op_state srv_admin_state srv_uweight srv_iweight srv_time_since_last_change srv_check_status srv_check_result srv_check_health srv_check_state srv_agent_state bk_f_forced_id srv_f_forced_id srv_fqdn srv_port
44 be_tcp:test-haproxy-router:passthrough 1 pod:server-ssl-1-5m5n8:server-ssl:10.76.32.172:8443 10.76.32.172 2 0 256 256 199 6 3 4 6 0 0 0 - 8443
44 be_tcp:test-haproxy-router:passthrough 2 _dynamic-pod-1 10.76.19.57 0 4 1 1 2 8 2 0 6 0 0 0 - 8443
44 be_tcp:test-haproxy-router:passthrough 3 _dynamic-pod-2 10.76.19.57 0 4 1 1 2 8 2 0 6 0 0 0 - 8080
44 be_tcp:test-haproxy-router:passthrough 4 _dynamic-pod-3 172.4.0.4 0 5 1 1 199 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 5 _dynamic-pod-4 172.4.0.4 0 5 1 1 199 1 0 0 14 0 0 0 - 8765
44 be_tcp:test-haproxy-router:passthrough 6 _dynamic-pod-5 172.4.0.4 0 5 1 1 199 1 0 0 14 0 0 0 - 8765

It looks like this is due to the dynamic config manager directly using service.EndpointTable here and here instead of filtering ports with the template's endpointsForAlias method.

I am testing a fix and will submit a PR shortly.

[Question] No longer under active development? Do you have alternative plans for the HAProxy router?

Hi everyone, I've been monitoring the development of openshift/router for several months, and it looks like OpenShift doesn't have any plans to develop the HAProxy router further. It feels like an abandoned project. This is a list of features many Ingress/Gateway API controllers support that the OpenShift HAProxy router does not:

  • (True) hot reload: in a mid-size or large cluster, HAProxy reloads almost every 5 seconds and accumulates many haproxy processes (the intervals can be adjusted, but the result is still poor, especially with many websocket/long-running connections)
  • Data Plane API v2 is not supported (it could be used for hot reload)
  • HTTP/2 support is still limited (edge termination, custom certificates, gRPC, ...)
  • Less visibility into incoming traffic (e.g. HAProxy only tracks the response time of the last 1024 requests; no histogram metrics, ...), which requires a custom-made log parser
  • Tracing is not supported
  • Mirroring/traffic-shadowing and canary releases are not supported
  • Even with Ingress/v1 and Gateway API v1beta1 available, the Route object is still the first-class object for ingress traffic on the OpenShift platform, and there is no console UI for the other official objects
  • Limited integration with third-party tools (Argo Rollouts, Flux, ...)
  • No support for global rate limiting
  • No support for external authentication and authorization
  • No support for Basic and JWT authentication
  • Uses an older version of HAProxy (2.2)
  • No plan to support HTTP/3/QUIC (added in HAProxy 2.6+)
  • No routing based on header/query string
  • No modifying/adding/removing of headers
  • No Gateway API support
  • No clear roadmap
  • ...

With that said, my question is: does this project have a future, or is it abandoned, with OpenShift working on or planning alternative options?

Missing openshift_default backend servers?

For a few of the frontends, it seems the default_backend is openshift_default. However, it looks like there are no servers listed under the openshift_default backend heading. This leads to 503 errors under certain conditions (if a request falls through the ACL rules to the default_backend).

For example, if a non-SNI SSL request comes through and you have passthrough termination, it will fall through to the default_backend and the server will return a 503.

Similar to what is described here: openshift/origin#23223.

Missing haproxy24-2.4.1-1.el8.x86_64.rpm for hacking

I followed the steps in the HACKING.md guide.

Unfortunately, the build failed with the following error message:

[MIRROR] haproxy24-2.4.1-1.el8.x86_64.rpm: Status code: 404 for https://github.com/frobware/haproxy-hacks/raw/master/RPMs/haproxy24-2.4.1-1.el8.x86_64.rpm (IP: 140.82.121.3)
[MIRROR] haproxy24-2.4.1-1.el8.x86_64.rpm: Status code: 404 for https://github.com/frobware/haproxy-hacks/raw/master/RPMs/haproxy24-2.4.1-1.el8.x86_64.rpm (IP: 140.82.121.3)
[MIRROR] haproxy24-2.4.1-1.el8.x86_64.rpm: Status code: 404 for https://github.com/frobware/haproxy-hacks/raw/master/RPMs/haproxy24-2.4.1-1.el8.x86_64.rpm (IP: 140.82.121.3)
[MIRROR] haproxy24-2.4.1-1.el8.x86_64.rpm: Status code: 404 for https://github.com/frobware/haproxy-hacks/raw/master/RPMs/haproxy24-2.4.1-1.el8.x86_64.rpm (IP: 140.82.121.3)
[FAILED] haproxy24-2.4.1-1.el8.x86_64.rpm: Status code: 404 for https://github.com/frobware/haproxy-hacks/raw/master/RPMs/haproxy24-2.4.1-1.el8.x86_64.rpm (IP: 140.82.121.3)

I tried to download the listed file, but I got an error message:

The 'frobware/haproxy-builds' repository doesn't contain the 'RPMs/haproxy24-2.4.1-1.el8.x86_64.rpm' path in 'master'.

As I see it, this relates to the RPMs folder having been moved:

Moved to https://github.com/frobware/haproxy-builds/tree/master/RPMs

Unfortunately, this file is missing from the mentioned repository. As far as I can see, it is impossible to complete the HACKING.md build without this file.

Graceful Shutdown not happening on SIGTERM

The HAProxy documentation is a bit misleading about its graceful shutdown procedures. Section 4 of haproxy.org's management.txt discusses SIGTERM, SIGUSR1, -sf, and -st.

HAProxy supports a graceful and a hard stop.

But it then muddies the discussion in the following paragraphs, which appear to be written from a systemd or init point of view.

In Kubernetes/OpenShift, only SIGTERM and SIGKILL are sent to containers (for reference, see the Kubernetes pod lifecycle documentation). SIGUSR1 and the -sf or -st arguments are not used when a pod is gracefully terminated.

When re-deploying a router, the observed behavior matches the SIGTERM behavior as described in the first paragraph (when the SIGTERM signal is sent to the haproxy process, it immediately quits and all established connections are closed).

Adding a lifecycle preStop command that runs kill -USR1 $(pidof haproxy); while killall -0 haproxy; do sleep 1; done changes the behavior of the router to match HAProxy's "graceful stop" behavior.
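As a Pod spec fragment, the workaround described above might look like the following sketch. The container name and image are placeholders; only the preStop command is taken from the report:

```yaml
spec:
  containers:
    - name: router                                    # placeholder container name
      image: openshift/origin-haproxy-router:v3.11    # placeholder image
      lifecycle:
        preStop:
          exec:
            command:
              - /bin/sh
              - -c
              # Ask HAProxy for a graceful stop, then wait until every
              # haproxy process has exited before the container stops.
              - kill -USR1 $(pidof haproxy); while killall -0 haproxy; do sleep 1; done
```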

gosec findings

Hello!

I just ran gosec to statically scan the Go source code in branch release-4.3 and got multiple errors. My question is: do you plan to run gosec during the build or before releases?

gosec ./... 
[gosec] 2020/10/27 14:47:47 Including rules: default
[gosec] 2020/10/27 14:47:47 Excluding rules: default
[gosec] 2020/10/27 14:47:47 Import directory: /openshift-router/pkg/version
[gosec] 2020/10/27 14:47:47 Checking package: version
[gosec] 2020/10/27 14:47:47 Checking file: /openshift-router/pkg/version/version.go
[gosec] 2020/10/27 14:47:47 Import directory: /openshift-router/pkg/cmd/infra/router
[gosec] 2020/10/27 14:48:00 Checking package: router
[gosec] 2020/10/27 14:48:00 Checking file: /openshift-router/pkg/cmd/infra/router/clientcmd.go
[gosec] 2020/10/27 14:48:00 Checking file: /openshift-router/pkg/cmd/infra/router/router.go
[gosec] 2020/10/27 14:48:00 Checking file: /openshift-router/pkg/cmd/infra/router/template.go
[gosec] 2020/10/27 14:48:00 Import directory: /openshift-router/pkg/router/controller/factory
[gosec] 2020/10/27 14:48:05 Checking package: factory
[gosec] 2020/10/27 14:48:05 Checking file: /openshift-router/pkg/router/controller/factory/doc.go
[gosec] 2020/10/27 14:48:05 Checking file: /openshift-router/pkg/router/controller/factory/factory.go
[gosec] 2020/10/27 14:48:05 Import directory: /openshift-router/pkg/router/metrics
[gosec] 2020/10/27 14:48:06 Checking package: metrics
[gosec] 2020/10/27 14:48:06 Checking file: /openshift-router/pkg/router/metrics/health.go
[gosec] 2020/10/27 14:48:06 Checking file: /openshift-router/pkg/router/metrics/metrics.go
[gosec] 2020/10/27 14:48:06 Import directory: /openshift-router/pkg/router/template/configmanager/haproxy
[gosec] 2020/10/27 14:48:06 Checking package: haproxy
[gosec] 2020/10/27 14:48:06 Checking file: /openshift-router/pkg/router/template/configmanager/haproxy/backend.go
[gosec] 2020/10/27 14:48:06 Checking file: /openshift-router/pkg/router/template/configmanager/haproxy/blueprint_plugin.go
[gosec] 2020/10/27 14:48:06 Checking file: /openshift-router/pkg/router/template/configmanager/haproxy/client.go
[gosec] 2020/10/27 14:48:06 Checking file: /openshift-router/pkg/router/template/configmanager/haproxy/converter.go
[gosec] 2020/10/27 14:48:06 Checking file: /openshift-router/pkg/router/template/configmanager/haproxy/manager.go
[gosec] 2020/10/27 14:48:06 Checking file: /openshift-router/pkg/router/template/configmanager/haproxy/map.go
[gosec] 2020/10/27 14:48:06 Import directory: /openshift-router/pkg/router/writerlease
[gosec] 2020/10/27 14:48:07 Checking package: writerlease
[gosec] 2020/10/27 14:48:07 Checking file: /openshift-router/pkg/router/writerlease/writerlease.go
[gosec] 2020/10/27 14:48:07 Import directory: /openshift-router/pkg/router/metrics/probehttp
[gosec] 2020/10/27 14:48:07 Checking package: probehttp
[gosec] 2020/10/27 14:48:07 Checking file: /openshift-router/pkg/router/metrics/probehttp/probehttp.go
[gosec] 2020/10/27 14:48:07 Import directory: /openshift-router/pkg/router/routeapihelpers
[gosec] 2020/10/27 14:48:07 Checking package: routeapihelpers
[gosec] 2020/10/27 14:48:07 Checking file: /openshift-router/pkg/router/routeapihelpers/helper.go
[gosec] 2020/10/27 14:48:07 Checking file: /openshift-router/pkg/router/routeapihelpers/validation.go
[gosec] 2020/10/27 14:48:07 Import directory: /openshift-router/pkg/router/template/configmanager/haproxy/testing
[gosec] 2020/10/27 14:48:07 Checking package: testing
[gosec] 2020/10/27 14:48:07 Checking file: /openshift-router/pkg/router/template/configmanager/haproxy/testing/haproxy.go
[gosec] 2020/10/27 14:48:07 Import directory: /openshift-router/pkg/router/controller/hostindex
[gosec] 2020/10/27 14:48:08 Checking package: hostindex
[gosec] 2020/10/27 14:48:08 Checking file: /openshift-router/pkg/router/controller/hostindex/activation.go
[gosec] 2020/10/27 14:48:08 Checking file: /openshift-router/pkg/router/controller/hostindex/hostindex.go
[gosec] 2020/10/27 14:48:08 Import directory: /openshift-router/pkg/router/template/util/haproxy
[gosec] 2020/10/27 14:48:08 Checking package: haproxy
[gosec] 2020/10/27 14:48:08 Checking file: /openshift-router/pkg/router/template/util/haproxy/map_entry.go
[gosec] 2020/10/27 14:48:08 Checking file: /openshift-router/pkg/router/template/util/haproxy/types.go
[gosec] 2020/10/27 14:48:08 Checking file: /openshift-router/pkg/router/template/util/haproxy/whitelist.go
[gosec] 2020/10/27 14:48:08 Import directory: /openshift-router/pkg/router/template/util
[gosec] 2020/10/27 14:48:08 Checking package: util
[gosec] 2020/10/27 14:48:08 Checking file: /openshift-router/pkg/router/template/util/map_paths.go
[gosec] 2020/10/27 14:48:08 Checking file: /openshift-router/pkg/router/template/util/util.go
[gosec] 2020/10/27 14:48:08 Import directory: /openshift-router/pkg/router/template
[gosec] 2020/10/27 14:48:09 Checking package: templaterouter
[gosec] 2020/10/27 14:48:09 Checking file: /openshift-router/pkg/router/template/certmanager.go
[gosec] 2020/10/27 14:48:09 Checking file: /openshift-router/pkg/router/template/fake.go
[gosec] 2020/10/27 14:48:09 Checking file: /openshift-router/pkg/router/template/plugin.go
[gosec] 2020/10/27 14:48:09 Checking file: /openshift-router/pkg/router/template/router.go
[gosec] 2020/10/27 14:48:09 Checking file: /openshift-router/pkg/router/template/service_lookup.go
[gosec] 2020/10/27 14:48:09 Checking file: /openshift-router/pkg/router/template/template_helper.go
[gosec] 2020/10/27 14:48:09 Checking file: /openshift-router/pkg/router/template/types.go
[gosec] 2020/10/27 14:48:09 Import directory: /openshift-router/pkg/router/template/limiter
[gosec] 2020/10/27 14:48:09 Checking package: limiter
[gosec] 2020/10/27 14:48:09 Checking file: /openshift-router/pkg/router/template/limiter/limiter.go
[gosec] 2020/10/27 14:48:09 Import directory: /openshift-router/pkg/router/unidling
[gosec] 2020/10/27 14:48:09 Checking package: unidling
[gosec] 2020/10/27 14:48:09 Checking file: /openshift-router/pkg/router/unidling/types.go
[gosec] 2020/10/27 14:48:09 Import directory: /openshift-router/cmd/openshift-router
[gosec] 2020/10/27 14:48:10 Checking package: main
[gosec] 2020/10/27 14:48:10 Checking file: /openshift-router/cmd/openshift-router/main.go
[gosec] 2020/10/27 14:48:10 Import directory: /openshift-router/log
[gosec] 2020/10/27 14:48:10 Checking package: log
[gosec] 2020/10/27 14:48:10 Checking file: /openshift-router/log/log.go
[gosec] 2020/10/27 14:48:10 Import directory: /openshift-router/pkg/router/controller
[gosec] 2020/10/27 14:48:10 Checking package: controller
[gosec] 2020/10/27 14:48:10 Checking file: /openshift-router/pkg/router/controller/contention.go
[gosec] 2020/10/27 14:48:10 Checking file: /openshift-router/pkg/router/controller/doc.go
[gosec] 2020/10/27 14:48:10 Checking file: /openshift-router/pkg/router/controller/extended_validator.go
[gosec] 2020/10/27 14:48:10 Checking file: /openshift-router/pkg/router/controller/host_admitter.go
[gosec] 2020/10/27 14:48:10 Checking file: /openshift-router/pkg/router/controller/router_controller.go
[gosec] 2020/10/27 14:48:10 Checking file: /openshift-router/pkg/router/controller/status.go
[gosec] 2020/10/27 14:48:10 Checking file: /openshift-router/pkg/router/controller/unique_host.go
[gosec] 2020/10/27 14:48:10 Import directory: /openshift-router/pkg/router
[gosec] 2020/10/27 14:48:11 Checking package: router
[gosec] 2020/10/27 14:48:11 Checking file: /openshift-router/pkg/router/doc.go
[gosec] 2020/10/27 14:48:11 Checking file: /openshift-router/pkg/router/interfaces.go
[gosec] 2020/10/27 14:48:11 Import directory: /openshift-router/pkg/router/metrics/haproxy
[gosec] 2020/10/27 14:48:11 Checking package: haproxy
[gosec] 2020/10/27 14:48:11 Checking file: /openshift-router/pkg/router/metrics/haproxy/haproxy.go
Results:

Golang errors in file: [/openshift-router/cmd/openshift-router/main.go]:

  > [line 19 : column 2] - could not import github.com/openshift/router/pkg/cmd/infra/router (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/cmd/infra/router/router.go]:

  > [line 24 : column 7] - could not import github.com/openshift/router/log (invalid package name: "")

  > [line 25 : column 2] - could not import github.com/openshift/router/pkg/router/controller (invalid package name: "")

  > [line 26 : column 20] - could not import github.com/openshift/router/pkg/router/controller/factory (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/cmd/infra/router/template.go]:

  > [line 39 : column 2] - could not import github.com/openshift/router/pkg/router (invalid package name: "")

  > [line 41 : column 2] - could not import github.com/openshift/router/pkg/router/metrics (invalid package name: "")

  > [line 42 : column 2] - could not import github.com/openshift/router/pkg/router/metrics/haproxy (invalid package name: "")

  > [line 43 : column 17] - could not import github.com/openshift/router/pkg/router/template (invalid package name: "")

  > [line 44 : column 23] - could not import github.com/openshift/router/pkg/router/template/configmanager/haproxy (invalid package name: "")

  > [line 45 : column 2] - could not import github.com/openshift/router/pkg/router/writerlease (invalid package name: "")

  > [line 46 : column 2] - could not import github.com/openshift/router/pkg/version (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/controller/extended_validator.go]:

  > [line 12 : column 2] - could not import github.com/openshift/router/pkg/router (invalid package name: "")

  > [line 13 : column 2] - could not import github.com/openshift/router/pkg/router/routeapihelpers (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/controller/factory/factory.go]:

  > [line 25 : column 2] - could not import github.com/openshift/router/pkg/router (invalid package name: "")

  > [line 26 : column 19] - could not import github.com/openshift/router/pkg/router/controller (invalid package name: "")

  > [line 27 : column 2] - could not import github.com/openshift/router/pkg/router/routeapihelpers (invalid package name: "")

  > [line 29 : column 7] - could not import github.com/openshift/router/log (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/controller/hostindex/activation.go]:

  > [line 7 : column 2] - could not import github.com/openshift/router/pkg/router/routeapihelpers (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/controller/router_controller.go]:

  > [line 18 : column 7] - could not import github.com/openshift/router/log (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/controller/status.go]:

  > [line 19 : column 2] - could not import github.com/openshift/router/pkg/router/writerlease (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/controller/unique_host.go]:

  > [line 17 : column 2] - could not import github.com/openshift/router/pkg/router/controller/hostindex (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/metrics/haproxy/haproxy.go]:

  > [line 24 : column 7] - could not import github.com/openshift/router/log (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/metrics/health.go]:

  > [line 15 : column 2] - could not import github.com/openshift/router/pkg/router/metrics/probehttp (invalid package name: "")

  > [line 16 : column 17] - could not import github.com/openshift/router/pkg/router/template (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/metrics/metrics.go]:

  > [line 21 : column 7] - could not import github.com/openshift/router/log (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/metrics/probehttp/probehttp.go]:

  > [line 32 : column 7] - could not import github.com/openshift/router/log (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/template/configmanager/haproxy/backend.go]:

  > [line 10 : column 17] - could not import github.com/openshift/router/pkg/router/template (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/template/configmanager/haproxy/manager.go]:

  > [line 18 : column 2] - could not import github.com/openshift/router/pkg/router/routeapihelpers (invalid package name: "")

  > [line 20 : column 15] - could not import github.com/openshift/router/pkg/router/template/util (invalid package name: "")

  > [line 22 : column 7] - could not import github.com/openshift/router/log (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/template/limiter/limiter.go]:

  > [line 9 : column 7] - could not import github.com/openshift/router/log (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/template/plugin.go]:

  > [line 18 : column 14] - could not import github.com/openshift/router/pkg/router/unidling (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/template/router.go]:

  > [line 27 : column 7] - could not import github.com/openshift/router/log (invalid package name: "")

  > [line 28 : column 2] - could not import github.com/openshift/router/pkg/router/template/limiter (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/template/template_helper.go]:

  > [line 17 : column 2] - could not import github.com/openshift/router/pkg/router/routeapihelpers (invalid package name: "")

  > [line 18 : column 15] - could not import github.com/openshift/router/pkg/router/template/util (invalid package name: "")

  > [line 19 : column 14] - could not import github.com/openshift/router/pkg/router/template/util/haproxy (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/template/util/haproxy/map_entry.go]:

  > [line 7 : column 15] - could not import github.com/openshift/router/pkg/router/template/util (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/template/util/util.go]:

  > [line 10 : column 2] - could not import github.com/openshift/router/pkg/router/routeapihelpers (invalid package name: "")

  > [line 12 : column 7] - could not import github.com/openshift/router/log (invalid package name: "")


Golang errors in file: [/openshift-router/pkg/router/writerlease/writerlease.go]:

  > [line 13 : column 7] - could not import github.com/openshift/router/log (invalid package name: "")



[/openshift-router/cmd/openshift-router/main.go:7] - G108 (CWE-): Profiling endpoint is automatically exposed on /debug/pprof (Confidence: HIGH, Severity: HIGH)
  > _ "net/http/pprof"


[/openshift-router/pkg/router/metrics/haproxy/haproxy.go:12] - G108 (CWE-): Profiling endpoint is automatically exposed on /debug/pprof (Confidence: HIGH, Severity: HIGH)
  > _ "net/http/pprof"


[/openshift-router/pkg/router/metrics/probehttp/probehttp.go:46] - G402 (CWE-295): TLS InsecureSkipVerify set true. (Confidence: HIGH, Severity: HIGH)
  > InsecureSkipVerify: true


[/openshift-router/pkg/router/template/certmanager.go:185] - G306 (CWE-): Expect WriteFile permissions to be 0600 or less (Confidence: HIGH, Severity: MEDIUM)
  > ioutil.WriteFile(fileName, cert, 0644)


[/openshift-router/pkg/router/controller/contention.go:269] - G601 (CWE-): Implicit memory aliasing in for loop. (Confidence: MEDIUM, Severity: MEDIUM)
  > &old


[/openshift-router/pkg/cmd/infra/router/template.go:581] - G304 (CWE-22): Potential file inclusion via variable (Confidence: HIGH, Severity: MEDIUM)
  > ioutil.ReadFile(certFile)


[/openshift-router/pkg/cmd/infra/router/template.go:585] - G304 (CWE-22): Potential file inclusion via variable (Confidence: HIGH, Severity: MEDIUM)
  > ioutil.ReadFile(keyFile)


[/openshift-router/pkg/cmd/infra/router/template.go:601] - G304 (CWE-22): Potential file inclusion via variable (Confidence: HIGH, Severity: MEDIUM)
  > ioutil.ReadFile(certFile)


[/openshift-router/pkg/cmd/infra/router/template.go:606] - G304 (CWE-22): Potential file inclusion via variable (Confidence: HIGH, Severity: MEDIUM)
  > ioutil.ReadFile(keyFile)


[/openshift-router/pkg/router/template/template_helper.go:197] - G306 (CWE-): Expect WriteFile permissions to be 0600 or less (Confidence: HIGH, Severity: MEDIUM)
  > ioutil.WriteFile(name, data, 0644)


[/openshift-router/pkg/router/template/router.go:862] - G401 (CWE-326): Use of weak cryptographic primitive (Confidence: HIGH, Severity: MEDIUM)
  > md5.Sum([]byte(key))


[/openshift-router/pkg/router/template/configmanager/haproxy/manager.go:183] - G304 (CWE-22): Potential file inclusion via variable (Confidence: HIGH, Severity: MEDIUM)
  > ioutil.ReadFile(certPath)


[/openshift-router/pkg/router/template/router.go:476] - G601 (CWE-): Implicit memory aliasing in for loop. (Confidence: MEDIUM, Severity: MEDIUM)
  > &cfg


[/openshift-router/pkg/router/template/router.go:465] - G306 (CWE-): Expect WriteFile permissions to be 0600 or less (Confidence: HIGH, Severity: MEDIUM)
  > ioutil.WriteFile(filepath.Join(r.dir, routeFile), data, 0644)


[/openshift-router/pkg/router/template/router.go:259] - G304 (CWE-22): Potential file inclusion via variable (Confidence: HIGH, Severity: MEDIUM)
  > ioutil.ReadFile(fileKeyName)


[/openshift-router/pkg/router/template/router.go:249] - G304 (CWE-22): Potential file inclusion via variable (Confidence: HIGH, Severity: MEDIUM)
  > ioutil.ReadFile(fileCrtName)


[/openshift-router/pkg/router/template/router.go:5] - G501 (CWE-327): Blacklisted import crypto/md5: weak cryptographic primitive (Confidence: HIGH, Severity: MEDIUM)
  > "crypto/md5"


[/openshift-router/pkg/router/template/plugin.go:309] - G401 (CWE-326): Use of weak cryptographic primitive (Confidence: HIGH, Severity: MEDIUM)
  > md5.Sum([]byte(s))


[/openshift-router/pkg/router/controller/status.go:238] - G601 (CWE-): Implicit memory aliasing in for loop. (Confidence: MEDIUM, Severity: MEDIUM)
  > &ingress


[/openshift-router/pkg/router/template/plugin.go:4] - G501 (CWE-327): Blacklisted import crypto/md5: weak cryptographic primitive (Confidence: HIGH, Severity: MEDIUM)
  > "crypto/md5"


[/openshift-router/pkg/router/template/router.go:543] - G204 (CWE-78): Subprocess launched with function call as argument or cmd arguments (Confidence: HIGH, Severity: MEDIUM)
  > exec.Command(r.reloadScriptPath)


[/openshift-router/pkg/router/template/configmanager/haproxy/testing/haproxy.go:134] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > os.Remove(sockFile)


[/openshift-router/pkg/router/template/configmanager/haproxy/testing/haproxy.go:396] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > conn.Write([]byte(response))


[/openshift-router/pkg/router/template/configmanager/haproxy/testing/haproxy.go:132] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > net.DialTimeout("unix", sockFile, timeout)


[/openshift-router/pkg/router/template/configmanager/haproxy/testing/haproxy.go:47] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > os.Remove(name)


[/openshift-router/pkg/router/template/configmanager/haproxy/manager.go:605] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > backend.DisableServer(name)


[/openshift-router/pkg/router/template/configmanager/haproxy/manager.go:548] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > backend.EnableServer(name)


[/openshift-router/pkg/router/template/router.go:226] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > router.commitAndReload()


[/openshift-router/pkg/router/template/configmanager/haproxy/manager.go:547] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > backend.UpdateServerInfo(name, ep.IP, ep.Port, weight, weightIsRelative)


[/openshift-router/pkg/router/template/configmanager/haproxy/manager.go:521] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > backend.EnableServer(s.Name)


[/openshift-router/pkg/router/template/router.go:321] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > os.Remove(outName)


[/openshift-router/pkg/router/template/configmanager/haproxy/manager.go:520] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > backend.UpdateServerInfo(s.Name, ep.IP, ep.Port, weight, weightIsRelative)


[/openshift-router/pkg/router/template/configmanager/haproxy/manager.go:509] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > backend.DisableServer(s.Name)


[/openshift-router/pkg/router/template/router.go:523] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > file.Close()


[/openshift-router/pkg/router/template/router.go:526] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > file.Close()


[/openshift-router/pkg/router/metrics/haproxy/haproxy.go:377] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > f.Close()


[/openshift-router/pkg/router/template/configmanager/haproxy/client.go:177-191] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > utilwait.ExponentialBackoff(cmdWaitBackoff, func() (bool, error) {
		n++
		client := &haproxy.HAProxyClient{
			Addr:    c.socketAddress,
			Timeout: c.timeout,
		}
		buffer, cmdErr = client.RunCommand(cmd)
		if cmdErr == nil {
			return true, nil
		}
		if !isRetriable(cmdErr, cmd) {
			return false, cmdErr
		}
		return false, nil
	})


[/openshift-router/pkg/router/metrics/health.go:73] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > conn.SetDeadline(time.Now().Add(2 * time.Second))


[/openshift-router/pkg/cmd/infra/router/router.go:93] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > flag.MarkDeprecated("enable-ingress", "Ingress resources are now synchronized to routes automatically.")


[/openshift-router/pkg/cmd/infra/router/clientcmd.go:83] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > cobra.MarkFlagFilename(flags, overrideFlags.ClusterOverrideFlags.CertificateAuthority.LongName)


[/openshift-router/pkg/cmd/infra/router/clientcmd.go:82] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > cobra.MarkFlagFilename(flags, overrideFlags.AuthOverrideFlags.ClientKey.LongName)


[/openshift-router/pkg/cmd/infra/router/clientcmd.go:81] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > cobra.MarkFlagFilename(flags, overrideFlags.AuthOverrideFlags.ClientCertificate.LongName)


[/openshift-router/pkg/router/metrics/haproxy/haproxy.go:353] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > resp.Body.Close()


[/openshift-router/pkg/router/metrics/haproxy/haproxy.go:367] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > f.Close()


[/openshift-router/pkg/router/metrics/haproxy/haproxy.go:373] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > f.Close()


[/openshift-router/pkg/cmd/infra/router/clientcmd.go:69] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
  > cobra.MarkFlagFilename(flags, "config")


Summary:
   Files: 47
   Lines: 10864
   Nosec: 0
  Issues: 46

Thanks!

Hitless reload not working

On OpenShift 4.10.15 and other recent versions we have run, the HAProxy router pods accumulate processes from long-lived connections unless the ingress.operator.openshift.io/hard-stop-after annotation forces disconnections. Hitless reload, which was supposed to be enabled with 5ae85d7, does not appear to be working. This means that on any update of the HAProxy config, such as adding or removing a route, all connections are disconnected.

Enabling tcpka for OpenShift Route

Is it possible to influence the HAProxy tcpka (TCP keepalive) settings from the OpenShift Route? Checking the source code, I don't see an annotation (something like haproxy.router.openshift.io/tcpka) that permits this to be turned on (it is off by default). We are on OpenShift 4.9.

Our use case is non-HTTP (the Kafka protocol). I want to enable TCP keep-alive in the hope of detecting the dead connections we occasionally see in our environment.

Would a PR be considered?

Make HAProxy config editable via environment of router

Currently, it is not easily possible to edit the config of the HAProxy running in the automatically generated router pod of OKD's standard ingress operator.
This would be necessary, for example, to allow larger GET requests through the standard OKD ingress operator by raising tune.bufsize in haproxy.cfg.

Is it possible to make these kinds of config changes via environment variables of the ingress deployment for the router pods?

Router metrics don't expose 4xx

Is there a reason the router metrics don't expose any 4xx?
We actually need those 4xx metrics for debugging and alerting.

Router pod gets stuck on haproxy reload

Recently we encountered an issue with the OpenShift router, where it got stuck on one of the reloads and didn't recover.

Logs:

E0418 04:54:32.622869       1 haproxy.go:442] unexpected error while reading CSV: read unix @->/var/lib/haproxy/run/haproxy.sock.22.tmp: i/o timeout
E0418 04:54:33.694379       1 limiter.go:165] error reloading router: exit status 1
[WARNING] 107/045432 (653398) : Failed to get the number of sockets to be transferred !
[ALERT] 107/045432 (653398) : Failed to get the sockets from the old process!

openshift version:

~ ➜ oc version
Client Version: 4.6.0
Server Version: 4.6.0-0.okd-2021-02-14-205305
Kubernetes Version: v1.19.2-1049+f173eb4a83e557-dirty

Expose histogram for http requests

I can see that the router exposes haproxy_server_http_average_response_latency_milliseconds, but that is not very helpful when trying to use metrics to calculate, for example, the 90th or 99th percentile. Is it possible to get these from haproxy somehow?

Router fails to start after deploying a fresh install of a 3.11 cluster

After a fresh install of a 3.11 cluster the router does not start.

All I have as diagnostic output is this:

# oc logs pod/router-2-deploy
--> Scaled older deployment router-1 down
--> Scaling router-2 to 1
error: update acceptor rejected router-2: pods for rc 'default/router-2' took longer than 600 seconds to become available

When it tries to launch the actual router pod, the only thing I have is this:

# oc logs pod/router-2-29vg7
I1115 17:33:15.874559       1 template.go:297] Starting template router (v3.11.0+bd0bee4-337)
I1115 17:33:15.878996       1 metrics.go:147] Router health and metrics port listening at 0.0.0.0:1936 on HTTP and HTTPS
E1115 17:33:15.890563       1 haproxy.go:392] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory

For reference I give you the diagnostic output : oc_adm_diagnostics.txt

And my inventory file: inventory.txt

Router overwrites X-Forwarded-Host/Port/Proto headers

The router's current behavior overwrites X-Forwarded-Host, X-Forwarded-Port, and X-Forwarded-Proto. Instead, I believe that a more appropriate behavior would be to preserve these header values, so the existing values will be passed downstream.

New config changes added in PR #134 allow for this behavior to be adjusted at the route level via annotations, but when using the append option (the new default), these headers are still replaced.

Example

So imagine the following:

  • We have a setup where there's a load balancer sitting between clients and the OpenShift cluster
  • This load balancer strips HTTPS, sending HTTP to the cluster but preserving the client values with X-Forwarded-* headers
  • There's an application running in OpenShift which handles requests for the public-facing endpoint "https://myapp.openshift.company.com"

When receiving requests, this app wants to know what protocol/port a client uses to connect to this endpoint. It can know this by looking at the X-Forwarded-Proto/Port headers passed to it from upstream systems, but this is only true if all upstream systems preserve these values passed by the client. In this router implementation, the load balancer will pass "X-Forwarded-Proto: https, X-Forwarded-Port: 443", but the router will overwrite these headers with "X-Forwarded-Proto: http, X-Forwarded-Port: 80", losing the real values passed by the originating client to the public-facing endpoint, so the application won't know if the client is actually connecting over HTTP or HTTPS.

Instead, if the router preserved these headers, the app would receive "X-Forwarded-Proto: https, X-Forwarded-Port: 443" and know that the client connected to the public-facing endpoint via HTTPS:443.

Certificate issues

I honestly have no idea where to even put this, so I'm very sorry if this is the wrong place. I just followed this guide on replacing the cluster certificates with a CA bundle and a certificate/key pair that are 100% correct, and for whatever reason I'm now getting issues with my network operator saying it can't verify them.


Because of the network operator issue I think the machine-config resync is failing, or maybe that's a totally different issue; honestly, I have no idea. I've been trying to tackle this for a long time. Any insight into what is going on would be incredible. Thank you.

Dynamic config manager not functional in 4.4

While trying the 4.4 router, I noticed that enabling the dynamic configuration manager in a custom router deployment results in the router failing to start.

The following error is seen in the router pod logs:

E0324 18:57:02.396551       1 limiter.go:165] error executing template for file /var/lib/haproxy/conf/haproxy.config: template: haproxy-config.template:498:65: executing "/var/lib/haproxy/conf/haproxy.config" at <$cfgIdx>: wrong type for value; expected string; got templaterouter.ServiceAliasConfigKey

It seems that the change of type in #51 missed some functions used only with the dynamic configuration manager. It looks like an easy fix that I'm happy to provide.

But that raises a question: @ramr @smarterclayton is the dynamic config manager still supported in Openshift 4? We are preparing the migration from Openshift 3.11 and I'm afraid we really need it: we have applications with many long-standing (websocket) connections. If the router reloads configuration often, old haproxy processes tend to stay around to keep those connections alive and accumulate to the point of exhausting all available memory on router nodes. The dynamic config manager alleviates the problem considerably.

Route CRD does not have a OpenAPI structural schema

apiextensions.k8s.io/v1beta1 was deprecated in K8S 1.16 and will be unavailable from K8S version 1.22, so the CRD will need to use apiextensions.k8s.io/v1.

Changing the manifest to apiextensions.k8s.io/v1 introduces a mandatory requirement for CRDs to include a structural schema. https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema

I'm happy to work on adding the schema to the CRD manifest but will need some assistance to understand the structure further.

Router Support local block timeout tunnel

I notice that OpenShift supports a global tunnel timeout setting via the env "ROUTER_DEFAULT_TUNNEL_TIMEOUT".

If the env "ROUTER_DEFAULT_TUNNEL_TIMEOUT" is set, the tunnel timeout applies to every hostname.

In my view, the haproxy template should also support a timeout tunnel in the local (per-hostname) block.

Let's revise the template as follows:

    {{- with $value := firstMatch $timeSpecPattern (index $cfg.Annotations "haproxy.router.openshift.io/tunneltimeout")}}
  timeout tunnel  {{$value}}
    {{- end }}

Meanwhile, revise the haproxy/manager.go block to support this change.

Router created based on Ingress resource is not getting the defined name.

WHAT:

Unable to create the route from an Ingress resource with a specific name. The following code creates the Ingress object, but OCP is not honoring the name.

	ls := getAppLabels(m.Name)
	ing := &v1beta1.Ingress{
		TypeMeta: v1.TypeMeta{
			APIVersion: "extensions/v1beta1",
			Kind:       "Ingress",
		},
		ObjectMeta: v1.ObjectMeta{
			Name:      "mss-route",
			Namespace: m.Namespace,
			GenerateName: "mss-route",
			Labels:    ls,
		},
		Spec: v1beta1.IngressSpec{
			Backend: &v1beta1.IngressBackend{
				ServiceName: m.Name,
				ServicePort: intstr.FromInt(int(m.Spec.Port)),
			},
			Rules: []v1beta1.IngressRule{
				{
					Host: utils.GetAppIngress(m.Spec.ClusterHost, m.Spec.HostSufix),
					IngressRuleValue: v1beta1.IngressRuleValue{
						HTTP: &v1beta1.HTTPIngressRuleValue{
							Paths: []v1beta1.HTTPIngressPath{
								{
									Backend: v1beta1.IngressBackend{
										ServiceName: m.Name,
										ServicePort: intstr.FromInt(int(m.Spec.Port)),
									},
									Path: "/",
								},
							},
						},
					},
				},
			},
		},
	}
$ kubectl get ingress
NAME                          HOSTS                                              ADDRESS   PORTS   AGE
mss-route                     mobile-security-service-app.192.168.64.19.nip.io             80      57m

$ oc get route
NAME                                HOST/PORT                                          PATH      SERVICES                      PORT      TERMINATION   WILDCARD
mss-route-z8wjr                     mobile-security-service-app.192.168.64.19.nip.io   /         mobile-security-service-app   3000                    None

$ kubectl describe ingress mss-route
Name:             mss-route
Namespace:        mobile-security-service-operator
Address:          
Default backend:  mobile-security-service-app:3000 (172.17.0.10:3000)
Rules:
  Host                                              Path  Backends
  ----                                              ----  --------
  mobile-security-service-app.192.168.64.19.nip.io  
                                                    /   mobile-security-service-app:3000 (172.17.0.10:3000)
Annotations:
Events:  <none>

$ oc describe route mss-route-z8wjr 
Name:			mss-route-z8wjr
Namespace:		mobile-security-service-operator
Created:		8 days ago
Labels:			app=mobilesecurityservice
			mobilesecurityservice_cr=mobile-security-service-app
Annotations:		<none>
Requested Host:		mobile-security-service-app.192.168.64.19.nip.io
			  exposed on router router 8 days ago
Path:			/
TLS Termination:	<none>
Insecure Policy:	<none>
Endpoint Port:		3000

Service:	mobile-security-service-app
Weight:		100 (100%)
Endpoints:	172.17.0.10:3000

WHAT IS EXPECTED?

OCP should recognize the name and create the route with the same name, which in this example is mss-route, instead of appending -z8wjr to the end of it.

ENVIRONMENT

  • Using Minishift: minishift v1.33.0+ba29431
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth

Server https://192.168.64.19:8443
kubernetes v1.11.0+d4cacc0

Set custom headers via annotation

The "haproxy.router.openshift.io/rewrite-target" and "haproxy.router.openshift.io/set-forwarded-headers" annotations are great and useful, but unfortunately incomplete.

It would be really great if also the "x-forwarded-prefix" header could be set automatically (calculating the diff between the matching path and the rewrite target) or by setting the header specifically within its own annotation.

It would be awesome to be able to set any kind of additional header with an annotation.

Without that, it's just a poor man's proxy function, which can't be used for applications that don't use hard-coded base URLs.

certificate key is stored in the route object instead of secret

Problem

When using a route with edge/reencrypt TLS termination and providing a new custom certificate, the certificate and private key are stored in the route object itself,
and the route object is the only place the certificate key can be stored.

Why its bad

This is bad because any user with read-only permissions on the cluster/namespace can view the private key by simply running oc get route -o yaml and inspect all the traffic.

Comparison and workaround

even kubernetes uses secrets, see here

I have tried to use a Kubernetes Ingress instead of a Route, but OpenShift automatically creates a new route, pulls my private key out of the Secret, and appends it to the route object,
so I can't even work around it.

Would a Pull Request for `send-proxy` support be accepted?

Morning
Would a Pull Request for send-proxy support be accepted? We want to enable it on backend servers that use

tls:
    termination: "passthrough"

I am proposing

  • haproxy.router.openshift.io/proxy_protocol: true
  • haproxy.router.openshift.io/proxy_protocol_v2: true

Suggestion: Add code owner for maintainers be automatically notified

Hi folks,

What do you think about adding code owners to this repo? That way maintainers would be notified automatically and issues would not be lost without a response. It would also be worthwhile to add other GitHub files, such as a pull request template and an issue template.

The reason for this suggestion is that I opened an issue here more than 19 days ago and it still has not received any interaction.

Issues downloading large files with low bandwidth with curl against OSD deployed router

We have encountered an issue when downloading large files with limited bandwidth with curl against the OSD-based api.openshift.com over HTTP2.

The problem manifests itself as follows:

podman run curlimages/curl:7.65.3 curl --limit-rate 10M -vvv 'https://api.openshift.com/api/assisted-images/boot-artifacts/rootfs?arch=x86_64&version=4.12' -o /tmp/bla
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 97 1052M   97 1024M    0     0  9.9M       0  0:01:45  0:01:42  0:00:03 10.1M
curl: (18) transfer closed with 29821952 bytes remaining to read 

The download always stops at 1GB (1024^3 bytes exactly)

This is true with all versions of curl up to and including 7.68; however, we have noticed that from curl 7.69 onward this seems not to happen.
With the help of the RHEL curl maintainer we found out that in that version the initial window size changed from 1<<30 (which is the same as 1024^3) to 32MB (32 * 1024 * 1024):
https://bugzilla.redhat.com/show_bug.cgi?id=2166254

Sure enough, if we rate-limit aggressively enough, this behaviour also occurs with curl 7.69 or greater:

curl -A "banana" --limit-rate 538000 --output /dev/null 'https://api.openshift.com/api/assisted-images/boot-artifacts/rootfs?arch=x86_64&version=4.12'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  3 1052M    3 32.0M    0     0   540k      0  0:33:13  0:01:00  0:32:13  784k
curl: (18) transfer closed with 1070009344 bytes remaining to read

And the difference is again total_image_size - 32MB.
Interestingly, while trying to find the maximum speed at which the transfer would fail, we noticed that right at the limit the download sometimes fails at what looks like the second window:

~ $ curl -A "banana" --limit-rate 538000 --output /dev/null 'https://api.openshift.com/api/assisted-images/boot-artifacts/rootfs?arch=x86_64&version=4.12'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  4 1052M    4 48.0M    0     0   530k      0  0:33:50  0:01:32  0:32:18  501k
curl: (18) transfer closed with 1053230365 bytes remaining to read

In the case above it seems a second window of half the size of the first was negotiated, but the transfer still errored at the end of that second window.

The above behaviour seems to happen only with the following factors:

  • HTTP2 (if we force HTTP1.1, this behaviour does not happen)
  • slow download relative to the window size (above 1 minute for the 32MB window; seemingly above 74s for the 1GB window)
  • files bigger than the window size
  • only curl was tested, but this might happen with other clients that have similar window sizes

This does not happen on other HTTP2 servers, like cloudfront (mirror.openshift.com for example)

We have tested local plain haproxy with HTTP2, trying to reproduce the bug, but we were unable to do so

unnecessary restriction to IPs only for haproxy.router.openshift.io/ip_whitelist

For HAProxy I'd expect to be able to provide any list accepted by acl in haproxy.router.openshift.io/ip_whitelist, including domain names. In practice, the template fails to parse the value if a domain name is provided.

Failure produced:

log.V(7).Info("parseIPList found not IP/CIDR item", "value", ip, "err", err)

I'm going by this reference for haproxy: https://www.haproxy.com/documentation/hapee/2-4r1/configuration/acls/syntax/

Which states: "By default, when the parser can not parse an IP address, it considers that the parsed string is a domain name and tries to resolve it using DNS."

I'm presuming the haproxy used here works the same?

Upgrade Bootstrap license 503 Error page

The default 503 error page includes the Bootstrap license, which was flagged as a vulnerability in our pen test and which our team needs to remediate.

Please update the license and code and/or remove the Bootstrap portion while keeping normalize.css.
OR
provide a way for users to override the default 503 error page

Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1967228

The Bootstrap license

/*!

Need gRPC support on route

HAProxy has supported gRPC since the 1.9.2 release.
But for the Route, I still cannot expose a gRPC service with re-encrypt termination.
So gRPC (HTTP/2) support is needed on Routes with re-encrypt termination.

No option to preserve the session affinity cookie emitted by a server

The HAProxy ingress controller exposes the annotation haproxy-ingress.github.io/session-cookie-preserve to prevent HAProxy from overwriting a session affinity cookie written by a backend server. See: https://haproxy-ingress.github.io/docs/configuration/keys/#affinity
OpenShift's router seems to lack such configuration.
There also seems to be no way to disable the use of "indirect", which makes it impossible for a backend server to see the affinity cookie sent by the client.
This can be disabled via the HAProxy ingress controller using the annotation haproxy-ingress.github.io/session-cookie-keywords

[Feature Request] Graceful HTTP rate limiting

Currently haproxy.router.openshift.io/rate-limit-connections.rate-http, haproxy.router.openshift.io/rate-limit-connections.rate-tcp, and haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp all result in a connection drop via the tcp-request content reject directive.

While this is useful for "strict" rate limiting (i.e. against a set of malicious requests), a connection reset is not handled gracefully by all clients (how do you differentiate a rate-limiting connection drop from a service outage?). It would be nice to provide a way to configure "graceful" rate limiting, i.e. return an HTTP 429 instead of dropping the connection. Using a fictive config option haproxy.router.openshift.io/rate-limit-connections.rate-http-soft, the template would look something like:

{{- if (isInteger (index $cfg.Annotations "haproxy.router.openshift.io/rate-limit-connections.rate-http-soft")) }}
            http-request deny deny_status 429 if { sc_http_req_cnt(0) ge {{ index $cfg.Annotations "haproxy.router.openshift.io/rate-limit-connections.rate-http-soft" }} }
{{- else }}

This is mostly modeled after this part.

The usecase is allowing clients to implement a backoff mechanism, by providing them a way to programmatically differentiate between:

  • I don't want any more connections from you / service is down
  • You're making too many requests, slow down

Concatenation of key and cert causing failure

I'm attempting to use cert-manager backed by Vault to provide OpenShift 4.7.11 with a custom console certificate.
I have unfortunately come across an issue caused by the concatenation of the certificate and private key, which results in an unexpected format and causes haproxy to fail.

The cause of the failure is this snippet in /var/lib/haproxy/router/certs/default.pem:

-----END CERTIFICATE----------BEGIN PRIVATE KEY-----

It seems OpenShift expects a line feed there.
If I alter the secret's base64 to include a line break after the certificate, haproxy starts successfully, but since I want the certificates to be managed by cert-manager, this isn't a solution.

Router versioning

Hey there, I would like to ask you something about the versioning of the router image in the context of cluster-ingress-operator. Is there a pattern one should follow when using a custom version of the router image? For example, if the OKD version is 4.5, should I make sure that my custom image inherits from openshift/origin-haproxy-router:4.5? Should I always make sure the versions match, or can I skip that for a minor version change?

HAProxy should be configured to compare Host header with SNI and return 421 on mismatch

Hi.

We've encountered an issue with the way HAProxy handles requests when the SNI and the Host header differ. Currently, HAProxy seems to route requests based solely on the Host header without comparing it to the SNI presented during the TLS handshake.

This can lead to requests being improperly routed when a client inadvertently (e.g. when reusing a connection for a.example.com for b.example.com assuming both use a certificate signed for *.example.com) or maliciously sends an incorrect Host header that does not match the SNI. According to RFC 7540 Section 9.1.2, when a server encounters a mismatch, it should respond with a 421 Misdirected Request error.

We propose a change like the following line to fix the issue:

http-request deny deny_status 421 if { ssl_fc_has_sni } { ssl_c_used } !ssl_sni_http_host_match

This change would enhance security and reliability by ensuring that requests are correctly routed to the intended service.

error page documentation seems wrong

When I follow the documentation here https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L106-L110 and add this to my config file:

  # To configure custom default errors, you can either uncomment the
  # line below (server ... 127.0.0.1:8080) and point it to your custom
  # backend service or alternatively, you can send a custom 503 error.
  #
  server openshift_backend my-error-page.default.svc:8080
  #errorfile 503 /var/lib/haproxy/conf/error-page-503.http

Where my-error-page.default.svc is a deployment of nginx with the error page within the cluster.

I get this error:

[ALERT] 211/184643 (22) : parsing [/var/lib/haproxy/conf/haproxy.config:39] : 'server' not allowed in 'defaults' section.
--
  | [ALERT] 211/184643 (22) : Error(s) found in configuration file : /var/lib/haproxy/conf/haproxy.config
  | [ALERT] 211/184643 (22) : Fatal errors found in configuration.

So this seems to be wrong

When I instead add this deployment as a backend, as shown here https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L479, it works:

 {{- end }}{{/* end range over serviceUnitNames */}}
 server my-error-page my-error-page.default.svc:8080 backup
