
exposecontroller's Introduction

exposecontroller

Automatically expose services by creating ingress rules, OpenShift routes, or by modifying services to use the Kubernetes NodePort or LoadBalancer service types

Getting started

Ingress name

The ingress uses the name of the service that carries the expose annotation. If you want the ingress to have a different name, annotate the service:

kubectl annotate svc foo fabric8.io/ingress.name=bar

Host name

The ingress host name also defaults to the name of the service that carries the expose annotation. If you want a different host name while still keeping a separate ingress object for the service, annotate it:

kubectl annotate svc foo fabric8.io/host.name=bar

Multiple backend services

An ingress rule can have multiple service backends; traffic is routed to each backend by path, see https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource

To add a path to the ingress rule for your service, add an annotation:

kubectl annotate svc foo fabric8.io/ingress.path=/api/
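For example, to serve two services under one host you could combine the fabric8.io/ingress.name and fabric8.io/ingress.path annotations. This is only a sketch using hypothetical services ui and api, and it assumes the ingress-name merging behaviour described above:

kubectl annotate svc ui fabric8.io/expose=true fabric8.io/ingress.name=shop fabric8.io/ingress.path=/
kubectl annotate svc api fabric8.io/expose=true fabric8.io/ingress.name=shop fabric8.io/ingress.path=/api/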

Multiple service ports

A service can have multiple ports defined (e.g. 8020 and 8021). To select which port to expose add an annotation:

kubectl annotate svc foo fabric8.io/exposePort=8020

If no annotation is defined, the first port in the list is used.

Internal Ingress

If you need to expose a service on an internal ingress controller:

kubectl annotate svc foo fabric8.io/use.internal.domain=true

Also make sure you specify the ingress class annotation for your internal ingress controller, e.g. kubernetes.io/ingress.class: nginx-internal (a combined example follows the requirements list below).

Requirements:

  • Create an internal ingress controller.
  • Set a value for internalDomain in the exposecontroller ConfigMap.
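Putting the pieces together, a minimal sketch might look as follows; the domain values and the nginx-internal class name are placeholders rather than project defaults, and it assumes internalDomain lives in config.yml alongside domain like the other entries:

kubectl annotate svc foo fabric8.io/expose=true
kubectl annotate svc foo fabric8.io/use.internal.domain=true
kubectl annotate svc foo fabric8.io/ingress.annotations="kubernetes.io/ingress.class: nginx-internal"

cat <<EOF | kubectl create -f -
apiVersion: "v1"
data:
  config.yml: |-
    exposer: "Ingress"
    domain: "example.io"
    internalDomain: "internal.example.io"
kind: "ConfigMap"
metadata:
  name: "exposecontroller"
EOF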

Daemon mode

To run exposecontroller as a long-running daemon rather than a one-shot job, pass the --daemon flag.

Cleanup

To remove any ingress rules created by exposecontroller, use the --cleanup flag.
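For example, with a locally built binary (see Running locally below) and your kube config already pointing at the cluster, the two modes look like this; these are illustrative invocations, not the only way to run it:

./bin/exposecontroller --daemon    # long-running: keep watching services and creating/updating ingress rules
./bin/exposecontroller --cleanup   # one-shot: remove the ingress rules exposecontroller created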

Configuration

We use a Kubernetes ConfigMap with multiple config entries; the full list is in the project source. The most commonly used entries are:

  • domain when using either Kubernetes Ingress or OpenShift Routes you will need to set the domain that you've used with your DNS provider (fabric8 uses CloudFlare), or nip.io if you want a quick way to get running.
  • exposer used to describe which strategy exposecontroller should use to access applications
  • tls-acme (boolean) used to enable automatic TLS when used in conjunction with kube-lego. Only works with version v2.3.31 onwards.
  • tls-secret-name (string) used to enable TLS using a pre-existing TLS secret (see the example ConfigMap after this list)
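For example, a ConfigMap enabling automatic TLS via kube-lego might look like this sketch; the domain is a placeholder you must replace:

cat <<EOF | kubectl create -f -
apiVersion: "v1"
data:
  config.yml: |-
    exposer: "Ingress"
    domain: "replace.me.io"
    tls-acme: "true"
kind: "ConfigMap"
metadata:
  name: "exposecontroller"
EOF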

Automatic

If no ConfigMap or data values are provided, exposecontroller will try to work out the right exposer and domain config for the platform.

  • exposer - Minishift and Minikube default to NodePort; we use Ingress for Kubernetes or Route for OpenShift.
  • domain - using nip.io for magic wildcard DNS, exposecontroller will try to find a https://stackpoint.io HAProxy or an NGINX ingress controller. We also default to the single VM IP when using Minishift or Minikube. Together these create an external hostname we can use to access our applications.

Exposer types

cat <<EOF | kubectl create -f -
apiVersion: "v1"
data:
  config.yml: |-
    exposer: "Ingress"
    domain: "replace.me.io"
kind: "ConfigMap"
metadata:
  name: "exposecontroller"
EOF

OpenShift

Same as above but using the oc client binary.
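For example, an equivalent ConfigMap using the Route exposer might look like this sketch; the domain is again a placeholder:

cat <<EOF | oc create -f -
apiVersion: "v1"
data:
  config.yml: |-
    exposer: "Route"
    domain: "replace.me.io"
kind: "ConfigMap"
metadata:
  name: "exposecontroller"
EOF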

NOTE

If you're using OpenShift then you'll need to add a couple of roles:

oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:default:exposecontroller
oc adm policy add-cluster-role-to-group cluster-reader system:serviceaccounts # probably too open for all setups

Label

Now label your service with expose=true in CD Pipelines or with the CLI:

kubectl label svc foo expose=true # now deprecated

or

kubectl annotate svc foo fabric8.io/expose=true

exposecontroller will use the exposer type from the ConfigMap above to automatically watch for new services and create ingresses / routes / NodePorts / load balancers for you.

Using the expose URL in other resources

Having an external URL is extremely useful. Here are some other uses of the expose URL, in addition to the annotation that gets applied to the Service.

Custom annotations

You can add the hostname as a custom annotation on the service, by using the fabric8.io/exposeHostNameAs annotation:

metadata:
  annotations:
    fabric8.io/exposeHostNameAs: osiris.deislabs.io/ingressHostname

will be converted to:

metadata:
  annotations:
    fabric8.io/exposeUrl: https://example.com
    osiris.deislabs.io/ingressHostname: example.com
    fabric8.io/exposeHostNameAs: osiris.deislabs.io/ingressHostname

This is useful if you are working with tools which require access to the exposed hostname at the service level, using a specific annotation, such as Osiris.

ConfigMap

Sometimes web applications need to know their external URL so that they can use that link or host/port when generating documentation or links.

For example the Gogs application needs to know its external URL so that it can show the user how to do a git clone from the command line.

If you wish to enable injection of the expose URL into a ConfigMap then

  • create a ConfigMap with the same name as the Service and in the same namespace
  • Add the following annotations to this ConfigMap to have values inserted automatically into the map when the service gets exposed. The values of these annotations are used as keys in this ConfigMap.
    • expose.config.fabric8.io/url-key : Exposed URL
    • expose.config.fabric8.io/host-key : host, or host + port when the port is not equal to 80 (e.g. host:port)
    • expose.config.fabric8.io/path-key : Key for this ConfigMap which is used to store the path (useful for injecting as a context path into an app). Defaults to /
    • expose.config.fabric8.io/clusterip-key : Cluster IP for the service for this ConfigMap
    • expose.config.fabric8.io/clusterip-port-key : Cluster IP + port for the service for this ConfigMap
    • expose.config.fabric8.io/clusterip-port-key-if-empty-key : Cluster IP + port for the service for this ConfigMap if the value is empty
    • expose.config.fabric8.io/apiserver-key : Kubernetes / OpenShift API server host and port (format host:port)
    • expose.config.fabric8.io/apiserver-url-key : Kubernetes / OpenShift API server URL (format https://host:port)
    • expose.config.fabric8.io/apiserver-protocol-key : Kubernetes / OpenShift API server protocol (either http or https)
    • expose.config.fabric8.io/url-protocol : The default protocol used by Kubernetes Ingress or OpenShift Routes when exposing URLs
    • expose.config.fabric8.io/console-url-key : OpenShift Web Console URL
    • expose.config.fabric8.io/oauth-authorize-url-key : OAuth Authorization URL
    • expose.service-key.config.fabric8.io/foo : Exposed URL of the service called foo
    • expose-full.service-key.config.fabric8.io/foo : Exposed URL of the service called foo ensuring that the URL ends with a / character
    • expose-no-path.service-key.config.fabric8.io/foo : Exposed URL of the service called foo with any Service path removed (so just the protocol and host)
    • expose-no-protocol.service-key.config.fabric8.io/foo : Exposed URL of the service called foo with the http protocol removed
    • expose-full-no-protocol.service-key.config.fabric8.io/foo : Exposed URL of the service called foo ensuring that the URL ends with a / character and the http protocol removed
    • "jenkins-x.io/skip.tls" : if tls is enabled and the namespace level this annotation will override it for given service
    • expose.config.fabric8.io/ingress.annotations : List of annotations to add to the genarted ingress rule. List entries are seperated using a newline \n e.g. fabric8.io/ingress.annotations: "kubernetes.io/ingress.class: nginx\nfoo.io/bar: cheese"

E.g. if you set the annotation expose.config.fabric8.io/url-key: service.url on the ConfigMap, then when a service with the same name as the ConfigMap gets exposed, an entry with the key service.url and the exposed service URL as its value is added to the ConfigMap.
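A minimal sketch for a hypothetical service foo (the resulting URL is illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: foo                                  # same name and namespace as the Service
  annotations:
    expose.config.fabric8.io/url-key: service.url
data: {}

Once the service foo is exposed, exposecontroller would add an entry along the lines of:

data:
  service.url: http://foo.jx.1.2.3.4.nip.io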

There is an example of the use of these annotations in the Gogs ConfigMap

Using basic auth from an Ingress controller

Ingress controllers can be configured to provide a basic auth challenge on ingress rules. Jenkins X comes with an nginx ingress controller with this enabled out of the box, using the default installation admin credentials. To expose an ingress rule using this basic auth challenge with exposecontroller, add the following expose annotation to your service:

fabric8.io/ingress.annotations: "kubernetes.io/ingress.class: nginx\nnginx.ingress.kubernetes.io/auth-type: basic\nnginx.ingress.kubernetes.io/auth-secret: jx-basic-auth"
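The annotation goes on the Service alongside the other expose annotations, for example:

metadata:
  annotations:
    fabric8.io/expose: "true"
    fabric8.io/ingress.annotations: "kubernetes.io/ingress.class: nginx\nnginx.ingress.kubernetes.io/auth-type: basic\nnginx.ingress.kubernetes.io/auth-secret: jx-basic-auth"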

Injecting into ConfigMap entries

Sometimes you have a ConfigMap which contains an entry that is a configuration file that needs to include an expression from the exposed URL.

To do that you can add a blob of YAML in an annotation:

metadata:
  annotations:
    expose.config.fabric8.io/config-yaml: |-
      - key: app.ini
        prefix: "DOMAIN = "
        expression: host
      - key: app.ini
        prefix: "ROOT_URL = "
        expression: url

Available expressions:
  • host for the host:port of the endpoint
  • url for the full URL
  • apiserver for the api server host/port
  • apiserverURL for the full api server URL
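With the annotation above, the intent is that matching lines in the app.ini entry are set to the given prefix followed by the evaluated expression, so the resulting ConfigMap data might look roughly like this sketch (the hostname is a placeholder):

data:
  app.ini: |-
    DOMAIN = foo.1.2.3.4.nip.io
    ROOT_URL = http://foo.1.2.3.4.nip.io/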

Rolling updates of Deployments

You may want your pods to be restarted if exposecontroller injects a new value into a ConfigMap. If so, add the configmap.fabric8.io/update-on-change annotation to your ConfigMap, with its value being the name (or comma-separated list of names) of the Deployments to perform a rolling upgrade on whenever the ConfigMap changes.
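For example, to roll the (hypothetical) foo and bar Deployments whenever this ConfigMap changes:

metadata:
  annotations:
    configmap.fabric8.io/update-on-change: "foo,bar"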

OAuthClient

When using OpenShift and OAuthClient you need to ensure your external URL is added to the redirectURIs property in the OAuthClient.

If you create your OAuthClient with the same name as your Service and in the same namespace, then its expose URL will be added automatically to the redirectURIs.
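A rough sketch of such an OAuthClient; the apiVersion shown may differ on older OpenShift releases, and the redirect URI is only a placeholder that exposecontroller would manage:

apiVersion: oauth.openshift.io/v1
kind: OAuthClient
metadata:
  name: foo                      # same name as the Service "foo"
redirectURIs:
- https://foo.example.com        # the expose URL gets added/kept here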

Building

  • install Go version 1.7.1 or later
  • when using Minikube or Minishift, expose the Docker daemon so the exposecontroller image can be built and run inside Kubernetes, e.g. eval $(minikube docker-env)
  • then type the following:
git clone git://github.com/jenkins-x/exposecontroller.git $GOPATH/src/github.com/jenkins-x/exposecontroller
cd $GOPATH/src/github.com/jenkins-x/exposecontroller

make

Running locally

Make sure you've got your kube config file set up properly (remember to oc login if you're using OpenShift).

make && ./bin/exposecontroller

Run on Kubernetes or OpenShift

  • build the binary make
  • build docker image make docker
  • run in kubernetes kubectl create -f examples/config-map.yml -f examples/deployment.yml

Rapid development on Minikube / Minishift

If you run fabric8 in minikube or minishift then you can get rapid feedback of your code via the following:

on openshift:

  • oc edit dc exposecontroller

on kubernetes:

  • kubectl edit deploy exposecontroller

  • replace the fabric8/exposecontroller:xxxx image with fabric8/exposecontroller:dev and save

Now when developing you can:

  • use the kube-redeploy make target whenever you want to create a new docker image and redeploy
make kube-redeploy

If the docker build fails you may need to type this first to point your local shell at the docker daemon inside minishift/minikube:

eval $(minishift docker-env)
or
eval $(minikube docker-env)

Developing

Glide is used as the exposecontroller package manager.

Future

On startup it would be good to check if an ingress controller is already running in the cluster; if not, create one in an appropriate namespace using a node selector that chooses a node with a public IP.


exposecontroller's Issues

Exposecontroller won't work with istio-injection=enabled & disable via annotations

Hey,

I'm trying to get the exposecontroller to ignore istio-proxy by setting the annotation:
Annotations:
sidecar.istio.io/inject: "false"

The problem is that istio needs to have it set on the spec / template / annotations.
https://istio.io/docs/setup/kubernetes/sidecar-injection/#policy

The "Annotations" in values.yml just sets it on the job: https://github.com/jenkins-x/exposecontroller/blob/master/charts/exposecontroller/templates/job.yaml#L12

Would need to have another option in the values.yml to set metadata annotations on the pod template.
Like here: https://github.com/jenkins-x/exposecontroller/blob/master/charts/exposecontroller/templates/job.yaml#L20

have a startup mode of 'auto'?

To make a single YAML that can start up fabric8 on Kubernetes, the main two pieces of parameterization required are domain (if folks use ingress/router) and expose-rule.

I wonder if we could default expose-rule to auto, which would then reuse the same logic in gofabric8 to default the mode (e.g. try to detect mini* and use NodePort, otherwise if it spots a router / ingress controller use that, otherwise default to LoadBalancer).

I'm just pondering how we can make fabric8 as easy to install as possible on vanilla Kubernetes.

Bug: Exposecontroller does not merge multiple ingress rules under the same name

Exposecontroller v2.3.82 does not expose multiple services under the same ingress name declared by fabric8.io/ingress.name annotation, i.e. fabric8.io/ingress.name=monocular:

$ kubectl get ingress monocular -o yaml

apiVersion: extensions/v1beta1                                                
kind: Ingress                                                                 
metadata:                                                                     
  annotations:                                                                
    fabric8.io/generated-by: exposecontroller                                 
    kubernetes.io/ingress.class: nginx                                        
    nginx.ingress.kubernetes.io/ingress.class: nginx                          
    nginx.ingress.kubernetes.io/rewrite-target: /                             
  creationTimestamp: 2018-11-16T07:44:18Z                                     
  generation: 4                                                               
  labels:                                                                     
    provider: fabric8                                                         
  name: monocular                                                             
  namespace: jx                                                               
  ownerReferences:                                                            
  - apiVersion: v1                                                            
    kind: Service                                                             
    name: jenkins-x-monocular-api                                             
    uid: 656f3625-e973-11e8-b900-62c85caa8f81                                 
  - apiVersion: v1                                                            
    kind: Service                                                             
    name: jenkins-x-monocular-ui                                              
    uid: 657a52da-e973-11e8-b900-62c85caa8f81                                 
  resourceVersion: "2803"                                                     
  selfLink: /apis/extensions/v1beta1/namespaces/jx/ingresses/monocular        
  uid: 6c939762-e973-11e8-b900-62c85caa8f81                                   
spec:                                                                         
  rules:                                                                      
  - host: monocular.jx.104.211.19.219.nip.io                                  
    http:                                                                     
      paths:                                                                  
      - backend:                                                              
          serviceName: jenkins-x-monocular-ui                                 
          servicePort: 80                                                     
        path: /                                                               
status:                                                                       
  loadBalancer:                                                               
    ingress:                                                                  
    - {}                                                                      

$ kubectl get cm exposecontroller -o yaml

apiVersion: v1
data:
  config.yml: |-
    exposer: Ingress
    domain: 104.211.19.219.nip.io
    http: true
    tls-acme: false
kind: ConfigMap
metadata:
  creationTimestamp: 2018-11-16T07:44:05Z
  name: exposecontroller
  namespace: jx
  resourceVersion: "2365"
  selfLink: /api/v1/namespaces/jx/configmaps/exposecontroller
  uid: 64923021-e973-11e8-b900-62c85caa8f81

$ kubectl get svc jenkins-x-monocular-ui -o yaml

apiVersion: v1                                                                            
kind: Service                                                                             
metadata:                                                                                 
  annotations:                                                                            
    fabric8.io/expose: "true"                                                             
    fabric8.io/exposeUrl: http://monocular.jx.104.211.19.219.nip.io                       
    fabric8.io/ingress.name: monocular                                                    
    fabric8.io/ingress.path: /                                                            
    jenkins-x.io/skip.tls: "true"                                                         
  creationTimestamp: 2018-11-16T07:44:06Z                                                 
  labels:                                                                                 
    chart: monocular-0.6.4                                                                
  name: jenkins-x-monocular-ui                                                            
  namespace: jx                                                                           
  resourceVersion: "2741"                                                                 
  selfLink: /api/v1/namespaces/jx/services/jenkins-x-monocular-ui                         
  uid: 657a52da-e973-11e8-b900-62c85caa8f81                                               
spec:                                                                                     
  clusterIP: 10.0.9.97                                                                    
  ports:                                                                                  
  - name: monocular-ui                                                                    
    port: 80                                                                              
    protocol: TCP                                                                         
    targetPort: 8080                                                                      
  selector:                                                                               
    app: jenkins-x-monocular-ui                                                           
  sessionAffinity: None                                                                   
  type: ClusterIP                                                                         
status:                                                                                   
  loadBalancer: {}                                                                        

$ kubectl get svc jenkins-x-monocular-api -o yaml

apiVersion: v1                                                              
kind: Service                                                               
metadata:                                                                   
  annotations:                                                              
    fabric8.io/expose: "true"                                               
    fabric8.io/exposeUrl: http://monocular.jx.104.211.19.219.nip.io         
    fabric8.io/ingress.annotations: |-                                      
      kubernetes.io/ingress.class: nginx                                    
      nginx.ingress.kubernetes.io/rewrite-target: /                         
      nginx.ingress.kubernetes.io/ingress.class: nginx                      
    fabric8.io/ingress.name: monocular                                      
    fabric8.io/ingress.path: /api/                                          
  creationTimestamp: 2018-11-16T07:44:06Z                                   
  labels:                                                                   
    chart: monocular-0.6.4                                                  
  name: jenkins-x-monocular-api                                             
  namespace: jx                                                             
  resourceVersion: "2726"                                                   
  selfLink: /api/v1/namespaces/jx/services/jenkins-x-monocular-api          
  uid: 656f3625-e973-11e8-b900-62c85caa8f81                                 
spec:                                                                       
  clusterIP: 10.0.158.127                                                   
  ports:                                                                    
  - name: monocular-api                                                     
    port: 80                                                                
    protocol: TCP                                                           
    targetPort: 8081                                                        
  selector:                                                                 
    app: jenkins-x-monocular-api                                            
  sessionAffinity: None                                                     
  type: ClusterIP                                                           
status:                                                                     
  loadBalancer: {}                                                          

$ jx version

NAME               VERSION
jx                 1.3.565
jenkins x platform 0.0.2902
Kubernetes cluster v1.9.11
kubectl            v1.12.2
helm client        v2.11.0+g2e55dbe
helm server        v2.11.0+g2e55dbe
git                git version 1.9.1

environment requirements.yaml

- alias: expose
  name: exposecontroller
  repository: http://chartmuseum.jenkins-x.io
  version: 2.3.82
- alias: cleanup
  name: exposecontroller
  repository: http://chartmuseum.jenkins-x.io
  version: 2.3.82

env values.yaml

cleanup:
  Args: 
    - --cleanup
  Annotations:
    helm.sh/hook: pre-delete
    helm.sh/hook-delete-policy: hook-succeeded
expose:
  config:
    domain: 40.121.33.227.nip.io
    exposer: Ingress
    http: "true"
    tlsacme: "false"
    pathMode: ""
  Annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: hook-succeeded

Annotate ingress?

Just a thought, is it possible to annotate the ingress with something like:

metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "nginx-ingress"

Then when deploying to GKE, you could start by declaring:

gcloud compute addresses create nginx-ingress --global

Which would give you a static IP that could be mapped to an external domain name before installing fabric8 with the declared domain.

The ingress should then pick up the static IP rather than requesting a new one.

exposecontroller reports an error when deployed on an existing Kubernetes 1.7.2

Everything seems normal in the Kubernetes dashboard, but the exposecontroller's log reports the following:

I0807 07:53:42.793884       1 exposecontroller.go:47] Using build: '2.3.2'
E0807 07:53:42.835692       1 controller.go:173] Could not discover the type of your installation: invalid character 'U' looking for beginning of value
I0807 07:53:42.835728       1 controller.go:318] starting expose controller

the nginx ingress controller logs:

I0807 07:46:45.329601       1 nginx.go:240] Writing NGINX conf to /etc/nginx/nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    server_names_hash_max_size 512;


    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    include /etc/nginx/conf.d/*.conf;
}
I0807 07:46:45.330222       1 nginx.go:258] The main NGINX configuration file had been updated
I0807 07:46:45.330290       1 nginx.go:213] executing nginx

and it does not bind any ports (80 & 443).

I tried to view the source code; it seems to check whether the client is talking to OpenShift:

func isOpenShift(c *client.Client) bool {
	res, err := c.Get().AbsPath("").DoRaw()
	if err != nil {
		glog.Errorf("Could not discover the type of your installation: %v", err)
		return false
	}

	var rp uapi.RootPaths
	err = json.Unmarshal(res, &rp)
	if err != nil {
		glog.Errorf("Could not discover the type of your installation: %v", err)
		return false
	}
	for _, p := range rp.Paths {
		if p == "/oapi" {
			return true
		}
	}
	return false
}

Please help me out, thank you very much.

User "system:serviceaccount:jx:expose" cannot list nodes at the cluster scope

jx install returns Error: Job failed: BackoffLimitExceeded

kubectl logs jobs/expose
F0321 12:47:02.256736       1 exposecontroller.go:175] failed to create new strategy: failed to create ingress expose strategy: failed to get a domain: failed to find any nodes: nodes is forbidden: User "system:serviceaccount:jx:expose" cannot list nodes at the cluster scope

After giving sa/expose node permissions I could reinstall, only to fail later with

F0321 12:57:26.751324       1 exposecontroller.go:175] failed to create new strategy: failed to create ingress expose strategy: failed to get a domain: no known automatic ways to get an external ip to use with nip.  Please configure exposecontroller configmap manually see https://github.com/jenkins-x/exposecontroller#configuration

so I reinstalled again with jx install --domain mydomain.example.com and then it succeeded; the error should have been clearer

Not exposing services via Ingress in Kubernetes cluster

I am trying to deploy fabric8 to a Kubernetes cluster running on EC2 instances. Based on the documentation, exposecontroller should default to using Ingress for exposing services in Kubernetes. For some reason, it keeps using NodePort, so the exposecontroller container fails to run because I have multiple nodes in the cluster. The error log from the stopped container is: [exposecontroller.go:66] failed to create new strategy: failed to create node port expose strategy: node port strategy can only be used with single node clusters - found 2 nodes

I even tried explicitly setting the --ingress=true flag in the gofabric8 deploy command, but the issue is the same. Not sure what is going on. Any thoughts on the root cause of this would be greatly appreciated.

unable to configure ingress mode to use any host (empty host field=*)

I'm trying to use Kubernetes default Ingress configuration as described here: https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource

kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80

With this configuration I am able to use kubernetes with ingress on a local kubernetes cluster with Windows, redirecting any call on port 80 to the target service using a specific subpath.

To achieve this, I need to change the default configuration generated by exposecontroller:

kind: Ingress
metadata:
  creationTimestamp: 2018-09-25T21:12:08Z
  generation: 1
  labels:
    provider: fabric8
  name: geo-sample-jpa
  namespace: default
  resourceVersion: "593243"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/geo-sample-jpa
  uid: a975e224-c107-11e8-b1de-00155d039259
spec:
  rules:
  - host: geo-sample-jpa.default.nip.io
    http:
      paths:
      - backend:
          serviceName: geo-sample-jpa
          servicePort: 8080
status:
  loadBalancer:
    ingress:
    - hostname: localhost

The change consists of omitting the host attribute so that the rule applies to any host.

When looking at the code I cannot find a way to do it: https://github.com/jenkins-x/exposecontroller/blob/master/exposestrategy/ingress.go

I believe it is the basic usage for kubernetes ingress and I wonder why it is not provided by exposecontroller.

Could you please give me a feedback on how I can achieve this ?
Thanks,
Ben.

Service annotation does not work. Labelling does.

README mentions that kubectl label svc foo expose=true is now deprecated. However, when I try to use annotations, the ingress is not created.
Labelling works as expected, but I want to be able to set my own ingress URL, for which, I understand, I need to use annotations.

~ ❯❯❯ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0+coreos.0", GitCommit:"a65654ef5b593ac19fbfaf33b1a1873c0320353b", GitTreeState:"clean", BuildDate:"2017-09-29T21:51:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Ingress controller: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0

Please advise.

allow kubernetes.io/externalIP to be loaded from the ConfigMap too?

I'm experimenting with turning exposecontroller into a stand-alone YAML that can be run directly on openshift or kubernetes (or helm) and then configured later via a ConfigMap if required.

I'm just about there but right now the node has to be annotated with "kubernetes.io/externalIP" or the exposecontroller barfs with:

2016-09-11 06:47:34.944279 I | Unable to find kubernetes.io/externalIP label, was gofabric8 used to deploy?

which is a nice helpful message BTW! I'm wondering if for openshift templates, we could supply this externalIP address as a template parameter so folks can specify it via oc process etc?

e.g. so if gofabric8 isn't used and the node doesn't have the annotation, we fall back to looking for a value in the ConfigMap (which we can configure manually via oc or default via an openshift template parameter?)

Ingress does not update when Service is updated

Problem

Ingress is not updated when there is a change in the service.

How to Reproduce

Simply add an annotation to an existing service which is to be propagated to the ingress, and it won't work as expected. You will have to delete the ingress manually so that it is recreated by expose controller.

wrong proxy_pass?

Hi,

Hopefully this is the right repo for my issue.

I have a Kubernetes cluster 1.3.6 running on CoreOS stable 1122.2

and a wildcard DNS lab2.dev.xxx.de pointing at the k8s master. The k8s master runs with status Ready,SchedulingDisabled and 1 minion with status Ready.

According to the docs I tried to install fabric8 with this command:

gofabric8 deploy -d lab2.dev.xxx.de -y

After the installer ran I got this output:

Opening URL http://fabric8.default.lab2.dev.xxx.de

But I cannot reach this URL.

When I try to debug this:

$ kubectl --namespace fabric8-system exec ingress-nginx-3146896821-9grze cat /etc/nginx/conf.d/default-fabric8.conf

upstream default-fabric8-fabric8.default.lab2.dev.xxx.de-fabric8 {

    server 10.2.79.49:9090;
}


server {
    listen 80;



    server_name fabric8.default.lab2.xxx.de;

    location / {
        proxy_http_version 1.1;

        proxy_connect_timeout 10s;
        proxy_read_timeout 10s;
        client_max_body_size 2000m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://default-fabric8-fabric8.default.lab2.dev.xxx.de-fabric8;
    }
}

Could it be that there is a wrongly generated proxy_pass string? http://default-fabric8-fabric8.default.lab2.dev.xxx.de-fabric8
IMHO the proxy_pass should be the internal cluster DNS name http://fabric8.default; from the ingress container I am able to reach that DNS name.

greetings
Andre

expose ingresses not picking up annotations

I've got three different ingresses being created using exposecontroller (all in https://github.com/ryandawsonuk/environment-ferretmaze-staging/blob/e64d91e6c0072b1395736e0f0447bfe1064174ea/env/values.yaml#L32), and for each of these ingresses, when I do kubectl describe ingress, the annotations section is empty.
I may well have made a mistake in the yaml somewhere but I'm surprised that it's happening for all three.
It is working for a colleague of mine with almost exactly the same yaml. The only other differences are that I'm on AWS and he's on GCP, and my jx is a little more up to date (jenkins x platform 0.0.1677).

The annotations are present on the services. But not on the ingress.

Struggling to think of what I can do to get to the bottom of it. Any suggestions would be appreciated.

support multiple destination keys for injecting service URLs

I'd like to be able to use this annotation in fabric8-platform for kubernetes on the fabric8 ConfigMap:

    expose-full.service-key.config.fabric8.io/f8tenant: keycloak.url,f8tenant.api.url

which would find the external URL of the f8tenant service and then inject it into the 2 ConfigMap.data entries - keycloak.url and f8tenant.api.url.

i.e. in pseudo code it'd do

configMap.data["keycloak.url"] = f8externalUrl
configMap.data["f8tenant.api.url"] = f8externalUrl

So it'd be good to modify the exposecontroller to split the value of this annotation using "," into possibly multiple values
e.g.
here:
https://github.com/fabric8io/exposecontroller/blob/91c18e7a0ba237f4d24b4230731a44573b6b75c9/controller/controller.go#L376
and
https://github.com/fabric8io/exposecontroller/blob/91c18e7a0ba237f4d24b4230731a44573b6b75c9/controller/controller.go#L389

so that it'd iterate over the values (rather than assume a single key value)

Change how the default route domain is detected on OpenShift

The code exposestrategy/route.go attempts to lookup the default domain via getAutoDefaultDomain if one isn't explicitly given and use that to create a hostname for the generated route on OpenShift. This requires cluster-wide admin privileges to do.

Instead of doing this, I propose we pass a blank hostname when creating the RouteSpec if no explicit domain is given and OpenShift will automatically generate one for us that includes the default domain. We can then retrieve the hostname from the generated route for use with addServiceAnnotation. The end result will be the same but this latter method does not require cluster-wide admin privileges.

expose not finding services using LoadBalancer strategy

I've been deploying this chart to a 1.9.7-gke.3 cluster using helm install ./activiti-cloud-full-example-expose. The strategy is LoadBalancer. The exposecontroller runs but it doesn't populate the configmap and the pod logs for the expose job indicate that it didn't find my services:

image

The services are annotated to be exposed:

image

A colleague also reproduced this issue. They then took the same chart, switched it to use Ingress with a specified domain, and were then able to get expose to populate the configmap. But we're really looking for a way to not have to specify the domain. Ingress would be fine, but using the Ingress strategy without setting a domain the expose job gives the error failed to create ingress expose strategy: failed to get a domain

support annotations as well as a label

it feels cleaner instructing exposecontroller to create ingress rules / routes etc using an annotation on a service rather than a label. Let's check for an annotation on a service and deprecate the use of label.

support adding the expose URL to OAuthClient

on OpenShift we need to put the externalUrl of a service into OAuthClients so that the OAuth redirect can go back to the service. e.g. logging in with the fabric8 console.

Right now we've some magic in gofabric8. It'd be nice to be able to run fabric8 on openshift with the minimum possible magic required (e.g. gofabric8). e.g. so that we can install fabric8 via the openshift console itself.

So it'd be nice when exposecontroller is running on OpenShift to also watch OAuthClient which have the same label (say expose: true?) then whenever the service gets annotated, we also check if there's an OAuthClient of the same name with that label, if so we ensure that the externalUrl is inside the list of redirects.

e.g. when using ingress, route or nodePort it's hard at install time to know what the serviceURL is going to be when the OAuthClient gets created. So it's cleaner for exposecontroller to add the serviceURL when it adds the exposeUrl annotation.

Handle multi-ports Service

Would it be possible to use expose on a service which has multiple port specs?

I think there will be an issue with the fabric8.io/exposeUrl service annotation, which can only contain one value.

An implementation could be to use the "Simple fanout" method described in the k8s docs, where the path is the port's name.

strip the namespace prefix from the ingress host?

when using helm we need to give each release a unique global name. So on Jenkins X we've been using "$namespace-$appName" as the name.

This leads to duplication in ingress names; they end up as http://jx-staging-myapp.jx-staging.mydomain which is a bit sucky.

It'd be nice to have a flag in exposecontroller we could enable by default that strips the namespace prefix from the host names in the generated ingresses so they appear as the more natural: http://myapp.jx-staging.mydomain

ConfigMap is not created on custom environments and own domain

jx install --no-default-environments --domain wildcard.example.com on a brand new AKS cluster, followed by for example jx create environment canary --git-provider-url=https://bitbucket.org --pull-secrets='private-registry-secret' does not create a ConfigMap entry for exposecontroller in the new environment's namespace. When deploying, this results in this error:

exposecontroller.go:194] failed to create new strategy: failed to create auto expose strategy: failed to get a domain: no known automatic ways to get an external ip to use with nip.  Please configure exposecontroller configmap manually see https://github.com/jenkins-x/exposecontroller#configuration

I had to manually add the ConfigMap:

apiVersion: "v1"
kind: "ConfigMap"
metadata:
  labels:
    provider: "fabric8"
    project: "exposecontroller"
  name: "exposecontroller"
  namespace: jx-canary
data:
  config.yml: |-
    domain: wildcard.example.com
    exposer: Ingress

Also the expose ServiceAccount did not have the correct permissions...

Allow static mapping from exposecontroller

As long as direct etcd configuration allows mapping any service to any sub-domain, it is worth adding static mappings to exposecontroller.
Probably something like:

label:
  "fabric8.io/exposecontroller/static-map-root": "true"
  "fabric8.io/exposecontroller/static-map-subdomain": "auth"
  "fabric8.io/exposecontroller/static-map-subdomain": "docs.api"

switch to using the data entries in the ConfigMap directly - to avoid embedding a YAML file in a single data entry

now that we've got CLI arguments for exposecontroller, we should probably move away from using the config.yml inside the ConfigMap and instead just use the envFrom mechanism in kubernetes and load the configuration from CLI arguments or env vars?

https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-pod-environment-variables

using Fabric8 with Rancher

I have a k8s cluster created with 2 hosts using Rancher. I use this command:
kubectl create -f http://central.maven.org/maven2/io/fabric8/platform/packages/console/2.4.3/console-2.4.3-kubernetes.json --validate=false

The exposecontroller continues to crash with this error:

11/10/2016 8:20:17 PMERR: 2016/11/11 02:20:17.511864 Error forwarding to https://kubernetes:443/api/v1/namespaces, err: dial tcp 10.42.38.194:443: getsockopt: connection refused

In my AWS security groups I have opened all possible ports that I can think of.

We would greatly appreciate some insight. We have used StackPoint and see that it is able to deploy to AWS just fine. We modeled our security group access after them.

What am I missing?

Enabling SSL at the ingress/service level

@jstrachan As of today, the expose controller config in the environment can be configured with tls-acme true or false.
If tls-acme is set to true, all the services in the environment have TLS enabled by default, and we can disable SSL for a specific service using the jenkins.io/skip-tls flag.

What about the other case, where we don't want to enable it on all services but should be able to enable it for a specific service? In my case, we have selected applications in the env which run on https (say the front-end services), and then there are a few exposed services which need not be https. So I want to turn on SSL only for very few services.

Is there any suggested way to do this, instead of enabling it for the whole environment?

Kube-lego with GKE load balancer

When using the native GKE load balancer in combination with kube-lego, the expose controller generated ingress should contain an extra annotation:

kubernetes.io/ingress.class: gce

When kube-lego detects this annotation it enhances the given ingress with an extra path mapping leading to the lego-nginx backend, enabling the domain check.

Adding the annotation by hand triggers kube-lego as well. But after some time the expose controller overwrites the lego path rule with its own base rule, effectively disabling TLS again.

Is there an option to have the expose controller apply the extra ingress class annotation and prevent it from overwriting the lego acme path extension?

fabric8.io annotation not being added to ingress

I've added the line fabric8.io/ingress.annotations: "nginx.ingress.kubernetes.io/force-ssl-redirect: true" in the annotations section for a service. However, this annotation is added to the service annotations rather than the ingress annotations. Is this behaviour expected? If yes, is there any way to write annotations for the ingress through values.yaml?
The other option I have is to turn off expose controller and write my own ingress file.

Failed to create ConfigMap: exposecontroller

Hello,
I'm trying to run fabric8 on an openshift deployment on an AWS RHEL 7 instance. I ran into many problems using the Openshift official guides...

I was able to get very close by using the "Alternative Installation Route" in the following guide:
https://fabric8.io/guide/getStarted/installOpenShift.html

I tried installing one of the newer binaries but ran into more issues. So I was able to get openshift working using the curl example in the guide, which uses Openshift v1.1.1

Gofabric8 deploys fine until it gets to persistent volumes and the exposecontroller.

I was able to fix the persistence issue by running $ gofabric8 volumes after the initial deployment.

The deployment fails when it tries to create the ConfigMap, and exposecontroller:

Processing resource kind: ConfigMap in namespace default name exposecontroller
Failed to create ConfigMap: Failed to create ConfigMap: 0 the server could not find the requested resource
Processing resource kind: ConfigMap in namespace default name fabric8
Failed to create ConfigMap: Failed to create ConfigMap: 0 the server could not find the requested resource
Processing resource kind: ConfigMap in namespace default name fabric8-environments
Failed to create ConfigMap: Failed to create ConfigMap: 0 the server could not find the requested resource
Processing resource kind: ConfigMap in namespace default name fabric8-forge
Failed to create ConfigMap: Failed to create ConfigMap: 0 the server could not find the requested resource
Found namespace on kind ConfigMap of user-secrets-source-adminProcessing resource kind: ConfigMap in namespace user-secrets-source-admin name fabric8-git-app-secrets
Failed to create ConfigMap: Failed to create ConfigMap: 0 the server could not find the requested resource
Processing resource kind: ConfigMap in namespace default name fabric8-platform
Failed to create ConfigMap: Failed to create ConfigMap: 0 the server could not find the requested resource
Processing resource kind: ConfigMap in namespace default name gogs
Failed to create ConfigMap: Failed to create ConfigMap: 0 the server could not find the requested resource
Processing resource kind: ConfigMap in namespace default name jenkins
Failed to create ConfigMap: Failed to create ConfigMap: 0 the server could not find the requested resource
Processing resource kind: DeploymentConfig in namespace default name configmapcontroller
Failed to create DeploymentConfig: Failed to create DeploymentConfig: 0 deploymentConfig "configmapcontroller" already exists

platform......................................................................✔
Failed to create ConfigMap: exposecontroller..................................✘ the server could not find the requested resource

I tried creating a ConfigMap to give it a resource using:
$ oc create configmap exposecontroller

However, v1.1.1 of openshift doesn't seem to recognize that syntax.
It keeps saying I need to use $ oc create -f (which I can't get to work correctly)
I try to feed it files from the openshift.local.config/, openshift.local.etcd/, and openshift.local.volumes/ directories, but get no success.

Here's the current state of openshift and fabric8:
$ oc get pods
NAME READY STATUS RESTARTS AGE
configmapcontroller-1-deploy 0/1 ContainerCreating 0 3h
exposecontroller-1-deploy 0/1 ContainerCreating 0 3h
fabric8-1-deploy 0/1 ContainerCreating 0 3h
fabric8-docker-registry-1-deploy 0/1 ContainerCreating 0 3h
fabric8-forge-1-deploy 0/1 ContainerCreating 0 3h
gogs-1-deploy 0/1 ContainerCreating 0 3h
jenkins-1-deploy 0/1 ContainerCreating 0 3h
nexus-1-deploy 0/1 ContainerCreating 0 3h

$gofabric8 validate
Validating your OpenShift installation at https://$HOSTIP in namespace default

Service account...............................................................✔
Console.......................................................................✔
Templates.....................................................................✔
SecurityContextConstraints....................................................✔
PersistentVolumeClaims........................................................✔
ConfigMaps....................................................................✘ the server could not find the requested resource

ConfigMap Annotations - Not updated

When we deploy a ConfigMap along with a service and use the annotation expose.config.fabric8.io/url-key, and there is a delay in exposing the service, the value of the key is set to IP:0 and the map never gets updated, even when the service becomes available some time later.

Environment:

gofabric8, version 0.4.112 (branch: 'master', revision: '50d5d75') build date: '20161202-12:52:33' go version: '1.7.3'
minikube version: v0.13.1

Reproduce:

You can use the following projects to reproduce the issue:

https://github.com/kameshsampath/keycloak-demo-server - this will deploy the keycloak server and create a configmap keycloak-demo-server; post deployment, if you do kubectl get configmap keycloak-demo-server -o yaml you will see the key "keycloak-demo-server--host-ip" set to IP:0

Because of this, the ENV variable used in this application https://github.com/kameshsampath/springboot-keycloak-demo is pointing to the wrong value.

I observed that after a long time the values of the configmap keycloak-demo-server do get set to the right value.

Should we synchronize it correctly and make the dependent application come up with the right values?

Or is there something we need to do at the application level?

exposecontroller does not add exposeUrl

When I use the command line to annotate a service with fabric8.io/expose=true, the controller works as expected and adds the exposeUrl annotation.

The controller does not annotate the service with exposeUrl when:

  1. the service that is being deployed already has the fabric8.io/expose=true annotation. I have to delete the annotation and re-annotate the service using the command line;
  2. kubectl annotate is used with --overwrite;
  3. the service has been deployed by fabric8 maven plugin and contains an "expose: true" label. I must first remove the label and use the command line to annotate it with fabric8.io/expose=true.
