
rio's Introduction

Rancher

This file is auto-generated from README-template.md; please make any changes there.


Rancher is an open source container management platform built for organizations that deploy containers in production. Rancher makes it easy to run Kubernetes everywhere, meet IT requirements, and empower DevOps teams.

Latest Release

  • v2.8
    • Latest - v2.8.3 - rancher/rancher:v2.8.3 / rancher/rancher:latest - Read the full release notes.
    • Stable - v2.8.3 - rancher/rancher:v2.8.3 / rancher/rancher:stable - Read the full release notes.
  • v2.7
    • Latest - v2.7.10 - rancher/rancher:v2.7.10 - Read the full release notes.
    • Stable - v2.7.10 - rancher/rancher:v2.7.10 - Read the full release notes.
  • v2.6
    • Latest - v2.6.14 - rancher/rancher:v2.6.14 - Read the full release notes.
    • Stable - v2.6.14 - rancher/rancher:v2.6.14 - Read the full release notes.

To get automated notifications of our latest release, you can watch the announcements category in our forums, or subscribe to the RSS feed https://forums.rancher.com/c/announcements.rss.

Quick Start

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher

Open your browser to https://localhost
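Before opening the browser, you can wait for the server to come up. A minimal sketch, assuming Rancher answers GET /ping with "pong" once it is ready and that the install uses a self-signed certificate (hence -k):

```shell
# Health probe sketch for the Quick Start container. Assumptions: Rancher
# answers GET /ping with "pong" when healthy, and the certificate is
# self-signed (so -k skips verification).
rancher_ready() {
  [ "$(curl -sk "https://${1:-localhost}/ping")" = "pong" ]
}
# Usage: rancher_ready && echo "Rancher is up"
```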

Installation

See Installing/Upgrading Rancher for all installation options.

Minimum Requirements

  • Operating Systems
    • Please see Support Matrix for specific OS versions for each Rancher version. Note that the link will default to the support matrix for the latest version of Rancher. Use the left navigation menu to select a different Rancher version.
  • Hardware & Software

Using Rancher

To learn more about using Rancher, please refer to our Rancher Documentation.

Source Code

This repo is a meta-repo used for packaging and contains the majority of the Rancher codebase. Rancher also depends on other Rancher projects, modules, and open source libraries; see go.mod for the full list.

Build configuration

Refer to the build docs on how to customize the building and packaging of Rancher.

Support, Discussion, and Community

If you need any help with Rancher, please join us at either our Rancher forums or Slack, where most of our team hangs out.

Please submit any Rancher bugs, issues, and feature requests to rancher/rancher.

For security issues, please first check our security policy and email [email protected] instead of posting a public issue on GitHub. You may (but are not required to) use the GPG key located on Keybase.

License

Copyright (c) 2014-2024 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


rio's Issues

Can't set ready or health check's "unhealthyThreshold"

Steps:

  1. rio run -n tstk/tsrv1 --ready-cmd 'echo hello' --unready-retries 4 nginx
  2. rio inspect tstk/tsrv1

Results: the readycheck's unhealthyThreshold is still set to 3 instead of 4. The unhealthyThreshold for the health check ("--unhealthy-retries") does not work either.

hide enableAutoScale field in inspect service/stack

Version - v0.0.4-rc5

Steps:

  1. rio run -n stack1/service1 nginx
  2. rio inspect stack1
  3. rio inspect stack1/service1

Results: In the results for both the stack and the service inspect there is a field called enableAutoScale. This should be hidden.

When running the weight command I get an error in logs

Steps:

  1. Create a centos machine
  2. rio run -p 80/http --name test/srvmesh ibuildthecloud/demo:v1
  3. rio stage --image=ibuildthecloud/demo:v3 test/srvmesh:v3
  4. rio weight test/srvmesh:v3=50%
  5. Check server log

Results: every time I run the weight command I get an error in the logs:
ERRO[10559] VirtualServiceController test-124a4837/svrmesh [gateway-controller] failed with : gateways.networking.istio.io "external" already exists

When adding config file to service, stuck in updating

Steps:

  1. Create a config name testconfig in rio
  2. rio run -n tsk/tservice --config testconfig:/temp nginx

Results:
The service is stuck in updating.

Comment:
This happened because I didn't put the config in the same stack as the service. It would be nice if there were some way to tell the user that the config wasn't found.

inspect domain doesn't work

Steps:

  1. rio run -p 80/http -n tsk1/tsvr1 nginx:latest
  2. rio domain add test.foo.bar tsk1/tsvr2
  3. rio inspect test.foo.bar

Results: nothing happens. I expect I should be able to inspect a domain somehow, but I'm not sure how.

While being promoted or weight added, service seems to be down

Steps:

  1. Create a centos machine
  2. rio run -p 80/http --name test/srvmesh ibuildthecloud/demo:v1
  3. rio stage --image=ibuildthecloud/demo:v3 test/srvmesh:v3
  4. rio weight test/srvmesh:v3=50%
  5. rio ps and grab endpoint
  6. Curl the endpoint

Results: On CentOS especially, I notice a huge lag before the endpoint responds, and sometimes it returns nothing at all. After a while it starts returning the "hello world" text.

On Docker Mac build, connection failed for service mesh

Steps:

  1. rio run -p 80/http --name test/srvmesh ibuildthecloud/demo:v1
  2. rio stage --image=ibuildthecloud/demo:v3 test/srvmesh:v3
  3. rio weight test/srvmesh:v3=90%
  4. rio ps
  5. Grab endpoint
  6. In terminal : curl -v

Results: Connection failed - Network is unreachable

Create a separate resource for staged service

Right now a staged service always has the same create time as the original service:

sheng@ubuntu-sheng:~/Downloads$ rio ps
NAME                 IMAGE     CREATED         SCALE     STATE     ENDPOINT                                                    DETAIL
epic-bhaskara        nginx     4 minutes ago   1         active    http://epic-bhaskara.default.u2u8hs.lb.rancher.cloud        
epic-bhaskara:next   nginx     4 minutes ago   1         active    http://epic-bhaskara-next.default.u2u8hs.lb.rancher.cloud   

It's better to create a new resource and leave the existing service untouched.

Default for --ready commands are not set to 0 like it says in help

Steps:

  1. rio run -n tstk/tsrv1 --ready-cmd 'echo hello' nginx
  2. rio inspect tstk/tsrv1

Results: healthyThreshold (--ready-retries) is 1, intervalSeconds (--ready-interval) is 10, and timeoutSeconds (--ready-timeout) is 5. The help says they all default to 0. Should we change the help, or fix the commands so they actually default to 0?

run --restart only accepts "always"

Steps:

  1. rio run -n stack1/service1 --restart "never" nginx

Results: the service won't create because only "Always" is supported. I thought this could also be "never" or "onFailure":
failed to create stack1-072a967f/service3 apps/v1beta2, Kind=Deployment for stack-service stack1-072a967f/service3: Deployment.apps "service3" is invalid: spec.template.spec.restartPolicy: Unsupported value: "Never": supported values: "Always"

rio run --cidfile should be changed or dropped

Steps:

  1. rio run -h

Results: Under options:
--cidfile value Write the container ID to the file

Comment: this doesn't make sense in the current context, so we need to either drop it or change it.

run --pid option not being set in kubernetes

Version - v0.0.4-rc3

Steps:

  1. rio run -n tsk/tser2 --pid=host nginx
  2. rio inspect tsk/tser2 (notice that pid = host)
  3. rio kubectl get -n 'namespace' -o=json deploy/tser2

Results: no pid field is set.

On very first start-up, there are no ports so build fails

Version - v0.0.3-rc1
Steps:

  1. Build rio on a fresh machine with no previous rio installed

Results: the build fails because there are no ports and the lb doesn't allow that.
ERRO[2896] StackController /default-265db573 [stack-deploy-controller] failed with : waiting for cluster domain
ERRO[3016] VirtualServiceController all [gateway-controller] failed with : Service "rio-lb" is invalid: spec.ports: Required value

rio run --update-strategy on-delete command throws k8s error

Steps:

  1. rio run -n stack/service --update-strategy on-delete nginx
  2. rio ps

Results: failed to create stack-3dfb19fa/service apps/v1beta2, Kind=StatefulSet for stack-service stack-3dfb19fa/service: StatefulSet.apps "service" is invalid: spec.podManagementPolicy: Invalid value: "RollingUpdate": must be 'OrderedReady' or 'Parallel'

Weight can't go to 0

Steps:

  1. rio run -p 80/http --name test/srvmesh ibuildthecloud/demo:v1
  2. rio stage --image=ibuildthecloud/demo:v3 test/srvmesh:v3
  3. rio weight test/srvmesh:v3=50%
  4. rio weight test/srvmesh:v3=0%
  5. rio export test

Results: Weight stays at 50%, 0% is ignored

Add endpoint to service object

Steps:

  1. rio run -p 80/http -n test/psrv nginx
  2. rio inspect test/psrv

Results: the endpoint isn't in the service object. It would be nice to be able to grab it from there for automation.
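The automation flow being asked for might look like this sketch; the .endpoint field is hypothetical, since it is exactly the field this issue requests:

```shell
# Sketch only: pull the endpoint straight out of the service object.
# The .endpoint field does not exist yet -- it is what this issue asks for.
get_endpoint() {
  rio inspect --format '{{.endpoint}}' "$1"
}
# Usage: curl -s "$(get_endpoint test/psrv)"
```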

endpoint not connecting on service created with port

Version - v0.0.3-rc6
Steps:

  1. rio run -p 80/http --name test/svc --scale=3 ibuildthecloud/demo:v1
  2. rio ps
  3. Grab the endpoint once the state of service becomes active
  4. curl -v

Results:
curl: (7) Failed to connect to port 80: Connection refused

Error message when creating the first service

Rio version: 0.0.2

Just start rio server and then run:

root@ubuntu-sheng:/home/sheng# rio run -p 80/http --name demo ibuildthecloud/demo:v1
default-265db573:demo
root@ubuntu-sheng:/home/sheng# rio ps
NAME      IMAGE                    CREATED         SCALE     STATE     ENDPOINT                                     DETAIL
demo      ibuildthecloud/demo:v1   5 seconds ago   1         pending   http://demo.default.192.168.179.138.nip.io   failed to apply: error: no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3"

The server log immediately shows the following errors. The service eventually did start up, and the errors seemed to resolve themselves.

root@ubuntu-sheng:/home/sheng# rio server
INFO[0000] Starting Rio v0.0.2                          
INFO[0010] Creating CRD gateways.networking.istio.io    
INFO[0010] Creating CRD virtualservices.networking.istio.io 
INFO[0010] Waiting for CRD gateways.networking.istio.io to become available 
INFO[0011] Done waiting for CRD gateways.networking.istio.io to become available 
INFO[0011] Waiting for CRD virtualservices.networking.istio.io to become available 
INFO[0011] Done waiting for CRD virtualservices.networking.istio.io to become available 
INFO[0011] Creating CRD listenconfigs.space.cattle.io   
INFO[0011] Creating CRD services.rio.cattle.io          
INFO[0011] Creating CRD configs.rio.cattle.io           
INFO[0011] Waiting for CRD listenconfigs.space.cattle.io to become available 
INFO[0011] Creating CRD routesets.rio.cattle.io         
INFO[0011] Creating CRD volumes.rio.cattle.io           
INFO[0011] Creating CRD stacks.rio.cattle.io            
INFO[0012] Done waiting for CRD listenconfigs.space.cattle.io to become available 
INFO[0012] Listening on :7443                           
INFO[0012] Listening on :7080                           
INFO[0012] Client token is available at /var/lib/rancher/rio/server/client-token 
INFO[0012] Node token is available at /var/lib/rancher/rio/server/node-token 
INFO[0012] To use CLI: rio login -s https://192.168.179.138:7443 -t R1098060a828aaa2a1cc6406e4745217fd94fbcc7f9dffc180ffbb850cd075eb0f8::admin:d4cc5d62dd2416bdea79012306955ff4 
INFO[0012] To join node to cluster: rio agent -s https://192.168.179.138:7443 -t R1098060a828aaa2a1cc6406e4745217fd94fbcc7f9dffc180ffbb850cd075eb0f8::node:ea907929e34869905d12893310e0e116 
INFO[0013] Agent starting, logging to /var/lib/rancher/rio/agent/agent.log 
INFO[0013] 2018/08/14 19:12:18 http: TLS handshake error from 127.0.0.1:38880: remote error: tls: bad certificate 
INFO[0021] 2018/08/14 19:12:26 http: TLS handshake error from [::1]:54706: remote error: tls: bad certificate 
INFO[0021] Handling backend connection request [ubuntu-sheng] 
INFO[0027] 2018/08/14 19:12:32 http: TLS handshake error from 127.0.0.1:38966: remote error: tls: bad certificate 
ERRO[0062] PodController istio-095b8502/istio-gateway-868ff87668-bg4rt [pod-controller] failed with : CreateDomain: failed to execute a request: Post http://api.lb.rancher.cloud/v1/domain: dial tcp 52.8.238.187:80: i/o timeout 
ERRO[0092] PodController istio-095b8502/istio-gateway-868ff87668-bg4rt [pod-controller] failed with : CreateDomain: failed to execute a request: Post http://api.lb.rancher.cloud/v1/domain: dial tcp 52.8.238.187:80: i/o timeout 
ERRO[0122] PodController istio-095b8502/istio-gateway-868ff87668-bg4rt [pod-controller] failed with : CreateDomain: failed to execute a request: Post http://api.lb.rancher.cloud/v1/domain: dial tcp 52.8.238.187:80: i/o timeout 
INFO[0128] 2018/08/14 19:14:13 http: TLS handshake error from 127.0.0.1:39056: remote error: tls: bad certificate 
ERRO[0130] Failed to apply error: no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3"
: apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  creationTimestamp: null
  labels:
    apply.cattle.io/generationID: "0"
    apply.cattle.io/groupID: stackdeploy-mesh-default-265db573
    rio.cattle.io: "true"
    rio.cattle.io/namespace: default-265db573
    rio.cattle.io/service: demo
  name: demo
  namespace: default-265db573
spec:
  host: demo
  subsets:
  - labels:
      rio.cattle.io/revision: latest
    name: latest

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  annotations:
    rio.cattle.io/ports: "80"
  creationTimestamp: null
  labels:
    apply.cattle.io/generationID: "0"
    apply.cattle.io/groupID: stackdeploy-mesh-default-265db573
    rio.cattle.io: "true"
    rio.cattle.io/namespace: default-265db573
    rio.cattle.io/revision: latest
    rio.cattle.io/service: demo
  name: demo
  namespace: default-265db573
spec:
  gateways:
  - mesh
  - external.rio-system.svc.cluster.local
  hosts:
  - demo
  - demo.default.192.168.179.138.nip.io
  http:
  - match:
    - gateways:
      - mesh
      - external.rio-system.svc.cluster.local
      port: 80
    route:
    - destination:
        host: demo
        port:
          number: 80
        subset: latest
      weight: 100
 
ERRO[0130] StackController /default-265db573 [stack-deploy-controller] failed with : failed to apply: error: no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3"
 
[the same "Failed to apply error: no matches for kind "DestinationRule"" error, with the identical DestinationRule/VirtualService manifest, repeats at ERRO[0131], ERRO[0133], ERRO[0136], ERRO[0141], and ERRO[0150]]
ERRO[0152] PodController istio-095b8502/istio-gateway-868ff87668-bg4rt [pod-controller] failed with : CreateDomain: failed to execute a request: Post http://api.lb.rancher.cloud/v1/domain: dial tcp 54.193.239.2:80: i/o timeout 

Change info for --network in rio run -h

Steps:

  1. rio run -h

Results: there is an option for --network, but only two values are available: default and home. We need to change the help text and validate the value in the CLI.

Ports not working for services

Steps:

  1. rio run -p 80/http --name test/svc --scale=3 ibuildthecloud/demo:v1
  2. Wait for service to be active
  3. curl -s "endpoint"

Results: No response

rio scale is not immediately setting status to "updating"

Steps:

  1. rio scale service=10; rio ps

Results: the status is set to active; there is a delay before it's set to "updating".

Comment: I need this fixed for the scale validation tests. In scale.py, the kube test keeps failing because wait doesn't actually wait. I put in a sleep, but as soon as wait is fixed I should be able to take it out.
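Once the status transition is immediate, the sleep workaround could become a polling helper. A sketch; get_state is a stand-in, and the parsing of `rio ps` output is an assumption:

```shell
# Polling sketch to replace a fixed sleep. get_state is a stand-in:
# swap in however you extract the STATE column from `rio ps`
# (the parsing here is an assumption about the output format).
get_state() {
  rio ps | grep -w "$1" | grep -oE 'active|updating|pending' | head -n1
}

wait_for_state() {
  # wait_for_state <service> <state> [tries]: poll once a second.
  service=$1; want=$2; tries=${3:-30}
  for _ in $(seq "$tries"); do
    [ "$(get_state "$service")" = "$want" ] && return 0
    sleep 1
  done
  return 1
}
```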

rio ps returns error after I remove service

Steps:

  1. Create 4 stack and put services in all of them
  2. rio rm (remove all 4 stacks in a row), but leave the default alone
  3. rio ps

Results: "FATA[0000] template: :1:24: executing "" at <.Stack.Name>: can't evaluate field Name in type *client.Stack"

NAME        STATE     CREATED          DESC      DETAIL
stack       active    3 seconds ago              
teststack   active    28 seconds ago             
test        active    25 hours ago               
default     active    7 days ago                 
venus:~ tani$ rio rm stack
default:stack
venus:~ tani$ rio rm teststack
default:teststack
venus:~ tani$ rio rm test
default:test
venus:~ tani$ rio ps
FATA[0000] template: :1:24: executing "" at <.Stack.Name>: can't evaluate field Name in type *client.Stack

Weight percentage is not working properly

Steps:

  1. rio run -p 80/http --name test/srvmesh ibuildthecloud/demo:v1
  2. rio stage --image=ibuildthecloud/demo:v3 test/srvmesh:v3
  3. rio weight test/srvmesh:v3=90%
  4. rio ps
  5. Grab endpoint
  6. In terminal - while true; do curl ; sleep 1; done

Results:
Notice that v1 is still being hit far more than v3, when it should be 90% v3. Even if I set the weight to 100%, it still hits v1 at least 50% of the time.
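The skew can be quantified by counting which revision answers over many requests. A sketch; the endpoint argument is a placeholder, and matching on "v1"/"v3" assumes the demo images echo their version:

```shell
# Measurement sketch: count which revision answers over n requests.
# Assumes the demo responses contain "v1" or "v3"; endpoint is a placeholder.
count_versions() {
  endpoint=$1; n=$2; v1=0; v3=0
  for _ in $(seq "$n"); do
    case "$(curl -s "$endpoint")" in
      *v1*) v1=$((v1 + 1)) ;;
      *v3*) v3=$((v3 + 1)) ;;
    esac
  done
  echo "v1=$v1 v3=$v3"
}
# Usage: count_versions "endpoint" 100
```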

run --health-retries always has to be 1 so we shouldn't allow it to be changed

Steps:

  1. rio run -n stack/service1 --health-cmd 'echo hello' --health-retries 2 nginx

Results: the service won't go active because retries can only be 1:
failed to create stack-3dfb19fa/service1 apps/v1beta2, Kind=Deployment for stack-service stack-3dfb19fa/service1: Deployment.apps "service1" is invalid: spec.template.spec.containers[0].livenessProbe.successThreshold: Invalid value: 2: must be 1

--expose setting host port on pod

Steps:

  1. rio run -n tsk1/texpose --expose 22:80 nginx
  2. rio inspect --format '{{.id}}' tsk1/texpose | cut -f1 -d: #to get
  3. rio kubectl get -n -o=json deploy/texpose1

Results: the host port is filled out.

Can't start an inactive pod or stop a running one

Steps:

  1. rio create --name test1 nginx

Results: once the pod is created, it's inactive. I thought I should be able to run "rio start test1" to start it, but I couldn't. I also think we should be able to stop a running pod.

Error message during `rio server` startup

sheng@ubuntu-sheng:~/Downloads/rio-v0.0.4-rc1-linux-amd64$ sudo rio server
INFO[0000] Starting Rio v0.0.4-rc1                      
INFO[0001] Creating CRD gateways.networking.istio.io    
INFO[0001] Creating CRD virtualservices.networking.istio.io 
INFO[0001] Waiting for CRD gateways.networking.istio.io to become available 
INFO[0001] Done waiting for CRD gateways.networking.istio.io to become available 
INFO[0001] Waiting for CRD virtualservices.networking.istio.io to become available 
INFO[0002] Done waiting for CRD virtualservices.networking.istio.io to become available 
INFO[0002] Creating CRD listenconfigs.space.cattle.io   
INFO[0002] Creating CRD services.rio.cattle.io          
INFO[0002] Waiting for CRD listenconfigs.space.cattle.io to become available 
INFO[0002] Creating CRD configs.rio.cattle.io           
INFO[0002] Creating CRD routesets.rio.cattle.io         
INFO[0002] Creating CRD volumes.rio.cattle.io           
INFO[0002] Creating CRD stacks.rio.cattle.io            
INFO[0002] Done waiting for CRD listenconfigs.space.cattle.io to become available 
INFO[0003] Listening on :7443                           
INFO[0003] Listening on :7080                           
INFO[0003] Client token is available at /var/lib/rancher/rio/server/client-token 
INFO[0003] Node token is available at /var/lib/rancher/rio/server/node-token 
INFO[0003] To use CLI: rio login -s https://192.168.6.147:7443 -t R10a3ba1932fbbf3981967edb20349be0d94c4cb572fcfa4b189d80814e225cdb1d::admin:0d520f6c665dc0307ef8c31439ce582f 
INFO[0003] To join node to cluster: rio agent -s https://192.168.6.147:7443 -t R10a3ba1932fbbf3981967edb20349be0d94c4cb572fcfa4b189d80814e225cdb1d::node:1427d55dd93abf59451c1aaff99ebb0a 
INFO[0004] Agent starting, logging to /var/lib/rancher/rio/agent/agent.log 
INFO[0004] 2018/10/15 17:33:39 http: TLS handshake error from 127.0.0.1:53970: remote error: tls: bad certificate 
ERRO[0005] ServiceController istio-095b8502/rio-lb [domain-controller] failed with : endpoints "istio-095b8502/rio-lb" not found 
ERRO[0005] ServiceController istio-095b8502/rio-lb [domain-controller] failed with : endpoints "istio-095b8502/rio-lb" not found 
ERRO[0006] ServiceController istio-095b8502/rio-lb [domain-controller] failed with : endpoints "istio-095b8502/rio-lb" not found 
INFO[0007] 2018/10/15 17:33:42 http: TLS handshake error from [::1]:39826: remote error: tls: bad certificate 
INFO[0007] Handling backend connection request [ubuntu-sheng] 
ERRO[0008] ServiceController istio-095b8502/rio-lb [domain-controller] failed with : endpoints "istio-095b8502/rio-lb" not found 
ERRO[0012] ServiceController istio-095b8502/rio-lb [domain-controller] failed with : endpoints "istio-095b8502/rio-lb" not found 
INFO[0036] 2018/10/15 17:34:11 http: TLS handshake error from 127.0.0.1:54308: remote error: tls: bad certificate 

run --user and --group-add aren't working

Steps for --user:

  1. rio run -n tsk1/tuser --user demo-user nginx
  2. rio kubectl get -n -o=json deploy/tuser

Results:
No user is set in k8s

Steps for --group-add:

  1. rio run -n tsk1/tgradd --group-add demo-group nginx
  2. rio inspect tsk1/tgradd

Results:
No group is set on the rio side, and it is not being passed to k8s.

no help for rio up

Steps:

  1. rio -h

Results: there is no help text associated with the rio up command. Please add some.
