
community's Introduction


The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management



What is KubeSphere

English | 中文

KubeSphere is a distributed operating system for cloud-native application management, using Kubernetes as its kernel. It provides a plug-and-play architecture, allowing third-party applications to be seamlessly integrated into its ecosystem. KubeSphere is also a multi-tenant container platform with full-stack automated IT operations and streamlined DevOps workflows. It provides a developer-friendly wizard web UI, helping enterprises build a more robust and feature-rich platform that includes the most common functionality needed for an enterprise Kubernetes strategy. See the Feature List for details.

The following screenshots offer a closer look at KubeSphere. Please check What is KubeSphere for further information.

Screenshots: Workbench, Project Resources, CI/CD Pipeline, App Store

Demo environment

🎮 KubeSphere Lite provides you with a free, stable, and out-of-the-box managed cluster service. After registering and logging in, you can create a K8s cluster with KubeSphere installed in only 5 seconds and experience the feature-rich KubeSphere.

🖥 You can view the Demo Video to get started with KubeSphere.

Features

🕸 Provisioning Kubernetes Cluster Supports deploying Kubernetes on any infrastructure, with both online and air-gapped installation. Learn more.
🔗 Kubernetes Multi-cluster Management Provide a centralized control plane to manage multiple Kubernetes clusters, and support the ability to propagate an app to multiple K8s clusters across different cloud providers.
🤖 Kubernetes DevOps Provide GitOps-based CD solutions and use Argo CD to provide the underlying support, collecting CD status information in real time. With the mainstream CI engine Jenkins integrated, DevOps has never been easier. Learn more.
🔎 Cloud Native Observability Multi-dimensional monitoring, events and auditing logs are supported; multi-tenant log query and collection, alerting and notification are built-in. Learn more.
🧩 Service Mesh (Istio-based) Provide fine-grained traffic management, observability, and tracing for distributed microservice applications, with visualization of the traffic topology. Learn more.
💻 App Store Provide an App Store for Helm-based applications, and offer application lifecycle management on Kubernetes platform. Learn more.
💡 Edge Computing Platform KubeSphere integrates KubeEdge to enable users to deploy applications on edge devices and view their logs and monitoring metrics on the console. Learn more.
📊 Metering and Billing Track resource consumption at different levels on a unified dashboard, which helps you make better-informed decisions on planning and reduce costs. Learn more.
🗃 Support Multiple Storage and Networking Solutions
  • Support GlusterFS, CephRBD, NFS, LocalPV solutions, and provide CSI plugins to consume storage from multiple cloud providers.
  • Provide Load Balancer Implementation OpenELB for Kubernetes in bare-metal, edge, and virtualization.
  • Provide network policy and Pod IP pool management, with support for Calico, Flannel, and Kube-OVN
  • ...
🏘 Multi-tenancy Provide unified authentication with fine-grained roles and a three-tier authorization system, and support AD/LDAP authentication.
🧠 GPU Workloads Scheduling and Monitoring Create GPU workloads on the GUI, schedule GPU resources, and manage GPU resource quotas by tenant.

    Architecture

    KubeSphere uses a loosely-coupled architecture that separates the frontend from the backend. External systems can access the components of the backend through the REST APIs.

    Architecture
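For illustration, here is a minimal sketch of calling the backend REST APIs from outside the cluster, assuming the default admin account and that the ks-apiserver Service exposes port 80; the token endpoint and resource path below follow common KubeSphere API conventions and should be verified against your version:

# Make ks-apiserver reachable locally (assumes the Service is named ks-apiserver and listens on port 80)
kubectl -n kubesphere-system port-forward svc/ks-apiserver 9090:80

# In another shell, request an access token (parameters shown are the defaults; adjust for your setup)
curl -s -X POST http://localhost:9090/oauth/token \
  --data-urlencode 'grant_type=password' \
  --data-urlencode 'username=admin' \
  --data-urlencode 'password=P@88w0rd' \
  --data-urlencode 'client_id=kubesphere' \
  --data-urlencode 'client_secret=kubesphere'

# Call a resource API with the returned token (illustrative path)
curl -s -H "Authorization: Bearer <access_token>" \
  http://localhost:9090/kapis/resources.kubesphere.io/v1alpha3/namespaces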


    Latest release

🎉 KubeSphere v3.4.0 was released! It brings enhancements and a better user experience; see the Release Notes for 3.4.0 for details.

    Component supported versions table

| Component | Version | Supported K8s versions |
| --- | --- | --- |
| Alerting | N/A | 1.21, 1.22, 1.23, 1.24, 1.25, 1.26 |
| Auditing | v0.2.0 | 1.21, 1.22, 1.23, 1.24, 1.25, 1.26 |
| Monitoring | N/A | 1.21, 1.22, 1.23, 1.24, 1.25, 1.26 |
| DevOps | v3.4.0 | 1.21, 1.22, 1.23, 1.24, 1.25, 1.26 |
| EdgeRuntime | v1.13.0 | 1.21, 1.22, 1.23 |
| Events | N/A | 1.21, 1.22, 1.23, 1.24, 1.25, 1.26 |
| Logging | opensearch: v2.6.0; fluentbit-operator: v0.14.0; fluent-bit: v1.9.4 | 1.21, 1.22, 1.23, 1.24, 1.25, 1.26 |
| Metrics Server | v0.4.2 | 1.21, 1.22, 1.23, 1.24, 1.25, 1.26 |
| Network | N/A | 1.21, 1.22, 1.23, 1.24, 1.25, 1.26 |
| Notification | v2.3.0 | 1.21, 1.22, 1.23, 1.24, 1.25, 1.26 |
| AppStore | N/A | 1.21, 1.22, 1.23, 1.24, 1.25, 1.26 |
| Storage | pvc-autoresizer: v0.3.0; storageclass-accessor: v0.2.2 | 1.21, 1.22, 1.23, 1.24, 1.25, 1.26 |
| ServiceMesh | Istio: v1.14.6 | 1.21, 1.22, 1.23, 1.24 |
| Gateway | Ingress NGINX Controller: v1.3.1 | 1.21, 1.22, 1.23, 1.24 |

    Installation

KubeSphere can run anywhere, from on-premises datacenters to any cloud to the edge. In addition, it can be deployed on any version-compatible Kubernetes cluster. By default, the installer performs a minimal installation; you can enable other pluggable components before or after installation.
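As a sketch of how a pluggable component can be enabled after installation (assuming the default ClusterConfiguration object named ks-installer; field names vary by version):

# Edit the ClusterConfiguration that drives ks-installer
kubectl -n kubesphere-system edit clusterconfiguration ks-installer
# In the editor, set the desired component to enabled, e.g.:
#   devops:
#     enabled: true
# Then watch the installer reconcile the change
kubectl -n kubesphere-system logs deploy/ks-installer -f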

    Quick start

    Installing on K8s/K3s

Ensure that your cluster is running Kubernetes v1.21.x, v1.22.x, v1.23.x, v1.24.x*, v1.25.x*, or v1.26.x*. For Kubernetes versions marked with an asterisk, some features may be unavailable due to incompatibility.
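A quick way to confirm the cluster version before installing (a minimal check, assuming kubectl already points at the target cluster):

# The VERSION column should show a supported release (v1.21–v1.26)
kubectl get nodes -o wide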

    Run the following commands to install KubeSphere on an existing Kubernetes cluster:

    kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
    
    kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml

    All-in-one

    👨‍💻 No Kubernetes? You can use KubeKey to install both KubeSphere and Kubernetes/K3s in single-node mode on your Linux machine. Let's take K3s as an example:

    # Download KubeKey
    curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.10 sh -
    # Make kk executable
    chmod +x kk
    # Create a cluster
    ./kk create cluster --with-kubernetes v1.24.14 --container-manager containerd --with-kubesphere v3.4.0

You can run the following command to view the installation logs. After KubeSphere is successfully installed, you can access the KubeSphere web console at http://IP:30880 and log in using the default administrator account (admin / P@88w0rd).

    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
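Once the installer finishes, here is a small sketch to confirm where the console is exposed (assuming the default ks-console NodePort service):

# The console Service should map port 80 to NodePort 30880 by default
kubectl -n kubesphere-system get svc ks-console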

    KubeSphere for hosted Kubernetes services

    KubeSphere is hosted on the following cloud providers, and you can try KubeSphere by one-click installation on their hosted Kubernetes services.

    You can also install KubeSphere on other hosted Kubernetes services within minutes, see the step-by-step guides to get started.

👨‍💻 No internet access? Refer to the Air-gapped Installation on Kubernetes or the Air-gapped Installation on Linux for instructions on how to use a private registry to install KubeSphere.

    Guidance, discussion, contribution, and support

We ❤️ your contribution. The community walks you through how to get started contributing to KubeSphere. The development guide explains how to set up a development environment.

🤗 Please submit any KubeSphere bugs, issues, and feature requests to the KubeSphere GitHub Issues page.

💟 The KubeSphere team also provides efficient official ticket support that responds within hours. For more information, click KubeSphere Online Support.

Who is using KubeSphere

    The user case studies page includes the user list of the project. You can leave a comment to let us know your use case.

    Landscapes



        

KubeSphere is a member of CNCF and a Kubernetes Conformance Certified platform, which enriches the CNCF Cloud Native Landscape.

    community's People

    Contributors

calvinyv, chenz24, duanjiong, faweizhao26, forest-l, halil-bugol, hantmac, hlwanghl, jaycean, johnniang, junotx, ks-ci-bot, linuxsuren, liug-lynx, mvpzhangkai, patrickluoyu, rayzhou2017, rolandma1986, shaowenchen, shenhonglei, sherlock113, spwangxp, swiftslee, tester-rep, wanjunlei, wansir, webup, yudong2015, zheng1, zryfish


    community's Issues

Hardware parameters in a large K8s cluster

    • Kubelet parameters
    maxPods: 110
    rotateCertificates: true
    kubeReserved:
      cpu: 200m
      memory: 250Mi
    systemReserved:
      cpu: 200m
      memory: 250Mi
    evictionHard:
      memory.available: 5%
    evictionSoft:
      memory.available: 10%
    evictionSoftGracePeriod: 
      memory.available: 2m
    evictionMaxPodGracePeriod: 120
    evictionPressureTransitionPeriod: 30s
    
    • Etcd parameters, refer to etcd
(etcd parameters screenshot omitted)
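For reference, here is a minimal sketch of how such kubelet settings could be applied on a node, assuming a kubeadm-style setup where the kubelet reads /var/lib/kubelet/config.yaml; treat it as illustrative and merge the fields by hand rather than appending blindly:

# Write the tuning fields to a scratch file, then merge them manually into
# /var/lib/kubelet/config.yaml (appending as-is could duplicate existing keys)
cat <<'EOF' > kubelet-tuning.yaml
maxPods: 110
rotateCertificates: true
kubeReserved:
  cpu: 200m
  memory: 250Mi
systemReserved:
  cpu: 200m
  memory: 250Mi
evictionHard:
  memory.available: "5%"
evictionSoft:
  memory.available: "10%"
evictionSoftGracePeriod:
  memory.available: 2m
evictionMaxPodGracePeriod: 120
evictionPressureTransitionPeriod: 30s
EOF
# After merging, restart the kubelet to pick up the change
sudo systemctl restart kubelet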

    REQUEST: New membership for Dhruv

    GitHub Username

    @Dhruv Kela

    Requirements

    • Active contribution to the project. Contributed at least one notable PR to a specific SIG codebase within half a year
    • Finish one or more features
    • Sponsored by two members of the SIG and approved by the lead of the SIG
    • Help review PR from other contributors

    Sponsors

    @wanjunlei
    @benjaminhuo

    List of contributions to the KubeSphere project

    [Proposal] Setup a new SIG for Advocacy and Outreach

    Background

Community growth is very important for any project, and advocacy and outreach can dramatically help with that. The key point is that it should not be done by one person or privately.

As far as I can see, there are a couple of folks interested in this area.

    Why

As an open-source community, we should discuss as much as possible in the open. Discussing advocacy and outreach openly is reasonable, and setting up this kind of SIG can gather many more great ideas and innovations.

    Benefit

    • Less private conversations
    • A new channel for those who might want to join us
• A possible bridge between software engineers, community managers, and marketing
• Less unnecessary disturbance for those who are not interested in this

    Who

Basically, anyone who is interested in the discussion around advocacy and outreach. Whether you're a software engineer, technical writer, or marketer, community growth needs you.

    Potential members

    I guess these people might be the potential initial members:

    Concern

• There may not be enough people to join this SIG.
  • Three people should be enough to start it up.

    Reference

    https://www.jenkins.io/sigs/advocacy-and-outreach/

    [Proposal] Using GitHub Projects to Manage the Process of SIG-DevOps

    Background

DevOps consists of pipeline and s2i features and spans many repositories. SIG-DevOps can't display the progress and status of its work intuitively.

    Benefit

    • GitHub Projects is more friendly to the community
    • Everyone can get our progress and participate in it
    • Issues from different repositories will be aggregated

    Workflows

• Create a project named SIG-DevOps or DevOps Team
• Add columns (To do, In progress, Done, Refused, Delayed) to show the different stages
• Review open issues about DevOps and add them to the appropriate stages
• Triage new issues as they come in

    Member

    @kubesphere/sig-devops

    Host ks cannot enter the member ks.

The host KubeSphere cluster cannot access the member cluster. A cluster was imported, but when hovering over the cluster and clicking for details, a dialog pops up: "Session timed out or this account has logged in elsewhere. Please log in again."

    [Proposal] Setup a new SIG for Metering

Metering will be integrated into KubeSphere as an independent module within the toolbox. Metering is based on the Prometheus API and provides users with consumption information about cluster resources. So do we need to set up a new SIG for metering, or classify it under an existing SIG?
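Since metering reads from the Prometheus API, here is a minimal sketch of the kind of query it would rely on, using the in-cluster Prometheus endpoint referenced elsewhere on this page; the exact metrics and labels used by the metering module may differ:

# Per-namespace CPU usage via the Prometheus HTTP API (illustrative query)
curl -sG 'http://prometheus-k8s.kubesphere-monitoring-system.svc:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (namespace)'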

    The division of technical writers

There are many features and components in KubeSphere, and each component has a steep learning curve. A clear division of labor allows the technical writers to dive deep into specific domains and be more efficient. From my point of view, I've summarized these directions (domains) for our technical writers to choose from. If anything has been missed, please add your comment below.

    Mindmap: https://www.processon.com/view/link/600f7abe079129045d37d571#map


    Let's schedule a meeting to discuss the division.

    php s2i example needed

1. Build an s2i-php-container
2. Add it to KubeSphere
3. Explain how to use it

Save the result to sig-devops/examples/how-to-add-s2i-php-container.md

    /documentation
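As a starting point, here is a hedged sketch of building and testing a PHP application image with the s2i CLI; the builder image and sample repository below are placeholders, not existing KubeSphere artifacts:

# Build an application image from source with a PHP S2I builder image (names are placeholders)
s2i build https://github.com/example/php-sample-app my-registry/s2i-php-builder:latest my-php-app:latest
# Run it locally to verify before adding the builder to KubeSphere
docker run -p 8080:8080 my-php-app:latest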

A network error occurred while connecting to a remote service with Telepresence

    arvin.he@vk ~/go/src/kubesphere.io/kubesphere (master●●)$ telepresence --namespace kubesphere-system --swap-deployment ks-apiserver --expose 9090:9090 --run go run ./cmd/ks-apiserver/apiserver.go
    T: Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you
    T: can't use other VPNs. You may need to add cloud hosts and headless services with --also-proxy. For a full list of method limitations see
    T: https://telepresence.io/reference/methods.html
    T: Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.
    T: Starting network proxy to cluster by swapping out Deployment ks-apiserver with a proxy
    T: Forwarding remote port 9090 to local port 9090.
    
    T: Connected. Flushing DNS cache.
    T: Setup complete. Launching your command.
    Error: factory is not able to fill the pool: LDAP Result Code 200 "Network Error": dial tcp 10.233.71.55:389: i/o timeout
    Usage:
      ks-apiserver [flags]
    
    Flags:
          --access-token-max-age duration                   AccessTokenMaxAgeSeconds  control the lifetime of access tokens, 0 means no expiration. (default 24h0m0s)
          --add-dir-header                                  If true, adds the file directory to the header
          --agent-image string                              This field is used when generating deployment yaml for agent. (default "kubesphere/tower:v1.0")
          --alsologtostderr                                 log to standard error as well as files
          --auditing-elasticsearch-host string              Elasticsearch service host. KubeSphere is using elastic as auditing store, if this filed left blank, KubeSphere will use kubernetes builtin event API instead, and the following elastic search options will be ignored.
          --auditing-elasticsearch-version string           Elasticsearch major version, e.g. 5/6/7, if left blank, will detect automatically.Currently, minimum supported version is 5.x
          --auditing-enabled                                Enable auditing component or not.
          --auditing-index-prefix string                    Index name prefix. KubeSphere will retrieve auditing against indices matching the prefix. (default "ks-logstash-auditing")
          --auditing-webhook-url string                     Auditing wehook url
          --authenticate-max-retries int
          --authenticate-rate-limiter-duration duration      (default 30m0s)
          --authenticate-rate-limiter-max-retries int        (default 5)
          --authorization string                            Authorization setting, allowed values: AlwaysDeny, AlwaysAllow, RBAC. (default "RBAC")
          --bind-address string                             server bind address (default "0.0.0.0")
          --debug                                           Don't enable this if you don't know what it means.
          --elasticsearch-host string                       Elasticsearch logging service host. KubeSphere is using elastic as log store, if this filed left blank, KubeSphere will use kubernetes builtin log API instead, and the following elastic search options will be ignored. (default "http://elasticsearch-logging-data.kubesphere-logging-system.svc.cluster.local:9200")
          --elasticsearch-version string                    Elasticsearch major version, e.g. 5/6/7, if left blank, will detect automatically.Currently, minimum supported version is 5.x
          --enable-network-policy                           This field instructs KubeSphere to enable network policy or not.
      -h, --help                                            help for ks-apiserver
          --index-prefix string                             Index name prefix. KubeSphere will retrieve logs against indices matching the prefix. (default "ks-logstash-log")
          --insecure-port int                               insecure port number (default 9090)
          --istio-pilot-host string                         istio pilot discovery service url
          --jaeger-query-host string                        jaeger query service url
          --jenkins-host string                             Jenkins service host address. If left blank, means Jenkins is unnecessary. (default "http://ks-jenkins.kubesphere-devops-system.svc/")
          --jenkins-max-connections int                     Maximum allowed connections to Jenkins.  (default 100)
          --jenkins-password string                         Password for access to Jenkins service, used pair with username. (default "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFkbWluQGt1YmVzcGhlcmUuaW8iLCJleHAiOjE4MTYyMzkwMjIsInVzZXJuYW1lIjoiYWRtaW4ifQ.okmNepQvZkBRe1M8z2HAWRN0AVj9ooVu79IafHKCjZI")
          --jenkins-username string                         Username for access to Jenkins service. Leave it blank if there isn't any. (default "admin")
          --jwt-secret string                               Secret to sign jwt token, must not be empty.
          --kubeconfig string                               Path for kubernetes kubeconfig file, if left blank, will use in cluster way. (default "/Users/arvin.he/.kube/config")
          --ldap-group-search-base string                   Ldap group search base. (default "ou=Groups,dc=kubesphere,dc=io")
          --ldap-host string                                Ldap service host, if left blank, all of the following ldap options will be ignored and ldap will be disabled. (default "openldap.kubesphere-system.svc:389")
          --ldap-manager-dn string                          Ldap manager account domain name. (default "cn=admin,dc=kubesphere,dc=io")
          --ldap-manager-password string                    Ldap manager account password. (default "P@88w0rd")
          --ldap-user-search-base string                    Ldap user search base. (default "ou=Users,dc=kubesphere,dc=io")
          --log-backtrace-at traceLocation                  when logging hits line file:N, emit a stack trace (default :0)
          --log-dir string                                  If non-empty, write log files in this directory
          --log-file string                                 If non-empty, use this log file
          --log-file-max-size uint                          Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
          --logtostderr                                     log to standard error instead of files (default true)
          --master string                                   Used to generate kubeconfig for downloading, if not specified, will use host in kubeconfig. (default "https://172.16.98.150:6443")
          --multiple-clusters                               This field instructs KubeSphere to enter multiple-cluster mode or not.
          --multiple-login                                  Allow multiple login with the same account, disable means only one user can login at the same time.
          --openpitrix-app-manager-endpoint string          OpenPitrix app manager endpoint (default "openpitrix-app-manager.openpitrix-system.svc:9102")
          --openpitrix-attachment-manager-endpoint string   OpenPitrix attachment manager endpoint (default "openpitrix-attachment-manager.openpitrix-system.svc:9122")
          --openpitrix-category-manager-endpoint string     OpenPitrix category manager endpoint (default "openpitrix-category-manager.openpitrix-system.svc:9113")
          --openpitrix-cluster-manager-endpoint string      OpenPitrix cluster manager endpoint (default "openpitrix-cluster-manager.openpitrix-system.svc:9104")
          --openpitrix-repo-indexer-endpoint string         OpenPitrix repo indexer endpoint (default "openpitrix-repo-indexer.openpitrix-system.svc:9108")
          --openpitrix-repo-manager-endpoint string         OpenPitrix repo manager endpoint (default "openpitrix-repo-manager.openpitrix-system.svc:9101")
          --openpitrix-runtime-manager-endpoint string      OpenPitrix runtime manager endpoint (default "openpitrix-runtime-manager.openpitrix-system.svc:9103")
          --prometheus-endpoint string                      Prometheus service endpoint which stores KubeSphere monitoring data, if left blank, will use builtin metrics-server as data source. (default "http://prometheus-k8s.kubesphere-monitoring-system.svc:9090")
          --proxy-publish-address string                    Public address of tower, APIServer will use this field as proxy publish address. This field takes precedence over field proxy-publish-service. For example, http://139.198.121.121:8080.
          --proxy-publish-service string                    Service name of tower. APIServer will use its ingress address as proxy publish address.For example, tower.kubesphere-system.svc.
          --redis-db int
          --redis-host string                               Redis connection URL. If left blank, means redis is unnecessary, redis will be disabled. (default "redis.kubesphere-system.svc")
          --redis-password string
          --redis-port int                                   (default 6379)
          --s3-access-key-id string                         access key of s2i s3 (default "openpitrixminioaccesskey")
          --s3-bucket string                                bucket name of s2i s3 (default "s2i-binaries")
          --s3-disable-SSL                                  disable ssl (default true)
          --s3-endpoint string                              Endpoint to access to s3 object storage service, if left blank, the following options will be ignored. (default "http://minio.kubesphere-system.svc:9000")
          --s3-force-path-style                             force path style (default true)
          --s3-region string                                Region of s3 that will access to, like us-east-1. (default "us-east-1")
          --s3-secret-access-key string                     secret access key of s2i s3 (default "openpitrixminiosecretkey")
          --s3-session-token string                         session token of s2i s3
          --secure-port int                                 secure port number
          --servicemesh-prometheus-host string              prometheus service for servicemesh
          --skip-headers                                    If true, avoid header prefixes in the log messages
          --skip-log-headers                                If true, avoid headers when opening log files
          --sonarqube-host string                           Sonarqube service address, if left empty, following sonarqube options will be ignored. (default "http://172.16.98.150:32297")
          --sonarqube-token string                          Sonarqube service access token. (default "4e51de276f1fd0eb3a20b58e523d43ce76347302")
          --stderrthreshold severity                        logs at or above this threshold go to stderr (default 2)
          --tls-cert-file string                            tls cert file
          --tls-private-key string                          tls private key
      -v, --v Level                                         number for the log level verbosity
          --vmodule moduleSpec                              comma-separated list of pattern=N settings for file-filtered logging
    
    2020/06/24 22:33:51 factory is not able to fill the pool: LDAP Result Code 200 "Network Error": dial tcp 10.233.71.55:389: i/o timeout
    exit status 1
    T: Your process exited with return code 1.
    T: Exit cleanup in progress
    T: Swapping Deployment ks-apiserver back to its original state
    arvin.he@vk ~/go/src/kubesphere.io/kubesphere (master●●)$
    

The kubesphere.yaml file is located as shown below:

    arvin.he@vk ~/go/src/kubesphere.io/kubesphere (master●●)$ ls
    CONTRIBUTING.md  OWNERS           README_zh.md     build            coverage.txt     go.mod           install          telepresence.log vendor
    LICENSE          PROJECT          api              cmd              doc.go           go.sum           kubesphere.yaml  test
    Makefile         README.md        bin              config           docs             hack             pkg              tools
    

The kubesphere.yaml content is as follows:

    kubernetes:
      kubeconfig: "/Users/arvin.he/.kube/config"
      master: https://172.16.98.150:6443
      qps: 1e+06
      burst: 1000000
    
    ldap:
      host: openldap.kubesphere-system.svc:389
      managerDN: cn=admin,dc=kubesphere,dc=io
      managerPassword: P@88w0rd
      userSearchBase: ou=Users,dc=kubesphere,dc=io
      groupSearchBase: ou=Groups,dc=kubesphere,dc=io
    
    redis:
      host: redis.kubesphere-system.svc
      port: 6379
      password: ""
      db: 0
    
    s3:
      endpoint: http://minio.kubesphere-system.svc:9000
      region: us-east-1
      disableSSL: true
      forcePathStyle: true
      accessKeyID: openpitrixminioaccesskey
      secretAccessKey: openpitrixminiosecretkey
      bucket: s2i-binaries
    
    mysql:
      host: mysql.kubesphere-system.svc:3306
      username: root
      password: password
      maxIdleConnections: 100
      maxOpenConnections: 100
      maxConnectionLifeTime: 10s
    
    devops:
      host: http://ks-jenkins.kubesphere-devops-system.svc/
      username: admin
      password: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFkbWluQGt1YmVzcGhlcmUuaW8iLCJleHAiOjE4MTYyMzkwMjIsInVzZXJuYW1lIjoiYWRtaW4ifQ.okmNepQvZkBRe1M8z2HAWRN0AVj9ooVu79IafHKCjZI
      maxConnections: 100
    
    sonarQube:
      host: http://172.16.98.150:32297
      token: 4e51de276f1fd0eb3a20b58e523d43ce76347302
    
    openpitrix:
      runtimeManagerEndpoint:    "openpitrix-runtime-manager.openpitrix-system.svc:9103"
      clusterManagerEndpoint:    "openpitrix-cluster-manager.openpitrix-system.svc:9104"
      repoManagerEndpoint:       "openpitrix-repo-manager.openpitrix-system.svc:9101"
      appManagerEndpoint:        "openpitrix-app-manager.openpitrix-system.svc:9102"
      categoryManagerEndpoint:   "openpitrix-category-manager.openpitrix-system.svc:9113"
      attachmentManagerEndpoint: "openpitrix-attachment-manager.openpitrix-system.svc:9122"
      repoIndexerEndpoint:       "openpitrix-repo-indexer.openpitrix-system.svc:9108"
    
    monitoring:
      endpoint: http://prometheus-k8s.kubesphere-monitoring-system.svc:9090
      secondaryEndpoint: http://prometheus-k8s-system.kubesphere-monitoring-system.svc:9090
    
    logging:
      host: http://elasticsearch-logging-data.kubesphere-logging-system.svc.cluster.local:9200
      indexPrefix: ks-logstash-log
    
    alerting:
      endpoint: http://alerting.kubesphere-alerting-system.svc
    
    notification:
      endpoint: http://notification.kubesphere-alerting-system.svc
    

    Test Telepresence with KubeSphere apigateway:

    arvin.he@vk ~/go/src/kubesphere.io/kubesphere (master●●)$ curl http://ks-apigateway.kubesphere-system
    401 Unauthorized
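The error indicates that the cluster-internal LDAP Service is not reachable through the Telepresence vpn-tcp proxy. A hedged troubleshooting sketch to confirm the Service itself is healthy from inside the cluster:

# Check the openldap Service and its endpoints
kubectl -n kubesphere-system get svc openldap
kubectl -n kubesphere-system get endpoints openldap
# Probe the LDAP port from a throwaway pod inside the cluster
kubectl run ldap-probe --rm -it --restart=Never --image=nicolaka/netshoot -- \
  nc -vz openldap.kubesphere-system.svc 389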
    

    Build controller-manager image error

    The environment

    centos 7.7
    kubesphere 2.1.1
    docker 19.03.8

    controller-manager Dockerfile

FROM golang:1.12 as controller-manager-builder

COPY / /go/src/kubesphere.io/kubesphere
WORKDIR /go/src/kubesphere.io/kubesphere

RUN CGO_ENABLED=0 GO111MODULE=on GOOS=linux GOARCH=amd64 GOFLAGS=-mod=vendor go build --ldflags "-extldflags -static" -o controller-manager ./cmd/controller-manager/

FROM alpine:3.7
RUN echo -e "https://mirrors.ustc.edu.cn/alpine/latest-stable/main\nhttps://mirrors.ustc.edu.cn/alpine/latest-stable/community" > /etc/apk/repositories && \
    apk add --update ca-certificates && update-ca-certificates
COPY --from=controller-manager-builder /go/src/kubesphere.io/kubesphere/controller-manager /usr/local/bin/
CMD controller-manager

    Shell

    docker build -f build/ks-controller-manager/Dockerfile -t harbor.xx.net.cn:8443/ks-controller-manager:2.1.1 .

    Output

    Sending build context to Docker daemon  141.6MB
    Step 1/8 : FROM golang:1.12 as controller-manager-builder
     ---> ffcaee6f7d8b
    Step 2/8 : COPY / /go/src/kubesphere.io/kubesphere
     ---> ce4ec5aec998
    Step 3/8 : WORKDIR /go/src/kubesphere.io/kubesphere
     ---> Running in 653b62e681e8
    Removing intermediate container 653b62e681e8
     ---> fd87a5dbab82
    Step 4/8 : RUN CGO_ENABLED=0 GO111MODULE=on GOOS=linux GOARCH=amd64 GOFLAGS=-mod=vendor go build --ldflags "-extldflags -static" -o controller-manager ./cmd/controller-manager/
     ---> Running in a9dda2856eed
    Removing intermediate container a9dda2856eed
     ---> 88b836d3e8e7
    Step 5/8 : FROM alpine:3.7
     ---> 6d1ef012b567
    Step 6/8 : RUN echo -e "https://mirrors.ustc.edu.cn/alpine/latest-stable/main\nhttps://mirrors.ustc.edu.cn/alpine/latest-stable/community" > /etc/apk/repositories && apk add --update ca-certificates && update-ca-certificates
     ---> Running in 4e038f46cf66
    fetch https://mirrors.ustc.edu.cn/alpine/latest-stable/main/x86_64/APKINDEX.tar.gz
    fetch https://mirrors.ustc.edu.cn/alpine/latest-stable/community/x86_64/APKINDEX.tar.gz
    (1/2) Installing libcrypto1.1 (1.1.1g-r0)
ERROR: libcrypto1.1-1.1.1g-r0: trying to overwrite etc/ssl/openssl.cnf owned by libressl2.6-libcrypto-2.6.5-r0.
    (2/2) Installing ca-certificates (20191127-r1)
    Executing busybox-1.27.2-r11.trigger
    Executing ca-certificates-20191127-r1.trigger
    1 error; 8 MiB in 15 packages
The command '/bin/sh -c echo -e "https://mirrors.ustc.edu.cn/alpine/latest-stable/main\nhttps://mirrors.ustc.edu.cn/alpine/latest-stable/community" > /etc/apk/repositories && apk add --update ca-certificates && update-ca-certificates' returned a non-zero code: 1
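The failure comes from pointing an alpine:3.7 base image at the latest-stable package branch, which pulls libcrypto packages built for a newer release and conflicts with the bundled libressl. A possible workaround (an assumption, not a verified fix) is to pin the mirror to the v3.7 branch in the RUN instruction, or to move to a newer Alpine base image:

# Pin the mirror to the Alpine 3.7 branch so apk resolves packages matching the base image
echo -e "https://mirrors.ustc.edu.cn/alpine/v3.7/main\nhttps://mirrors.ustc.edu.cn/alpine/v3.7/community" > /etc/apk/repositories
apk add --update ca-certificates && update-ca-certificates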
    

[Proposal] A Guide to Contributing to the Community

    Background

Someone who wants to contribute may be confused about the following:

    • How to participate in contributing?
    • How to submit Issues?
    • How to submit code?
    • How to submit the documentation?
    • How to participate in conferences?

    Benefit

Everyone who joins the community should know how to contribute.

    Workflows

• Create a list of ways to contribute, such as
      • translate documents
      • create an issue for the bug
      • fix an issue
      • add test unit
      • join a weekly meeting
      • ...
• Provide a workflow for each item in the list above. For example, fixing an issue:

    Member

    @FeynmanZhou
    @zryfish
    @LinuxSuRen
    @shaowenchen

    Reference

Does kubesphere-kubevirt support Windows VMs?

Thanks for the kubesphere-kubevirt sharing yesterday at noon; it was wonderful and helpful.

We would like to know whether kubesphere-kubevirt supports Windows VMs, or could you provide a sample of a KubeVirt Windows VM pod?

    Thanks a lot.

    Contribute to KubeSphere Blogs

As we mentioned last week, we expect everyone in the community to contribute an English blog to us. These blogs will be published on the official website and some other media channels. You can write them based on your area of expertise.

We suggest you start with a couple of pain points in the current technical landscape, then elaborate on the solutions or capabilities provided by KubeSphere. Both the why and the how should be covered in your blog.

    BTW, we are pretty flexible about the type of content, length and style. A contributed blog doesn’t have to be 5,000 words of heavy teaching. It can be around 1000 words outlining a KubeSphere-related technical story, an introduction or a tutorial. Code snippets are fine.

The content should be written in Markdown, and the images should be stored in QingStor object storage; just paste the image URL into the content. This is an example blog you can reference for the content framework.

You can first draft a topic or your thoughts in this issue, so we can avoid duplicated content. Others can also evaluate your topic or give you some inspiration. You can submit a PR to this repository when you finish the blog. We hope you can finish it before Apr 13.

    How to generate DevOps kubeconfig on AWS

    1. create ServiceAccount

devops-deploy.yaml, taking the namespace kubesphere-sample-dev as an example.

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: devops-deploy
      namespace: kubesphere-sample-dev
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: devops-deploy-role
      namespace: kubesphere-sample-dev
    rules:
    - apiGroups:
      - "*"
      resources:
      - "*"
      verbs:
      - "*"
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: devops-deploy-rolebinding
      namespace: kubesphere-sample-dev
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: devops-deploy-role
    subjects:
    - kind: ServiceAccount
      name: devops-deploy
      namespace: kubesphere-sample-dev
    kubectl apply -f devops-deploy.yaml
2. Get Service Account Token
    export TOKEN_NAME=$(kubectl -n kubesphere-sample-dev get sa devops-deploy -o jsonpath='{.secrets[0].name}')
    kubectl -n kubesphere-sample-dev get secret "${TOKEN_NAME}" -o jsonpath='{.data.token}' | base64 -d
    
    xxxxxxxx...
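Note: on Kubernetes 1.24 and later, a token Secret is no longer created automatically for a ServiceAccount, so the jsonpath lookup above may come back empty; a hedged alternative is to request a token directly:

# Issue a token for the ServiceAccount (available since Kubernetes 1.24)
kubectl -n kubesphere-sample-dev create token devops-deploy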
3. Replace kubeconfig on Web


This kubeconfig is incorrect. You should replace

      user:
        client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FUR...
        client-key-data: LS0tLS1CRUdJTiBQUk...
    

    by

      user:
        token: xxxxxxxx...
    
4. Enjoy DevOps Deploy


    [Proposal] Create a GitHub Team for Every SIG

    Background

    During the discussion phase, issues and pull requests are directly linked to individuals. This is unreasonable. All members of the SIG are responsible for the SIG.

    Benefit

    All members of SIG will be notified. This will reduce the burden on individuals and speed up the processing of work. It will also avoid the impact that occurs when someone leaves the community.

    Workflows

1. Create a team in https://github.com/orgs/kubesphere/teams, e.g. sig-devops
2. Add members to sig-devops
3. Use @kubesphere/sig-devops instead of @shaowenchen

    Member

    @rayzhou2017
    @zryfish
    @FeynmanZhou
    @LinuxSuRen

    KubeSphere DevOps Community Meeting Notes / Agenda

    Meeting Time

    Beijing Time: Thursday at 16:30-17:30 Nov. 5, 2020

    Links

    Organizers

    Timeline

    Nov. 5, 2020 (Beijing Time)

    Agenda:

    1. How to build a development environment: Dev and Ops
    2. Development List of DevOps 3.1.0

    [Proposal] Perform a Pre-release and Get a Release Guideline

    Background

• The release process relies on specific people
• The release process is not clear to the community
• The release time can't be estimated

    Why

    We need to do a pre-release, which is used to test the process. At the same time, a pre-release will provide a checklist and guideline for the next release version 3.1.0.

    Benefit

• Allow anyone to publish a release, without depending on specific people
• Make the whole process automated and more efficient
• Avoid manual omissions and errors

    Workflows

1. Deploy a Harbor server for the workflows, e.g. harbor.com.
2. Transfer the images from harbor.com/kubespheredev to harbor.com/kubesphere/ (see the sketch after this list).
3. Prepare the release notes for the version.
4. Update the readme, create a branch, and make a release version for the repositories: kubesphere, ks-installer, website.
5. Prepare the PR for the version.
6. Promote the release in the forums, WeChat, Slack, Twitter, Medium...
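As an illustration of step 2, here is a minimal sketch of copying a release image between registry projects; the registry, image name, and tag are placeholders taken from the workflow above:

# Copy one image from the dev project to the release project (placeholder names)
docker pull harbor.com/kubespheredev/ks-apiserver:v3.1.0
docker tag harbor.com/kubespheredev/ks-apiserver:v3.1.0 harbor.com/kubesphere/ks-apiserver:v3.1.0
docker push harbor.com/kubesphere/ks-apiserver:v3.1.0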

    Members

    @calvinyv
    @zryfish
    @pixiake
    @LinuxSuRen
    @shaowenchen

How to change the alerting module installation to chart mode

Welcome to the discussion. The current installation steps for the alerting module are listed below, followed by a sketch of what chart mode might look like:

1. Rely on Redis, MySQL, and Etcd.
2. Perform the RoleBinding of ks-alerting.
3. Apply the two job files, alerting-db-init-job.yaml and alerting-db-ctrl-job.yaml.
4. Apply the four deployment files, 1-executor.yaml, 2-watcher.yaml, 3-manager.yaml, and 4-client.yaml.
5. The address of the relevant document: ks-alerting
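For discussion, a rough sketch of chart mode: package the jobs and deployments above into a Helm chart and install it in one step. The chart layout and values below are hypothetical:

# Scaffold a chart, clear the generated example templates, and move the existing manifests in
helm create ks-alerting
rm -rf ks-alerting/templates/*
cp alerting-db-init-job.yaml alerting-db-ctrl-job.yaml 1-executor.yaml 2-watcher.yaml 3-manager.yaml 4-client.yaml ks-alerting/templates/
# Install the chart with environment-specific settings supplied as values (hypothetical value key)
helm install ks-alerting ./ks-alerting -n kubesphere-alerting-system --set mysql.host=mysql.kubesphere-system.svc:3306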

    REQUEST: New membership for lshmouse

    GitHub Username

    @lshmouse

    Requirements

    • Active contribution to the project. Contributed at least one notable PR to a specific SIG codebase within half a year
    • Finish one or more features
    • Sponsored by two members of the SIG and approved by the lead of the SIG
    • Help review PR from other contributors

    Sponsors

    @FeynmanZhou
    @pixiake

    List of contributions to the KubeSphere project

    kubesphere/kubekey#542
    kubesphere/kubekey#544
    kubesphere/kubekey#547
    kubesphere/image-sync-config#7

    REQUEST: New membership for Bojan

    GitHub Username

    @Bojan

    Requirements

    • Active contribution to the project. Contributed at least one notable PR to a specific SIG codebase within half a year
    • Finish one or more features
    • Sponsored by two members of the SIG and approved by the lead of the SIG
    • Help review PR from other contributors

    Sponsors

    @FeynmanZhou
    @benjaminhuo

    List of contributions to the KubeSphere project

    Add syslog output to Fluentbit-operator: fluent/fluent-operator#49

    Register a CNCF Webinar

@duanjiong @zheng1 Please refer to this format and provide the corresponding topic and outline for Porter, as well as its scenario with KubeSphere. See this example for details. Please also give an available registration date for the CNCF webinar; how about November 4th?

