nacos-group / nacos-k8s

nacos-k8s's Introduction

Kubernetes Nacos

This project contains a Nacos Docker image meant to facilitate the deployment of Nacos on Kubernetes via StatefulSets.

中文文档 (Chinese documentation)

Tips

If you are using Nacos version 1.1.4 or lower, please refer to this Tag

It is recommended to deploy Nacos in Kubernetes using Nacos Operator.

Quick Start

  • Clone Project
git clone https://github.com/nacos-group/nacos-k8s.git
  • Simple Start

If you want to start Nacos without NFS, proceed as follows; note that emptyDir volumes may result in data loss:

cd nacos-k8s
chmod +x quick-startup.sh
./quick-startup.sh
  • Testing

    • Service registration
    curl -X PUT 'http://cluster-ip:8848/nacos/v1/ns/instance?serviceName=nacos.naming.serviceName&ip=20.18.7.10&port=8080'
    • Service discovery
    curl -X GET 'http://cluster-ip:8848/nacos/v1/ns/instance/list?serviceName=nacos.naming.serviceName'
    • Publish config
    curl -X POST "http://cluster-ip:8848/nacos/v1/cs/configs?dataId=nacos.cfg.dataId&group=test&content=helloWorld"
    • Get config
    curl -X GET "http://cluster-ip:8848/nacos/v1/cs/configs?dataId=nacos.cfg.dataId&group=test"
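In the commands above, cluster-ip stands for the address of the Nacos service. A quick way to find it, or to sidestep it entirely, might look like this (a sketch; the service name nacos-headless comes from this repo's manifests, and a headless service reports None as its ClusterIP, in which case you can target a pod directly):

# Show the Nacos service and its cluster address
kubectl get svc nacos-headless -o wide

# Alternatively, forward a local port to one pod and use localhost:8848 in the tests
kubectl port-forward nacos-0 8848:8848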

Advanced

Tips

If you use a custom database, please run the database initialization script yourself first: https://github.com/alibaba/nacos/blob/develop/distribution/conf/mysql-schema.sql

In advanced use, the cluster scales automatically and data is persisted, but PersistentVolumeClaims must be provided. In this example, NFS is used.

Deploy NFS

  • Create Role
kubectl create -f deploy/nfs/rbac.yaml

If your K8S namespace is not default, execute the following script before creating the RBAC objects:

# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/nfs/rbac.yaml
  • Create ServiceAccount and deploy the NFS-Client Provisioner
kubectl create -f deploy/nfs/deployment.yaml
  • Create NFS StorageClass
kubectl create -f deploy/nfs/class.yaml
  • Verify that NFS is working
kubectl get pod -l app=nfs-client-provisioner
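Beyond checking that the provisioner pod is Running, an end-to-end check is to create a throwaway claim and watch it bind (a sketch; the claim name test-claim is hypothetical, and the storage class name managed-nfs-storage is the one used by this repo's manifests):

# Create a small test PVC against the NFS storage class
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 1Mi
EOF

# STATUS should reach Bound if the provisioner works; then clean up
kubectl get pvc test-claim
kubectl delete pvc test-claim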

Deploy database

  • Deploy mysql
cd nacos-k8s

kubectl create -f deploy/mysql/mysql-nfs.yaml
  • Verify that the database is working
kubectl get pod 
NAME                         READY   STATUS    RESTARTS   AGE
mysql-gf2vd                        1/1     Running   0          111m

Deploy Nacos

  • Modify deploy/nacos/nacos-pvc-nfs.yaml (a filled-in sketch follows the verification step below)
data:
  mysql.host: "db host"
  mysql.db.name: "db name"
  mysql.port: "db port"
  mysql.user: "db username"
  mysql.password: "db password"
  • Create Nacos
kubectl create -f nacos-k8s/deploy/nacos/nacos-pvc-nfs.yaml
  • Verify that Nacos is working
kubectl get pod -l app=nacos


NAME      READY   STATUS    RESTARTS   AGE
nacos-0   1/1     Running   0          19h
nacos-1   1/1     Running   0          19h
nacos-2   1/1     Running   0          19h
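For reference, a filled-in version of the ConfigMap data edited above might look like the following (all values are hypothetical; substitute your own database endpoint and credentials):

data:
  mysql.host: "192.168.1.100"
  mysql.db.name: "nacos_devtest"
  mysql.port: "3306"
  mysql.user: "nacos"
  mysql.password: "nacos"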

Scale Testing

  • Use kubectl exec to get the cluster config of the Pods in the nacos StatefulSet.
for i in 0 1; do echo nacos-$i; kubectl exec nacos-$i cat conf/cluster.conf; done

The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form <statefulset name>-<ordinal index>. Because the replicas field of the nacos StatefulSet is set to 2, the cluster file contains only two nacos addresses.
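For example, with two replicas the Pods are named nacos-0 and nacos-1, and each one is reachable through the headless service under a stable DNS name (a sketch; it assumes the default namespace and that the image provides nslookup):

# Resolve a peer's stable network identity from inside a pod
kubectl exec nacos-0 -- nslookup nacos-1.nacos-headless.default.svc.cluster.local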


  • Use kubectl to scale StatefulSets
kubectl scale sts nacos --replicas=3


  • Use kubectl exec to get the cluster config of the Pods in the nacos StatefulSet after scaling
for i in 0 1 2; do echo nacos-$i; kubectl exec nacos-$i cat conf/cluster.conf; done


  • Use kubectl exec to get the state of the Pods in the nacos StatefulSet after scaling
for i in 0 1 2; do echo nacos-$i; kubectl exec nacos-$i -- curl -X GET "http://localhost:8848/nacos/v1/ns/raft/state"; done

You can see that the new node has joined the cluster.

Prerequisites

  • Kubernetes Node configuration (for reference only)
Hostname Configuration
k8s-master CentOS Linux release 7.4.1708 (Core), single-core processor, Mem 4G, Cloud disk 40G
node01 CentOS Linux release 7.4.1708 (Core), single-core processor, Mem 4G, Cloud disk 40G
node02 CentOS Linux release 7.4.1708 (Core), single-core processor, Mem 4G, Cloud disk 40G
  • Kubernetes version: 1.12.2+
  • NFS version: 4.1+

Limitations

  • Persistent Volumes must be used. emptyDirs will possibly result in a loss of data

Project directory

Directory Name Description
plugin Helps the Nacos cluster achieve automatic scaling in K8s
deploy The files required for deployment

Configuration properties

  • nacos-pvc-nfs.yaml or nacos-quick-start.yaml (see the override sketch after these tables)
Name Required Description
mysql.db.name Y database name
mysql.port N database port
mysql.user Y database username
mysql.password Y database password
SPRING_DATASOURCE_PLATFORM Y Database type; the default is the embedded database, and the parameter only supports mysql or embedded
NACOS_REPLICAS Y The number of cluster nodes; must be consistent with the value of the replicas attribute
NACOS_SERVER_PORT N Nacos port, default: 8848, used by the Peer-finder plugin
NACOS_APPLICATION_PORT N Nacos port, default: 8848
PREFER_HOST_MODE Y Enable Nacos cluster node domain name support
  • nfs deployment.yaml
Name Required Description
NFS_SERVER Y NFS server address
NFS_PATH Y NFS server shared directory
server Y NFS server address
path Y NFS server shared directory
  • mysql yaml
Name Required Description
MYSQL_ROOT_PASSWORD N Root password
MYSQL_DATABASE Y Database Name
MYSQL_USER Y Database Username
MYSQL_PASSWORD Y Database Password
Nfs:server Y NFS server address
Nfs:path Y NFS server shared path
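Most of these properties reach the containers as environment variables, so a one-off override on a running cluster can be sketched with kubectl set env (normally you would edit the yaml and re-apply; the value below is only an example):

# Switch the Nacos StatefulSet from the embedded database to mysql
kubectl set env statefulset/nacos SPRING_DATASOURCE_PLATFORM=mysql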


nacos-k8s's Issues

Failed to pull image "nacos/nacos-peer-finder-plugin:latest"

Deploying nacos step by step following the Advanced section

kubectl create -f nacos-k8s/deploy/nacos/nacos-pvc-nfs.yaml

Creating nacos failed:

pod has unbound immediate PersistentVolumeClaims (repeated 4 times) Failed to pull image "nacos/nacos-peer-finder-plugin:latest": rpc error: code = Unknown desc = context canceled Back-off restarting failed container


Nacos version: nacos/nacos-server:1.0.0
OS: Centos 7.x

Could you provide the code for Kubernetes installation and configuration?

Centos 7.4
yum install -y kubernetes docker flannel docker-compse
My kubernetes is probably misconfigured.
cd nacos-k8s
[root@VM_0_9_centos nacos-k8s]# kubectl create -f deploy/nfs/rbac.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Error reported after k8s installation

java.lang.IllegalStateException: unable to find local peer: nacos-0.nacos-headless.default.svc.cluster.local.:8848, all peers: [nacos-0.nacos-headless.default.svc.cluster.local:8848, nacos-1.nacos-headless.default.svc.cluster.local:8848, nacos-2.nacos-headless.default.svc.cluster.local:8848]
at com.alibaba.nacos.naming.consistency.persistent.raft.RaftPeerSet.local(RaftPeerSet.java:211)
at com.alibaba.nacos.naming.monitor.PerformanceLoggerThread.collectmetrics(PerformanceLoggerThread.java:123)
at sun.reflect.GeneratedMethodAccessor76.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

Cluster leader cannot be elected: WARN [IS LEADER] no leader is available now!

I started the cluster with the provided quick-start setup, then deployed a Spring Cloud application into k8s, which threw an exception:
java.lang.IllegalStateException: failed to req API:/nacos/v1/ns/service/list after all servers([nacos-headless:8848]) tried: failed to req API:http://nacos-headless:8848/nacos/v1/ns/service/list. code:503 msg: server is STARTING now, please try again later!
at com.alibaba.nacos.client.naming.net.NamingProxy.reqAPI(NamingProxy.java:380) ~[nacos-client-1.0.0.jar!/:na]
at com.alibaba.nacos.client.naming.net.NamingProxy.reqAPI(NamingProxy.java:346) ~[nacos-client-1.0.0.jar!/:na]
at com.alibaba.nacos.client.naming.net.NamingProxy.reqAPI(NamingProxy.java:294) ~[nacos-client-1.0.0.jar!/:na]
at com.alibaba.nacos.client.naming.net.NamingProxy.getServiceList(NamingProxy.java:276) ~[nacos-client-1.0.0.jar!/:na]
at com.alibaba.nacos.client.naming.net.NamingProxy.getServiceList(NamingProxy.java:252) ~[nacos-client-1.0.0.jar!/:na]
at com.alibaba.nacos.client.naming.NacosNamingService.getServicesOfServer(NacosNamingService.java:525) ~[nacos-client-1.0.0.jar!/:na]
at org.springframework.cloud.alibaba.nacos.discovery.NacosWatch.nacosServicesWatch(NacosWatch.java:127) ~[spring-cloud-alibaba-nacos-discovery-0.9.0.RELEASE.jar!/:0.9.0.RELEASE]
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) ~[spring-context-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_111]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[na:1.8.0_111]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[na:1.8.0_111]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]

2019-05-10 06:34:03.492 ERROR 1 --- [ main] o.s.c.a.n.registry.NacosServiceRegistry : nacos registry, data-handle register failed...NacosRegistration{nacosDiscoveryProperties=NacosDiscoveryProperties{serverAddr='nacos-headless:8848', endpoint='', namespace='', watchDelay=30000, logName='', service='data-handle', weight=1.0, clusterName='DEFAULT', namingLoadCacheAtStart='false', metadata={preserved.register.source=SPRING_CLOUD}, registerEnabled=true, ip='10.244.2.21', networkInterface='', port=8091, secure=false, accessKey='', secretKey=''}},

java.lang.IllegalStateException: failed to req API:/nacos/v1/ns/instance after all servers([nacos-headless:8848]) tried: failed to req API:http://nacos-headless:8848/nacos/v1/ns/instance. code:503 msg: server is STARTING now, please try again later!
at com.alibaba.nacos.client.naming.net.NamingProxy.reqAPI(NamingProxy.java:380) ~[nacos-client-1.0.0.jar!/:na]
at com.alibaba.nacos.client.naming.net.NamingProxy.reqAPI(NamingProxy.java:304) ~[nacos-client-1.0.0.jar!/:na]
at com.alibaba.nacos.client.naming.net.NamingProxy.registerService(NamingProxy.java:186) ~[nacos-client-1.0.0.jar!/:na]
at com.alibaba.nacos.client.naming.NacosNamingService.registerInstance(NacosNamingService.java:298) ~[nacos-client-1.0.0.jar!/:na]
at com.alibaba.nacos.client.naming.NacosNamingService.registerInstance(NacosNamingService.java:279) ~[nacos-client-1.0.0.jar!/:na]
at org.springframework.cloud.alibaba.nacos.registry.NacosServiceRegistry.register(NacosServiceRegistry.java:63) ~[spring-cloud-alibaba-nacos-discovery-0.9.0.RELEASE.jar!/:0.9.0.RELEASE]
at org.springframework.cloud.client.serviceregistry.AbstractAutoServiceRegistration.register(AbstractAutoServiceRegistration.java:239) [spring-cloud-commons-2.1.1.RELEASE.jar!/:2.1.1.RELEASE]
at org.springframework.cloud.alibaba.nacos.registry.NacosAutoServiceRegistration.register(NacosAutoServiceRegistration.java:74) [spring-cloud-alibaba-nacos-discovery-0.9.0.RELEASE.jar!/:0.9.0.RELEASE]
at org.springframework.cloud.client.serviceregistry.AbstractAutoServiceRegistration.start(AbstractAutoServiceRegistration.java:138) [spring-cloud-commons-2.1.1.RELEASE.jar!/:2.1.1.RELEASE]
at org.springframework.cloud.client.serviceregistry.AbstractAutoServiceRegistration.bind(AbstractAutoServiceRegistration.java:101) [spring-cloud-commons-2.1.1.RELEASE.jar!/:2.1.1.RELEASE]
at org.springframework.cloud.client.serviceregistry.AbstractAutoServiceRegistration.onApplicationEvent(AbstractAutoServiceRegistration.java:88) [spring-cloud-commons-2.1.1.RELEASE.jar!/:2.1.1.RELEASE]
at org.springframework.cloud.client.serviceregistry.AbstractAutoServiceRegistration.onApplicationEvent(AbstractAutoServiceRegistration.java:47) [spring-cloud-commons-2.1.1.RELEASE.jar!/:2.1.1.RELEASE]
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172) [spring-context-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165) [spring-context-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139) [spring-context-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:402) [spring-context-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:359) [spring-context-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:166) [spring-boot-2.1.3.RELEASE.jar!/:2.1.3.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:552) [spring-context-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142) [spring-boot-2.1.3.RELEASE.jar!/:2.1.3.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775) [spring-boot-2.1.3.RELEASE.jar!/:2.1.3.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397) [spring-boot-2.1.3.RELEASE.jar!/:2.1.3.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:316) [spring-boot-2.1.3.RELEASE.jar!/:2.1.3.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260) [spring-boot-2.1.3.RELEASE.jar!/:2.1.3.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248) [spring-boot-2.1.3.RELEASE.jar!/:2.1.3.RELEASE]
at com.micro.data.handle.DataHandleApplication.main(DataHandleApplication.java:41) [classes!/:1.0-SNAPSHOT]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_111]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_111]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_111]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_111]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [micro-service-data-handle-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [micro-service-data-handle-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [micro-service-data-handle-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) [micro-service-data-handle-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]

Further inspection of naming-raft.log on the nacos nodes shows that no cluster leader has been elected:
WARN [IS LEADER] no leader is available now!

My environment is k8s version 1.8; I'd like to know whether this is a configuration problem or whether the environment version is too old.

Error occurs when using nginx-ingress

With nginx-ingress configured, the console can be accessed and logged into normally, but service registration fails:
curl -X POST 'http://host-name/nacos/v1/ns/instance?serviceName=nacos.naming.serviceName&ip=20.18.7.10&port=8080'
dom not found: nacos.naming.serviceName

Error when accessing the Nacos service list

server is STARTING now, please try again later!
After checking the logs:
[root@nacos-0 logs]# tail naming-raft.log
2019-04-30 09:15:45,000 WARN [IS LEADER] no leader is available now!

2019-04-30 09:16:00,001 WARN [IS LEADER] no leader is available now!

2019-04-30 09:16:15,000 WARN [IS LEADER] no leader is available now!

2019-04-30 09:16:30,000 WARN [IS LEADER] no leader is available now!

2019-04-30 09:16:45,000 WARN [IS LEADER] no leader is available now!

[root@nacos-0 logs]# tail naming-ephemeral.log
2019-04-30 09:17:28,671 INFO waiting server list init...

2019-04-30 09:17:29,671 INFO waiting server list init...

2019-04-30 09:17:30,672 INFO waiting server list init...

2019-04-30 09:17:31,672 INFO waiting server list init...

2019-04-30 09:17:32,672 INFO waiting server list init...

How can this be resolved?

About the NACOS_SERVERS environment variable in all the yaml config files

This may just be me being careless, but I'd still like the readme to briefly explain the NACOS_SERVERS environment variable:
nacos-0.nacos-headless.default.svc.cluster.local:8848 only works in the default namespace; if you deploy to a different namespace, you should change default to the namespace actually in use.
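To illustrate the fix described above, the hard-coded default namespace can be rewritten before deploying (a sketch; the namespace ns-demo and the manifest path are stand-ins for your own):

# Point the NACOS_SERVERS peer list at your namespace instead of default
NAMESPACE=ns-demo
sed -i "s/default.svc.cluster.local/${NAMESPACE}.svc.cluster.local/g" deploy/nacos/nacos-pvc-nfs.yaml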

bin/docker-startup.sh: no such file or directory

Error: failed to start container "nacos": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"bin/docker-startup.sh\": stat bin/docker-startup.sh: no such file or directory": unknown

Why does the nacos/nacos-server:latest container report this error at startup?

After k8s startup succeeds, the config console is usable but Java cannot connect

k8s starts a 3-replica StatefulSet successfully and the config console is usable,
but Java connections fail with server is STARTING now, please try again later!. The naming-raft.log shows WARN [IS LEADER] no leader is available now!
Inside the pods, cluster.conf has three entries as expected, and the nodes can ping each other. I can't find the cause; has anyone run into this? The image was built from the nacos 1.0.0 tag.

Nacos k8s PVC: after creation, the plugin doesn't work until I exec into the container and run sh plugin.sh

the yaml:
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  labels:
    app: nacos
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
    - port: 8848
      name: server
      targetPort: 8848
  clusterIP: None
  selector:
    app: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
data:
  mysql.master.db.name: "nacos_devtest"
  mysql.master.port: "3306"
  mysql.slave.port: "3306"
  mysql.master.user: "nacos"
  mysql.master.password: "nacos"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
spec:
  serviceName: nacos-headless
  replicas: 2
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: nfs-client-provisioner
      initContainers:
        - name: peer-finder-plugin-install
          image: hub.dtwarebase.tech/dop/nacos-peer-finder-plugin:latest
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: "/home/nacos/plugins/peer-finder"
              name: plguindir
      containers:
        - name: nacos
          imagePullPolicy: Always
          image: hub.dtwarebase.tech/dop/nacos-server:latest
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client-port
          env:
            - name: NACOS_REPLICAS
              value: "3"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MYSQL_MASTER_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.db.name
            - name: MYSQL_MASTER_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.port
            - name: MYSQL_SLAVE_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.slave.port
            - name: MYSQL_MASTER_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.user
            - name: MYSQL_MASTER_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.password
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
            - name: MYSQL_SLAVE_SERVICE_HOST
              value: mysql-master
            - name: MYSQL_MASTER_SERVICE_HOST
              value: mysql-slave
          readinessProbe:
            httpGet:
              port: client-port
              path: /nacos/v1/console/health/readiness
            initialDelaySeconds: 60
            timeoutSeconds: 3
          #livenessProbe:
          #  httpGet:
          #    port: client-port
          #    path: /nacos/v1/console/health/liveness
          #  initialDelaySeconds: 60
          #  timeoutSeconds: 3
          volumeMounts:
            - name: plguindir
              mountPath: /home/nacos/plugins/peer-finder
            - name: datadir
              mountPath: /home/nacos/data
            - name: logdir
              mountPath: /home/nacos/logs
  volumeClaimTemplates:
    - metadata:
        name: plguindir
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: datadir
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: logdir
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 5Gi
  selector:
    matchLabels:
      app: nacos

after kubectl create -f nacos-pvs-nfs.yaml

The first pod, nacos-0, keeps starting but never becomes ready (I disabled the livenessProbe).
I then exec into the container and find that conf/cluster.conf is empty.
After I rerun bash plugins/peer-finder/plugin.sh, the container becomes OK and conf/cluster.conf contains:

nacos-1.nacos-headless.default.svc.cluster.local
nacos-2.nacos-headless.default.svc.cluster.local

But the next pod, nacos-1, hits the same situation; I still need to rerun bash plugins/peer-finder/plugin.sh manually.

Nacos cluster: curl: (6) Could not resolve host: GET; Unknown error

Using the Nacos cluster's service registration feature on k8s:

The Spring Boot service is configured as follows:

spring:
  application:
    name: nacos-client
  cloud:
    nacos:
      discovery:
        server-addr: nacos-0.nacos-headless.default.svc.cluster.local:8848

After the service starts, it does not register successfully; the exception is:

2019-07-18 15:27:21.562 ERROR 1 --- [TaskScheduler-1] o.s.c.a.nacos.discovery.NacosWatch       : Error watching Nacos Service change
 java.lang.IllegalStateException: failed to req API:/nacos/v1/ns/service/list after all servers([nacos-0.nacos-headless.default.svc.cluster.local:8848]) tried: failed to req API:http://nacos-0.nacos-headless.default.svc.cluster.local:8848/nacos/v1/ns/service/list. code:503 msg: server is STARTING now, please try again later!

I suspect the Nacos cluster is not configured correctly,
so I ran curl in one of the Nacos cluster node containers, with the following result:

sh-4.2# curl GET "http://localhost:8848/nacos/v1/ns/raft/state"
curl: (6) Could not resolve host: GET; Unknown error

Checking /etc/hosts:

# Kubernetes-managed hosts file.
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
fe00::0	ip6-mcastprefix
fe00::1	ip6-allnodes
fe00::2	ip6-allrouters
172.20.2.72	nacos-1.nacos-headless.default.svc.cluster.local.	nacos-1

Nothing abnormal found, and the other pods look the same.
The image is the current latest version, deployed using the scalable NFS setup from the tutorial.

I can't figure out where the problem is; any advice would be appreciated.
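For what it's worth, the curl: (6) error quoted above comes from the command itself rather than from DNS: in curl GET "http://...", curl treats the bare word GET as a second URL to fetch. Using the method flag avoids this:

# -X sets the HTTP method; a bare GET argument is parsed as another URL/host
curl -X GET "http://localhost:8848/nacos/v1/ns/raft/state"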

kubectl create -f deploy/nfs/rbac.yaml

MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/495f9f4f-695a-11e9-a037-52540036d517/volumes/kubernetes.io~nfs/nfs-client-root --scope -- mount -t nfs 172.17.79.3:/data/nfs-share /var/lib/kubelet/pods/495f9f4f-695a-11e9-a037-52540036d517/volumes/kubernetes.io~nfs/nfs-client-root Output: Running scope as unit run-10287.scope. mount: wrong fs type, bad option, bad superblock on 172.17.79.3:/data/nfs-share, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program) In some cases useful info is found in syslog - try dmesg | tail or so.
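The wrong fs type / missing codepage or helper program output above usually means the node has no NFS mount helper; a plausible fix on CentOS worker nodes (an assumption, not confirmed in this issue) is:

# Install the NFS client tools so /sbin/mount.nfs exists on every node
yum install -y nfs-utils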

Why the same profile will be different on different nodes?

Scene:

  1. I have a nacos-k8s cluster with 3 nodes
  2. nacos can access correctly
  3. I modified the config file in nacos configuration center

Question:
(screenshots of nacos-0, nacos-1 and nacos-2)

  1. As shown in the screenshots above, there are 3 differences between the nodes:
    (1) The same namespace has different groups.
    (2) The same namespace and group have different config files.
    (3) The same namespace, group and config file have different content.
  2. I modified the config file, but the change only took effect on nacos-2.

Why would the same profile differ across nodes? Is this a bug in nacos-k8s?
Thanks!

server is STARTING now, please try again later!

I used k8s to install Nacos, but services couldn't register to it.
It always shows server is STARTING now, please try again later!

I didn't use NFS, just nacos-quick-start.yaml.

And I have checked the logs:

naming-raft.log

2019-04-18 16:42:27,816 INFO initializing Raft sub-system

2019-04-18 16:42:27,816 INFO finish loading all datums, size: 0 cost 0 ms.

2019-04-18 16:42:27,818 INFO cache loaded, datum count: 0, current term: 0

2019-04-18 16:42:27,818 INFO finish to load data from disk, cost: 2 ms.

2019-04-18 16:42:27,820 INFO raft notifier started

2019-04-18 16:42:27,827 INFO timer started: leader timeout ms: 15000, heart-beat timeout ms: 5000

2019-04-18 16:42:27,702 INFO server list is updated, new: 3 servers: [{"adWeight":0,"alive":false,"ip":"nacos-0.nacos-headless.default.svc.cluster.local","key":"nacos-0.nacos-headless.default.svc.cluster.local:8848","lastRefTime":0,"servePort":8848,"site":"unknown","weight":1}, {"adWeight":0,"alive":false,"ip":"nacos-1.nacos-headless.default.svc.cluster.local","key":"nacos-1.nacos-headless.default.svc.cluster.local:8848","lastRefTime":0,"lastRefTimeStr":"","servePort":8848,"site":"unknown","weight":1}, {"adWeight":0,"alive":false,"ip":"nacos-2.nacos-headless.default.svc.cluster.local","key":"nacos-2.nacos-headless.default.svc.cluster.local:8848","lastRefTime":0,"lastRefTimeStr":"","servePort":8848,"site":"unknown","weight":1}]

2019-04-18 16:42:27,838 INFO raft peers changed: [{"adWeight":0,"alive":false,"ip":"nacos-0.nacos-headless.default.svc.cluster.local","key":"nacos-0.nacos-headless.default.svc.cluster.local:8848","lastRefTime":0,"lastRefTimeStr":"","servePort":8848,"site":"unknown","weight":1}, {"adWeight":0,"alive":false,"ip":"nacos-1.nacos-headless.default.svc.cluster.local","key":"nacos-1.nacos-headless.default.svc.cluster.local:8848","lastRefTime":0,"lastRefTimeStr":"","servePort":8848,"site":"unknown","weight":1}, {"adWeight":0,"alive":false,"ip":"nacos-2.nacos-headless.default.svc.cluster.local","key":"nacos-2.nacos-headless.default.svc.cluster.local:8848","lastRefTime":0,"lastRefTimeStr":"","servePort":8848,"site":"unknown","weight":1}]

2019-04-18 16:42:27,907 INFO add listener: com.alibaba.nacos.naming.domains.meta.

2019-04-18 16:42:33,379 INFO add listener: com.alibaba.nacos.naming.domains.meta.00-00---000-NACOS_SWITCH_DOMAIN-000---00-00

2019-04-18 16:42:45,003 WARN [IS LEADER] no leader is available now!

2019-04-18 16:42:47,694 INFO raft peers changed: [{"adWeight":0,"alive":false,"ip":"nacos-0.nacos-headless.default.svc.cluster.local","key":"nacos-0.nacos-headless.default.svc.cluster.local:8848","lastRefTime":0,"lastRefTimeStr":"","servePort":8848,"site":"unknown","weight":1}, {"adWeight":0,"alive":false,"ip":"nacos-1.nacos-headless.default.svc.cluster.local","key":"nacos-1.nacos-headless.default.svc.cluster.local:8848","lastRefTime":0,"lastRefTimeStr":"","servePort":8848,"site":"unknown","weight":1}, {"adWeight":0,"alive":false,"ip":"nacos-2.nacos-headless.default.svc.cluster.local","key":"nacos-2.nacos-headless.default.svc.cluster.local:8848","lastRefTime":0,"lastRefTimeStr":"","servePort":8848,"site":"unknown","weight":1}]

2019-04-18 16:43:00,000 WARN [IS LEADER] no leader is available now!

2019-04-18 16:43:15,000 WARN [IS LEADER] no leader is available now!

2019-04-18 16:43:30,000 WARN [IS LEADER] no leader is available now!

2019-04-18 16:43:45,001 WARN [IS LEADER] no leader is available now!

2019-04-18 16:44:00,000 WARN [IS LEADER] no leader is available now!

2019-04-18 16:44:15,000 WARN [IS LEADER] no leader is available now!

2019-04-18 16:44:30,001 WARN [IS LEADER] no leader is available now!

2019-04-18 16:44:45,000 WARN [IS LEADER] no leader is available now!

2019-04-18 16:45:00,000 WARN [IS LEADER] no leader is available now!

2019-04-18 16:45:15,000 WARN [IS LEADER] no leader is available now!

2019-04-18 16:45:30,001 WARN [IS LEADER] no leader is available now!

2019-04-18 16:45:45,000 WARN [IS LEADER] no leader is available now!

naming-ephemeral.log

2019-04-18 17:44:32,511 INFO waiting server list init...

2019-04-18 17:44:33,511 INFO waiting server list init...

2019-04-18 17:44:34,512 INFO waiting server list init...

2019-04-18 17:44:35,512 INFO waiting server list init...

2019-04-18 17:44:36,512 INFO waiting server list init...

2019-04-18 17:44:37,512 INFO waiting server list init...

2019-04-18 17:44:38,512 INFO waiting server list init...

2019-04-18 17:44:39,512 INFO waiting server list init...

2019-04-18 17:44:40,512 INFO waiting server list init...

2019-04-18 17:44:41,512 INFO waiting server list init...

2019-04-18 17:44:42,513 INFO waiting server list init...

2019-04-18 17:44:43,513 INFO waiting server list init...

2019-04-18 17:44:44,513 INFO waiting server list init...

2019-04-18 17:44:45,513 INFO waiting server list init...

2019-04-18 17:44:46,513 INFO waiting server list init...

2019-04-18 17:44:47,513 INFO waiting server list init...

Config center is accessible, but service registration fails

Hey, it's me again. On Saturday, after removing the plugin, the cluster started successfully and leader election also succeeded:
15:05:44,861 INFO received approve from peer: {"heartbeatDueMs":2000,"ip":"nacos-1.nacos-headless.zhike-cloud.svc.cluster.local:8848","leaderDueMs":17230,"state":"FOLLOWER","term":10,"voteFor":"nacos-0.nacos-headless.zhike-cloud.svc.cluster.local:8848"}

2019-07-01 15:05:44,863 INFO nacos-0.nacos-headless.zhike-cloud.svc.cluster.local:8848 has become the LEADER

But accessing the service list returns an error:
curl http://nacos-headless:8848/nacos/v1/ns/service/list
server is DOWN now, please try again later!

curl against the other two followers works fine, so service registration flaps between healthy and unhealthy. Has anyone run into this?
The leader's nacos.log ends with an exception:
ERROR Responding with unauthorized error. Message - Full authentication is required to access this resource

Restarting a pod requires deleting the old logs directory

(screenshot of the error)
When I delete a pod and it is recreated, the nacos pod logs report the error above. After clearing the files under the logs directory, startup succeeds.
Nacos version: 1.0.1
k8s version: 1.13.5
Cluster deployed with NFS mounts

Several issues running on EKS with RDS (Missing docs, configs)

I'm trying to get Nacos Server to run inside Kubernetes on the EKS platform in AWS, in a custom Kubernetes namespace. The DB is an RDS instance (initially tried with no replication, then set up replication as the docs indicate a slave is needed), so I am not using the nacos-mysql-server and nacos-mysql-slave images.

I used nacos/nacos-server:latest (1.0.0 jar) and the latest peer-finder (matching what is defined in the k8s manifests).

Starting with the manifests in this project, I ran into several issues:

  1. On startup, I saw an error from peer-finder about a missing $POD_NAMESPACE argument. Since that was not defined in the manifest, I defined an env var to hold this value.

  2. The peer-finder's plugin.sh does not add the -ns argument passing the $POD_NAMESPACE env var. It is also missing the argument to define the 'nacos-headless' service name.

Since the docker-startup.sh script does not allow manually setting the $CLUSTER_CONF file from the $NACOS_SERVER env vars if the peer-finder is mounted, I removed the 'plugin' volume mount to unblock that behaviour.

  3. The configmap for Nacos does not have a value for the master host name; this causes nacos-server to crash on startup. I defined one, and then added it to the StatefulSet manifest with the required MYSQL_MASTER_SERVICE_HOST env var after finding the error in the log.

  4. MYSQL_SLAVE_SERVICE_HOST is also expected, and not defined. I created a read-replica from the master in RDS, and gave this to nacos.

  5. Nacos-server appears to attempt writing to the slave, according to this error after I defined the slave host:


org.springframework.jdbc.UncategorizedSQLException: StatementCallback; uncategorized SQLException for SQL [DELETE FROM config_info WHERE data_id='com.alibaba.nacos.testMasterDB']; SQL state [HY000]; error code [1290]; The MySQL server is running with the --read-only option so it cannot execute this statement; nested exception is java.sql.SQLException: The MySQL server is running with the --read-only option so it cannot execute this statement
        at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:89)
        at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)

Best Practices Request: Debugging this was painful. You may consider sending start.out to STDOUT/STDERR so it's easy to see the error when using 'docker logs' or 'kubectl logs'.

quick-startup.sh has a bug

quick-startup.sh can't be used to run the nacos cluster, because the script contains the line
echo "nacos quick startup" kubectl create -f ./deploy/nacos/nacos-quick-start.yamlemptyDirs will possibly result in a loss of data

which is supposed to be
echo "nacos quick startup" kubectl create -f ./deploy/nacos/nacos-quick-start.yaml

After deploying on k8s and exposing the service via ingress, clients cannot connect normally

The ingress is configured with a domain such as nacos.examples.com pointing to port 8848 of the nacos service.
Opening nacos.examples.com/nacos in a browser reaches the console normally.
How should the Spring Cloud client yml be configured so it can connect to the nacos server? Is it:
spring:
  cloud:
    nacos:
      discovery:
        server-addr: ${NACOS-HOST:nacos.examples.com}:${NACOS-PORT:8848}
Or:
spring:
  cloud:
    nacos:
      discovery:
        server-addr: nacos.examples.com

In practice, neither of the above establishes a connection.

unable to find local peer

java.lang.IllegalStateException: unable to find local peer: nacos-0.nacos-headless.default.svc.cluster.local.:8848, all peers: [nacos-0.nacos-headless.default.svc.cluster.local:8848, nacos-1.nacos-headless.default.svc.cluster.local:8848, nacos-2.nacos-headless.default.svc.cluster.local:8848]

The 1.1.3 image deployed on k8s 1.14 reports this error; how can it be resolved?

How do I fill in my own database address?

Why is there no field for the MySQL database address in the deploy config file's ConfigMap? How should I fill in my own database address? Without a database address, how do you connect to the database at all?

nacos keeps starting after deploy on k8s

nacos deployment

---
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  labels:
    app: nacos
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
    - port: 8848
      name: server
      targetPort: 8848
  clusterIP: None
  selector:
    app: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
data:
  mysql.master.db.name: "nacos_devtest"
  mysql.master.port: "3306"
  mysql.slave.port: "3306"
  mysql.master.user: "nacos"
  mysql.master.password: "nacos"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
spec:
  serviceName: nacos-headless
  replicas: 2
  selector:
    matchLabels:
      app: nacos
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: nfs-client-provisioner
      initContainers:
        - name: peer-finder-plugin-install
          image: nacos/nacos-peer-finder-plugin:latest
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: "/home/nacos/plugins/peer-finder"
              name: plguindir
      containers:
        - name: nacos
          imagePullPolicy: Always
          image: nacos/nacos-server:latest
          ports:
            - containerPort: 8848
              name: client-port
          env:
            - name: NACOS_REPLICAS
              value: "3"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MYSQL_MASTER_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.db.name
            - name: MYSQL_MASTER_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.port
            - name: MYSQL_SLAVE_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.slave.port
            - name: MYSQL_MASTER_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.user
            - name: MYSQL_MASTER_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.master.password
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
          readinessProbe:
            httpGet:
              port: client-port
              path: /nacos/v1/console/health/readiness
            initialDelaySeconds: 600
            timeoutSeconds: 3
          livenessProbe:
            httpGet:
              port: client-port
              path: /nacos/v1/console/health/liveness
            initialDelaySeconds: 600
            timeoutSeconds: 3
          volumeMounts:
            - name: plguindir
              mountPath: /home/nacos/plugins/peer-finder
            - name: datadir
              mountPath: /home/nacos/data
            - name: logdir
              mountPath: /home/nacos/logs
  volumeClaimTemplates:
    - metadata:
        name: plguindir
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: datadir
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: logdir
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 5Gi

Checking the nacos pod:

[root@t-inf-dev02 nacos]# kubectl get po -l app=nacos -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
nacos-0   0/1       Running   4          53m       10.233.84.174   t-inf-dev01

nacos keeps starting, but never finishes starting.

nacos's logs

[root@nacos-0 logs]# tail -f nacos.log
.....
2019-02-14 11:07:56,720 INFO Bean 'org.springframework.security.config.annotation.configuration.ObjectPostProcessorConfiguration' of type [org.springframework.security.config.annotation.configuration.ObjectPostProcessorConfiguration$$EnhancerBySpringCGLIB$$23ff4ee7] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

2019-02-14 11:07:57,054 INFO Bean 'objectPostProcessor' of type [org.springframework.security.config.annotation.configuration.AutowireBeanFactoryObjectPostProcessor] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

2019-02-14 11:07:57,062 INFO Bean 'org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler@38f57b3d' of type [org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

2019-02-14 11:07:57,074 INFO Bean 'org.springframework.security.config.annotation.method.configuration.GlobalMethodSecurityConfiguration' of type [org.springframework.security.config.annotation.method.configuration.GlobalMethodSecurityConfiguration$$EnhancerBySpringCGLIB$$48d3f199] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

2019-02-14 11:07:57,088 INFO Bean 'methodSecurityMetadataSource' of type [org.springframework.security.access.method.DelegatingMethodSecurityMetadataSource] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

2019-02-14 11:07:57,682 INFO Nacos is starting...

2019-02-14 11:07:58,260 INFO Tomcat initialized with port(s): 8848 (http)

2019-02-14 11:07:58,330 INFO Initializing ProtocolHandler ["http-nio-8848"]

2019-02-14 11:07:58,370 INFO Starting service [Tomcat]

2019-02-14 11:07:58,370 INFO Starting Servlet Engine: Apache Tomcat/9.0.13

2019-02-14 11:07:58,406 INFO The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib]

2019-02-14 11:07:58,627 INFO Initializing Spring embedded WebApplicationContext

2019-02-14 11:07:58,628 INFO Root WebApplicationContext: initialization completed in 7816 ms

2019-02-14 11:07:58,682 INFO Nacos is starting...

2019-02-14 11:07:59,684 INFO Nacos is starting...

2019-02-14 11:08:00,691 INFO Nacos is starting...

any idea? @paderlol

unable to find local peer

@paderlol Nacos reports an error after deployment and startup; k8s has already been switched to 1.12.2.
Logs:

java.lang.IllegalStateException: unable to find local peer: nacos-0.nacos-headless.default.svc.cluster.local.:8848, all peers: [nacos-0.nacos-headless.default.svc.cluster.local:8848, nacos-1.nacos-headless.default.svc.cluster.local:8848]
        at com.alibaba.nacos.naming.raft.PeerSet.local(PeerSet.java:191)
        at com.alibaba.nacos.naming.monitor.PerformanceLoggerThread.collectmetrics(PerformanceLoggerThread.java:114)
        at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
        at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
        at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Related information:

NAME            STATUS                     ROLES    AGE   VERSION
192.168.1.130   Ready,SchedulingDisabled   master   28m   v1.12.2
192.168.1.131   Ready                      node     28m   v1.12.2
192.168.1.132   Ready                      node     28m   v1.12.2
NAME                                          READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE
pod/mysql-master-m4j8r                        1/1     Running   0          20m   172.20.1.5   192.168.1.131   <none>
pod/mysql-slave-xz67m                         1/1     Running   0          20m   172.20.2.5   192.168.1.132   <none>
pod/nacos-0                                   1/1     Running   0          19m   172.20.1.6   192.168.1.131   <none>
pod/nacos-1                                   1/1     Running   0          19m   172.20.2.6   192.168.1.132   <none>
pod/nfs-client-provisioner-659fdbfdbb-ph6r6   1/1     Running   0          22m   172.20.2.4   192.168.1.132   <none>

NAME                                 DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                            SELECTOR
replicationcontroller/mysql-master   1         1         1       20m   master       nacos/nacos-mysql-master:latest   name=mysql-master
replicationcontroller/mysql-slave    1         1         1       20m   slave        nacos/nacos-mysql-slave:latest    name=mysql-slave

NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/kubernetes       ClusterIP   10.68.0.1      <none>        443/TCP    33m   <none>
service/mysql-master     ClusterIP   10.68.72.159   <none>        3306/TCP   20m   name=mysql-master
service/mysql-slave      ClusterIP   10.68.125.87   <none>        3306/TCP   20m   name=mysql-slave
service/nacos-headless   ClusterIP   None           <none>        8848/TCP   19m   app=nacos

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS               IMAGES                                                   SELECTOR
deployment.apps/nfs-client-provisioner   1         1         1            1           22m   nfs-client-provisioner   quay.io/external_storage/nfs-client-provisioner:latest   app=nfs-client-provisioner

NAME                                                DESIRED   CURRENT   READY   AGE   CONTAINERS               IMAGES                                                   SELECTOR
replicaset.apps/nfs-client-provisioner-659fdbfdbb   1         1         1       22m   nfs-client-provisioner   quay.io/external_storage/nfs-client-provisioner:latest   app=nfs-client-provisioner,pod-template-hash=659fdbfdbb

NAME                     DESIRED   CURRENT   AGE   CONTAINERS   IMAGES
statefulset.apps/nacos   2         2         19m   nacos        registry-vpc.cn-shanghai.aliyuncs.com/yqcloud-tools/nacos-server:latest


java.lang.IllegalStateException: unable to find local peer

java.lang.IllegalStateException: unable to find local peer: nacos-0.nacos-headless.default.svc.cluster.local.:8848, all peers: [nacos-0.nacos-headless.default.svc.cluster.local:8848, nacos-1.nacos-headless.default.svc.cluster.local:8848, nacos-2.nacos-headless.default.svc.cluster.local:8848]
at com.alibaba.nacos.naming.consistency.persistent.raft.RaftPeerSet.local(RaftPeerSet.java:211)
at com.alibaba.nacos.naming.monitor.PerformanceLoggerThread.collectmetrics(PerformanceLoggerThread.java:123)
at sun.reflect.GeneratedMethodAccessor74.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Ran into this problem when deploying on k8s and cannot resolve it.

server is STARTING now, please try again later!

Scene

  1. Nacos-K8S is deployed in an Aliyun K8S cluster
  2. Nacos is exposed with a k8s Ingress and can be accessed through the domain
  3. The config file is prepared through the nacos console
  4. I can get the config file through 'curl -X GET "http://cluster-ip:8848/nacos/v1/cs/configs?dataId=nacos.cfg.dataId&group=test"', and I can also get the config file in a Spring Cloud project.

Question:
I can't register a service, either from the Spring Cloud project or through 'curl -X PUT 'http://cluster-ip:8848/nacos/v1/ns/instance?serviceName=nacos.naming.serviceName&ip=20.18.7.10&port=8080''. Spring Cloud returns failed to req API:/nacos/v1/ns/instance after all servers([cluster-ip:8848]) tried, and the curl request returns server is STARTING now, please try again later!.

What can I do to solve this problem? Thanks!

How to deploy in openshift 3.11

Following the Readme.md, I cloned the source code and ran ./quick-startup.sh on the master node of my OpenShift cluster, but the pods cannot be started:

oc get po
NAME READY STATUS RESTARTS AGE
docker-registry-1-rp5s4 1/1 Running 689 76d
nacos-0 0/1 CrashLoopBackOff 10 31m
nacos-1 0/1 CrashLoopBackOff 10 30m
nacos-2 0/1 CrashLoopBackOff 10 30m
registry-console-1-g5wfz 1/1 Running 649 76d
router-1-69sjb 1/1 Running 32 76d

The nacos pods are all in CrashLoopBackOff as shown above.

And the log says "Permission denied":

...

ERROR in ch.qos.logback.core.rolling.RollingFileAppender[naming-rt] - openFile(/home/nacos/logs/naming-rt.log,true) call failed. java.io.FileNotFoundException: /home/nacos/logs/naming-rt.log (Permission denied)
ERROR in ch.qos.logback.core.rolling.RollingFileAppender[startLog] - openFile(/home/nacos/logs/config-server.log,true) call failed. java.io.FileNotFoundException: /home/nacos/logs/config-server.log (Permission denied)
ERROR in ch.qos.logback.core.rolling.RollingFileAppender[rootFile] - openFile(/home/nacos/logs/nacos.log,true) call failed. java.io.FileNotFoundException: /home/nacos/logs/nacos.log (Permission denied)
ERROR in ch.qos.logback.core.rolling.RollingFileAppender[nacos-address] - openFile(/home/nacos/logs/nacos-address.log,true) call failed. java.io.FileNotFoundException: /home/nacos/logs/nacos-address.log (Permission denied)
at org.springframework.boot.logging.logback.LogbackLoggingSystem.loadConfiguration(LogbackLoggingSystem.java:169)

....


How can I deal with this?
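A plausible workaround, offered as an assumption rather than a confirmed fix: OpenShift runs containers under a random non-root UID by default, which can leave /home/nacos/logs unwritable for this image. Granting the anyuid SCC to the project's default service account lets the container run as its built-in user:

# Run as a cluster admin in the project where nacos is deployed
oc adm policy add-scc-to-user anyuid -z default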
