radondb / radondb-mysql-kubernetes
Open Source, High Availability Cluster, based on MySQL
License: Apache License 2.0
kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-krypton-0 2/3 CrashLoopBackOff 6 13m
Initializing database
2021-03-16T17:33:55.774963+08:00 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2021-03-16T17:33:55.776426+08:00 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2021-03-16T17:33:55.776454+08:00 0 [ERROR] Aborting
Leader switch when xenon on the slave node quits
The log is noisy:
test-krypton-1: the old leader -> the new slave
2021/04/07 11:39:21.222243 api.go:280: [INFO] mysql.slave.status:&{ 0 false false }
2021/04/07 11:39:21.222424 api.go:290: [INFO] mysql.master.status:&{mysql-bin.000002 154 true true 0 }
2021/04/07 11:39:22.973750 trace.go:37: [ERROR] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].send.heartbeat.to.peer[test-krypton-0.test-krypton.default.svc.cluster.local:8801].new.client.error[dial tcp: lookup test-krypton-0.test-krypton.default.svc.cluster.local: no such host]
2021/04/07 11:39:22.973784 trace.go:37: [ERROR] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].send.heartbeat.get.rsp[N:, V:0, E:0].error[ErrorRpcCall]
2021/04/07 11:39:24.974169 trace.go:37: [ERROR] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].send.heartbeat.to.peer[test-krypton-0.test-krypton.default.svc.cluster.local:8801].new.client.error[dial tcp: lookup test-krypton-0.test-krypton.default.svc.cluster.local: no such host]
2021/04/07 11:39:24.974207 trace.go:37: [ERROR] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].send.heartbeat.get.rsp[N:, V:0, E:0].error[ErrorRpcCall]
2021/04/07 11:39:25.227313 trace.go:32: [WARNING] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].get.voterequest.from[{Raft:{EpochID:2 ViewID:12 Leader: From:test-krypton-0.test-krypton.default.svc.cluster.local:8801 To:test-krypton-1.test-krypton.default.svc.cluster.local:8801 State:} Repl:{Master_Host: Master_Port:0 Repl_User: Repl_Password:} GTID:{Master_Log_File:mysql-bin.000002 Read_Master_Log_Pos:154 Relay_Master_Log_File: Slave_IO_Running:false Slave_IO_Running_Str:No Slave_SQL_Running:true Slave_SQL_Running_Str:Yes Retrieved_GTID_Set: Executed_GTID_Set: Seconds_Behind_Master: Slave_SQL_Running_State:Slave has read all relay log; waiting for more updates Last_Error: Last_IO_Error: Last_SQL_Error:} Peers:[] IdlePeers:[]}]
2021/04/07 11:39:25.227571 api.go:280: [INFO] mysql.slave.status:&{ 0 false false }
2021/04/07 11:39:25.228121 api.go:290: [INFO] mysql.master.status:&{mysql-bin.000002 154 true true 0 }
2021/04/07 11:39:25.228151 api.go:104: [WARNING] mysql.gtid.compare.this[{mysql-bin.000002 154 true true 0 }].from[&{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }]
2021/04/07 11:39:25.228165 trace.go:32: [WARNING] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].get.requestvote.from[N:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].degrade.to.follower
2021/04/07 11:39:25.228172 trace.go:32: [WARNING] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].do.updateViewID[FROM:11 TO:12]
2021/04/07 11:39:25.228177 trace.go:32: [WARNING] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].degrade.to.follower.stop.the.vip...
2021/04/07 11:39:25.334647 trace.go:32: [WARNING] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].leaderStopShellCommand[[-c /scripts/leader-stop.sh]].done
2021/04/07 11:39:25.334685 trace.go:27: [INFO] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].check.semi-sync.thread.stop...
2021/04/07 11:39:25.334692 trace.go:27: [INFO] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].check.gtid.thread.stop...
2021/04/07 11:39:25.334712 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].leader.state.machine.exit.done
2021/04/07 11:39:25.334719 trace.go:27: [INFO] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].start.CheckBrainSplit
2021/04/07 11:39:25.334739 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].state.init
2021/04/07 11:39:25.510949 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].leaderStopShellCommand[[-c /scripts/leader-stop.sh]].done
2021/04/07 11:39:25.510996 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.waitMysqlDoneAsync.prepare
2021/04/07 11:39:25.511017 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].state.machine.run
2021/04/07 11:39:26.228289 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.SetReadOnly.done
2021/04/07 11:39:26.228474 api.go:376: [ERROR] mysql[localhost:3306].SetSlaveGlobalSysVar.error[Error 1193: Unknown system variable 'tokudb_fsync_log_period'].var[SET GLOBAL tokudb_fsync_log_period=1000]
2021/04/07 11:39:26.228749 api.go:379: [WARNING] mysql[localhost:3306].SetSlaveGlobalSysVar[tokudb_fsync_log_period=1000;sync_binlog=1000;innodb_flush_log_at_trx_commit=1]
2021/04/07 11:39:26.228774 trace.go:37: [ERROR] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.SetSlaveGlobalSysVar.error[Error 1193: Unknown system variable 'tokudb_fsync_log_period']
2021/04/07 11:39:26.228782 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.SetSlaveGlobalSysVar.done
2021/04/07 11:39:26.228789 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].prepareAsync.done
2021/04/07 11:39:26.245368 trace.go:37: [ERROR] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.StartSlave.error[Error 1200: The server is not configured as slave; fix in config file or with CHANGE MASTER TO]
2021/04/07 11:39:26.245396 api.go:280: [INFO] mysql.slave.status:&{ 0 false false }
2021/04/07 11:39:26.245575 api.go:290: [INFO] mysql.master.status:&{mysql-bin.000002 154 true true 0 }
2021/04/07 11:39:26.245593 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].init.my.gtid.is:{mysql-bin.000002 154 true true 0 }
2021/04/07 11:39:26.245619 api.go:280: [INFO] mysql.slave.status:&{ 0 false false }
2021/04/07 11:39:26.308777 api.go:290: [INFO] mysql.master.status:&{mysql-bin.000002 154 true true 0 }
2021/04/07 11:39:26.308839 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.heartbeat.my.gtid.is:{mysql-bin.000002 154 true true 0 }
2021/04/07 11:39:26.309177 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.heartbeat.from[N:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].change.mysql.master
2021/04/07 11:39:26.345505 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.heartbeat.change.to.the.new.master[test-krypton-0.test-krypton.default.svc.cluster.local:8801].successed
test-krypton-0: the old slave -> the new leader
2021/04/07 11:39:20.006492 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].ping.responses[1].is.less.than.half.maybe.brain.split
2021/04/07 11:39:20.009268 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].ping.responses[2].is.greater.than.half.again
2021/04/07 11:39:25.218234 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].timeout.to.do.new.election
2021/04/07 11:39:25.218766 api.go:280: [INFO] mysql.slave.status:&{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }
2021/04/07 11:39:25.218785 api.go:56: [WARNING] mysql[localhost:3306].Promotable.sql_thread[true]
2021/04/07 11:39:25.218793 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].timeout.and.ping.almost.node.successed.promote.to.candidate
2021/04/07 11:39:25.219031 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].follower.state.machine.exit
2021/04/07 11:39:25.219056 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].state.machine.run
2021/04/07 11:39:25.219098 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].prepare.send.requestvote.to[test-krypton-2.test-krypton.default.svc.cluster.local:8801]
2021/04/07 11:39:25.219192 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].prepare.send.requestvote.to[test-krypton-1.test-krypton.default.svc.cluster.local:8801]
2021/04/07 11:39:25.219569 api.go:280: [INFO] mysql.slave.status:&{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }
2021/04/07 11:39:25.219621 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].send.requestvote.to.peer[test-krypton-2.test-krypton.default.svc.cluster.local:8801].request.gtid[{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }]
2021/04/07 11:39:25.219706 api.go:280: [INFO] mysql.slave.status:&{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }
2021/04/07 11:39:25.219726 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].send.requestvote.to.peer[test-krypton-1.test-krypton.default.svc.cluster.local:8801].request.gtid[{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }]
2021/04/07 11:39:25.226070 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].send.requestvote.done.to[test-krypton-2.test-krypton.default.svc.cluster.local:8801]
2021/04/07 11:39:25.332622 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].send.requestvote.done.to[test-krypton-1.test-krypton.default.svc.cluster.local:8801]
2021/04/07 11:39:25.332677 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.vote.response.from[N:test-krypton-2.test-krypton.default.svc.cluster.local:8801, R:FOLLOWER].rsp.gtid[{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }].retcode[OK]
2021/04/07 11:39:25.332690 trace.go:27: [INFO] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.vote.response.from[N:test-krypton-2.test-krypton.default.svc.cluster.local:8801, V:11].ok.votegranted[2].majoyrity[2]
2021/04/07 11:39:25.332705 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.vote.response.from[N:test-krypton-1.test-krypton.default.svc.cluster.local:8801, R:LEADER].rsp.gtid[{mysql-bin.000002 154 true true 0 }].retcode[OK]
2021/04/07 11:39:25.332725 trace.go:27: [INFO] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.vote.response.from[N:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11].ok.votegranted[3].majoyrity[2]
2021/04/07 11:39:25.332733 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].grants.unanimous.votes[3]/members[3].become.leader
2021/04/07 11:39:25.332746 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].candidate.state.machine.exit
2021/04/07 11:39:25.332755 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].state.init
2021/04/07 11:39:25.332769 trace.go:27: [INFO] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].purge.binlog.start[300000ms]...
2021/04/07 11:39:25.332779 trace.go:27: [INFO] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].check.semi-sync.thread.start[5000ms]...
2021/04/07 11:39:25.332787 trace.go:27: [INFO] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].check.gtid.thread.start[5000ms]...
2021/04/07 11:39:25.332794 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].async.setting.prepare....
2021/04/07 11:39:25.332829 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].state.machine.run
2021/04/07 11:39:25.334038 api.go:280: [INFO] mysql.slave.status:&{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }
2021/04/07 11:39:25.334064 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].my.gtid.is:{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }
2021/04/07 11:39:25.334074 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].1. mysql.WaitUntilAfterGTID.prepare
2021/04/07 11:39:25.392320 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.WaitUntilAfterGTID.done
2021/04/07 11:39:25.392348 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].2. mysql.ChangeToMaster.prepare
2021/04/07 11:39:25.412773 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.ChangeToMaster.done
2021/04/07 11:39:25.412829 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].3. mysql.EnableSemiSyncMaster.prepare
2021/04/07 11:39:25.413856 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.EnableSemiSyncMaster.done
2021/04/07 11:39:25.413868 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].4.mysql.SetSysVars.prepare
2021/04/07 11:39:25.414082 api.go:356: [ERROR] mysql[localhost:3306].SetMasterGlobalSysVar.error[Error 1193: Unknown system variable 'tokudb_fsync_log_period'].var[SET GLOBAL tokudb_fsync_log_period=default]
2021/04/07 11:39:25.414338 api.go:359: [WARNING] mysql[localhost:3306].SetMasterGlobalSysVar[tokudb_fsync_log_period=default;sync_binlog=default;innodb_flush_log_at_trx_commit=default]
2021/04/07 11:39:25.414348 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.SetSysVars.done
2021/04/07 11:39:25.414354 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].5. mysql.SetReadWrite.prepare
2021/04/07 11:39:25.414572 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.SetReadWrite.done
2021/04/07 11:39:25.414585 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].6. start.vip.prepare
2021/04/07 11:39:25.531878 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].leaderStartShellCommand[[-c /scripts/leader-start.sh]].done
2021/04/07 11:39:25.531905 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].start.vip.done
2021/04/07 11:39:25.531920 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].async.setting.all.done....
2021/04/07 11:39:30.333380 api.go:280: [INFO] mysql.slave.status:&{ 0 false false }
2021/04/07 11:39:30.333841 api.go:290: [INFO] mysql.master.status:&{mysql-bin.000002 154 true true 0 }
Refactor README.md
Is your feature request related to a problem? Please describe.
I want to customize and standardize the information I'd like contributors to include when they open issues and pull requests in radondb-mysql-kubernetes.
Describe the solution you'd like
Referring to the GitHub documentation, add pull request and issue templates for radondb-mysql-kubernetes.
Describe alternatives you've considered
N/A
Additional context
N/A
Add a sidecar to support viewing the MySQL slow log
Is your feature request related to a problem? Please describe.
Add a status API to support updating the cluster status
Describe the solution you'd like
The cluster status can then be updated promptly.
Describe alternatives you've considered
Additional context
root@i-2xctojb9:~/krypton-helm# kubectl get pv |grep krypton
pvc-61bf00bd-0395-4c0d-ab2a-51dd612c496e 10Gi RWO Delete Terminating default/data-test-krypton-2 csi-standard 7d2h
root@i-2xctojb9:~/krypton-helm# kubectl delete pv pvc-61bf00bd-0395-4c0d-ab2a-51dd612c496e --grace-period=0 --force
After 1 min, 5 min, ..., the command still hangs.
Description:
When installing the cluster using an existing PVC, the slave reports the error Slave failed to initialize relay log info structure from the repository.
Steps to reproduce:
# use the existing PVC
helm install test
# after success, do CRUD on MySQL, then uninstall the cluster and retain the PVC
helm delete test
# recreate test
helm install test
Reason:
The binlog and relay log aren't mounted.
Change the keyword xenondb to radondb-mysql
Describe the problem
make fails with Go 1.16.6
To Reproduce
#make vet
go vet ./...
# github.com/radondb/radondb-mysql-kubernetes/utils
utils/unsafe.go:33:35: possible misuse of reflect.StringHeader
utils/unsafe.go:45:35: possible misuse of reflect.SliceHeader
make: *** [vet] Error 2
Expected behavior
#make vet
go vet ./...
# completes without errors
Environment:
2021/04/13 02:02:26.712090 api.go:379: [WARNING] mysql[localhost:3306].SetSlaveGlobalSysVar[tokudb_fsync_log_period=1000;sync_binlog=1000;innodb_flush_log_at_trx_commit=1]
2021/04/13 02:02:26.712117 trace.go:37: [ERROR] FOLLOWER[ID:kr-test-kryp-0.kr-test-kryp.krypton-deploy.svc.cluster.local:8801, V:0, E:0].mysql.SetSlaveGlobalSysVar.error[Error 1193: Unknown system variable 'tokudb_fsync_log_period']
If initTokudb is false, the cluster does not install the TokuDB engine, so the variable tokudb_fsync_log_period cannot be recognized.
Describe the problem
innodb_buffer_pool_size is one of MysqlConf's keys.
type MysqlConf map[string]intstr.IntOrString
The type of innodbBufferPoolSize is int32.
type IntOrString struct {
Type Type `protobuf:"varint,1,opt,name=type,casttype=Type"`
IntVal int32 `protobuf:"varint,2,opt,name=intVal"`
StrVal string `protobuf:"bytes,3,opt,name=strVal"`
}
func (c *Cluster) EnsureMysqlConf() {
...
c.Spec.MysqlOpts.MysqlConf["innodb_buffer_pool_size"] = intstr.FromInt(int(innodbBufferPoolSize))
c.Spec.MysqlOpts.MysqlConf["innodb_buffer_pool_instances"] = intstr.FromInt(int(instances))
}
func FromInt(val int) IntOrString {
if val > math.MaxInt32 || val < math.MinInt32 {
klog.Errorf("value: %d overflows int32\n%s\n", val, debug.Stack())
}
return IntOrString{Type: Int, IntVal: int32(val)}
}
In fact, innodbBufferPoolSize is likely to overflow int32.
To Reproduce
Expected behavior
Use another data type to store innodbBufferPoolSize, or change the unit of memory.
Environment:
Add a table of contents:
# **Deploy a RadonDB MySQL Cluster on KubeSphere**
## **Introduction**
RadonDB MySQL is an open-source, high-availability, cloud-native cluster solution based on MySQL. By using the Raft protocol, RadonDB MySQL can fail over quickly without losing any transactions.
## **Deployment Preparation**
### **Install KubeSphere**
Choose one of the following installation methods:
...
Changed to:
* [<strong>Deploy a RadonDB MySQL Cluster on KubeSphere</strong>](#在-kubesphere-上部署-radondb-mysql-集群)
  * [<strong>Introduction</strong>](#简介)
  * [<strong>Deployment Preparation</strong>](#部署准备)
    * [<strong>Install KubeSphere</strong>](#安装-kubesphere)
# **Deploy a RadonDB MySQL Cluster on KubeSphere**
## **Introduction**
RadonDB MySQL is an open-source, high-availability, cloud-native cluster solution based on MySQL. By using the Raft protocol, RadonDB MySQL can fail over quickly without losing any transactions.
## **Deployment Preparation**
### **Install KubeSphere**
Choose one of the following installation methods:
Describe the problem
Failed to build manager docker image:
[+] Building 3.2s (4/4) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 980B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 35B 0.0s
=> [internal] load metadata for docker.io/library/golang:1.16 1.2s
=> ERROR [internal] load metadata for gcr.azk8s.cn/distroless/static:nonroot 3.1s
------
> [internal] load metadata for gcr.azk8s.cn/distroless/static:nonroot:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: unexpected status code [manifests nonroot]: 403 Forbidden
see:
https://kubesphere.com.cn/forum/d/1084-gcr-azk8s-cn
https://github.com/GoogleContainerTools/distroless
In the Dockerfile, we pull the image from gcr.azk8s.cn:
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
FROM gcr.azk8s.cn/distroless/static:nonroot
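One possible fix, assuming the build host can reach gcr.io, is to pull the upstream distroless image instead of the gcr.azk8s.cn mirror that now returns 403:

```dockerfile
# Use distroless as minimal base image to package the manager binary
FROM gcr.io/distroless/static:nonroot
```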
To Reproduce
Expected behavior
Environment:
Is your feature request related to a problem? Please describe.
Add workflow
Add Travis CI
Describe the solution you'd like
Describe alternatives you've considered
Additional context
Description of the problem:
After connecting to the master node, SQL statements (such as creating a table) get no response. After force-quitting MySQL and reconnecting, I found that the SQL had actually been executed on the master node, but the slave nodes were not synchronized.
How to deploy:
Deploy by uploading the template in the KubeSphere console.
Deployment parameters:
Default.
Other configurations:
The project name is krypton-deploy and the release name is kryptondb.
Readiness probe failed: runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
PC=0x7f01c023b18b m=0 sigcode=18446744073709551610
goroutine 0 [idle]:
runtime: unknown pc 0x7f01c023b18b
stack: frame={sp:0x7fff3a945c00, fp:0x0} stack=[0x7fff3a147278,0x7fff3a9462b0)
00007fff3a945b00: 00007f01c043d190 00007f01c043d4f8
00007fff3a945b10: 0000000000000000 00007fff3a945b64
00007fff3a945b20: 00007fff00000000 00007fff00000001
00007fff3a945b30: 0000000000679694 <crypto/tls.(*serverHelloMsg).marshal.func1.2.4.1+116> 00007fff3a945c20
00007fff3a945b40: 00007f01c01ff860 00007f01c040a500
00007fff3a945b50: 0000000000000004 0000000000000004
00007fff3a945b60: 000000000022be0e 0000000000000000
00007fff3a945b70: 0000000000000000 000000008ff9ac08
00007fff3a945b80: 00007f01c043d4f8 0000000000c55078
00007fff3a945b90: 00000000009a111a 0000000002c04c10
00007fff3a945ba0: 0000000000000000 000000000098bd64
00007fff3a945bb0: 0000000000000000 00007f01c041f187
00007fff3a945bc0: 0000000000000005 0000000000000000
00007fff3a945bd0: 0000000000000001 00007f01c01ff860
00007fff3a945be0: 00007fff3a945e30 00007f01c0426aa7
00007fff3a945bf0: 000000000000000a 00007f01c030621f
00007fff3a945c00: <0000000000000000 00007f01c03e1643
00007fff3a945c10: 00007f01c03e34b0 000000000000000a
00007fff3a945c20: 0000000000000037 0000000000a28e40
00007fff3a945c30: 000000000000037f 0000000000000000
00007fff3a945c40: 0000000000000000 0000ffff00001fa0
00007fff3a945c50: 0000000000000000 0000000000000000
00007fff3a945c60: 0000000000000000 0000000000000000
00007fff3a945c70: 0000000000000000 0000000000000000
00007fff3a945c80: fffffffe7fffffff ffffffffffffffff
00007fff3a945c90: ffffffffffffffff ffffffffffffffff
00007fff3a945ca0: ffffffffffffffff ffffffffffffffff
00007fff3a945cb0: ffffffffffffffff ffffffffffffffff
00007fff3a945cc0: ffffffffffffffff ffffffffffffffff
00007fff3a945cd0: ffffffffffffffff ffffffffffffffff
00007fff3a945ce0: ffffffffffffffff ffffffffffffffff
00007fff3a945cf0: ffffffffffffffff ffffffffffffffff
runtime: unknown pc 0x7f01c023b18b
stack: frame={sp:0x7fff3a945c00, fp:0x0} stack=[0x7fff3a147278,0x7fff3a9462b0)
00007fff3a945b00: 00007f01c043d190 00007f01c043d4f8
00007fff3a945b10: 0000000000000000 00007fff3a945b64
00007fff3a945b20: 00007fff00000000 00007fff00000001
00007fff3a945b30: 0000000000679694 <crypto/tls.(*serverHelloMsg).marshal.func1.2.4.1+116> 00007fff3a945c20
00007fff3a945b40: 00007f01c01ff860 00007f01c040a500
00007fff3a945b50: 0000000000000004 0000000000000004
00007fff3a945b60: 000000000022be0e 0000000000000000
00007fff3a945b70: 0000000000000000 000000008ff9ac08
00007fff3a945b80: 00007f01c043d4f8 0000000000c55078
00007fff3a945b90: 00000000009a111a 0000000002c04c10
00007fff3a945ba0: 0000000000000000 000000000098bd64
00007fff3a945bb0: 0000000000000000 00007f01c041f187
00007fff3a945bc0: 0000000000000005 0000000000000000
00007fff3a945bd0: 0000000000000001 00007f01c01ff860
00007fff3a945be0: 00007fff3a945e30 00007f01c0426aa7
00007fff3a945bf0: 000000000000000a 00007f01c030621f
00007fff3a945c00: <0000000000000000 00007f01c03e1643
00007fff3a945c10: 00007f01c03e34b0 000000000000000a
00007fff3a945c20: 0000000000000037 0000000000a28e40
00007fff3a945c30: 000000000000037f 0000000000000000
00007fff3a945c40: 0000000000000000 0000ffff00001fa0
00007fff3a945c50: 0000000000000000 0000000000000000
00007fff3a945c60: 0000000000000000 0000000000000000
00007fff3a945c70: 0000000000000000 0000000000000000
00007fff3a945c80: fffffffe7fffffff ffffffffffffffff
00007fff3a945c90: ffffffffffffffff ffffffffffffffff
00007fff3a945ca0: ffffffffffffffff ffffffffffffffff
00007fff3a945cb0: ffffffffffffffff ffffffffffffffff
00007fff3a945cc0: ffffffffffffffff ffffffffffffffff
00007fff3a945cd0: ffffffffffffffff ffffffffffffffff
00007fff3a945ce0: ffffffffffffffff ffffffffffffffff
00007fff3a945cf0: ffffffffffffffff ffffffffffffffff
goroutine 1 [runnable, locked to thread]:
text/template/parse.(*lexer).nextItem(...)
/usr/local/go/src/text/template/parse/lex.go:195
text/template/parse.(*Tree).next(...)
/usr/local/go/src/text/template/parse/parse.go:64
text/template/parse.(*Tree).peekNonSpace(0xc000106100, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/text/template/parse/parse.go:113 +0x154
text/template/parse.(*Tree).itemList(0xc000106100, 0x8f65e4, 0x5, 0xc00006e0c0)
/usr/local/go/src/text/template/parse/parse.go:326 +0xbe
text/template/parse.(*Tree).parseControl(0xc000106100, 0x915700, 0x8f65e4, 0x5, 0x0, 0x0, 0xc00006e0c0, 0x0, 0x0)
/usr/local/go/src/text/template/parse/parse.go:459 +0x100
text/template/parse.(*Tree).rangeControl(0xc000106100, 0x1d, 0x9c)
/usr/local/go/src/text/template/parse/parse.go:501 +0x4c
text/template/parse.(*Tree).action(0xc000106100, 0xa, 0x9a)
/usr/local/go/src/text/template/parse/parse.go:368 +0x4d7
text/template/parse.(*Tree).textOrAction(0xc000106100, 0xa, 0x9a)
/usr/local/go/src/text/template/parse/parse.go:345 +0x293
text/template/parse.(*Tree).itemList(0xc000106100, 0x8f65e4, 0x5, 0xc00006e060)
/usr/local/go/src/text/template/parse/parse.go:327 +0xf9
text/template/parse.(*Tree).parseControl(0xc000106100, 0x0, 0x8f65e4, 0x5, 0x0, 0x0, 0xc00006e060, 0x0, 0x0)
/usr/local/go/src/text/template/parse/parse.go:459 +0x100
text/template/parse.(*Tree).rangeControl(0xc000106100, 0x1d, 0x2b)
/usr/local/go/src/text/template/parse/parse.go:501 +0x4c
text/template/parse.(*Tree).action(0xc000106100, 0xa, 0x29)
/usr/local/go/src/text/template/parse/parse.go:368 +0x4d7
text/template/parse.(*Tree).textOrAction(0xc000106100, 0x1d, 0x2b)
/usr/local/go/src/text/template/parse/parse.go:345 +0x293
text/template/parse.(*Tree).parse(0xc000106100)
/usr/local/go/src/text/template/parse/parse.go:291 +0x381
text/template/parse.(*Tree).Parse(0xc000106100, 0x91573d, 0x172, 0x0, 0x0, 0x0, 0x0, 0xc000093a70, 0xc000091d40, 0x2, ...)
/usr/local/go/src/text/template/parse/parse.go:230 +0x247
text/template/parse.Parse(0x8f95bb, 0x9, 0x91573d, 0x172, 0x0, 0x0, 0x0, 0x0, 0xc000091d40, 0x2, ...)
/usr/local/go/src/text/template/parse/parse.go:55 +0x122
text/template.(*Template).Parse(0xc0000cf640, 0x91573d, 0x172, 0xc0000b2140, 0xc0000cf680, 0xc000093860)
/usr/local/go/src/text/template/template.go:200 +0x111
html/template.(*Template).Parse(0xc000093a40, 0x91573d, 0x172, 0x0, 0x0, 0x0)
/usr/local/go/src/html/template/template.go:189 +0x99
net/rpc.init()
/usr/local/go/src/net/rpc/debug.go:39 +0x9b
Is your feature request related to a problem? Please describe.
todo:
deploy_radondb-mysql_on_kubernetes
Describe the solution you'd like
charts -> charts/helm
Describe alternatives you've considered
Additional context
When replicaCount is 1:
1. Remove krypton container.
2. Set global read_only=off.
3. The label 'role' is master.
4. Remove slave service.
Is your feature request related to a problem? Please describe.
We need xtrabackup backups to the cloud for the cluster.
Describe the solution you'd like
Make a backup like this:
kubectl apply -f mysql_v1alpha1_backup.yaml
to back up the database data to S3 storage or a persistent volume.
Then the backup copy could be used to create a cluster: add a RestoreFrom field in mysql_v1alpha1_cluster.yaml, then
kubectl apply -f mysql_v1alpha1_cluster.yaml
creates the cluster from the backup copy.
Describe alternatives you've considered
Additional context
master->leader
slave->follower
tt-krypton-master NodePort 10.96.42.194 <none> 3306:32370/TCP 102m
tt-krypton-metrics ClusterIP None <none> 9104/TCP 102m
tt-krypton-slave NodePort 10.96.21.243 <none> 3306:30545/TCP 102m
Is your feature request related to a problem? Please describe.
Describe the solution you'd like
For example, with pod0, pod1, and pod2:
if pod2 stops, then before the xenon container stops, run xenoncli cluster remove pod2:8801 in pod0 and pod1.
Describe alternatives you've considered
Additional context
Is your feature request related to a problem? Please describe.
We need to use the latest version of kubebuilder, v3.1.0.
Describe the solution you'd like
None
Describe alternatives you've considered
Additional context
Change all occurrences of the word krypton to xenondb or xenon.
1. Prepare
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-port=30160 --mysql-db=test_db --mysql-user=usr --mysql-password=123456 --table_size=400000 --tables=8 --threads=256 --events=0 --report-interval=10 --time=600 --mysql-host=192.168.0.3 --table_size=400000 prepare
2. Run
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-port=30160 --mysql-db=test_db --mysql-user=usr --mysql-password=123456 --table_size=400000 --tables=8 --threads=256 --events=0 --report-interval=10 --time=600 --mysql-host=192.168.0.3 --table_size=400000 run
When more than max_prepared_stmt_count statements are prepared (current value: 16382), it reports an error:
FATAL: mysql_stmt_prepare() failed
FATAL: MySQL error: 1461 "Can't create more than max_prepared_stmt_count statements (current value: 16382)"
FATAL: `thread_init' function failed: /usr/share/sysbench/oltp_common.lua:275: SQL API error
FATAL: mysql_stmt_prepare() failed
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-port=30160 --mysql-db=test_db --mysql-user=usr --mysql-password=123456 --table_size=400000 --tables=8 --threads=256 --events=0 --report-interval=10 --time=600 --mysql-host=192.168.0.3 --table_size=400000 cleanup
The old leader is test-xenondb-0; the old followers are test-xenondb-1 and test-xenondb-2.
Run the script against the leader node:
sysbench --db-driver=mysql --mysql-user=qingcloud --mysql-password=Qing@123 --mysql-host=<host> --mysql-port=<port> --mysql-db=qingcloud --range_size=100 --table_size=100000 --tables=4 --threads=128 --events=0 --time=3600 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua run
Delete the leader pod:
kubectl delete pod <leader-pod-name>
Then the leader pod automatically restarts, and a new leader is elected after the restart completes, but the label of the new leader pod is not updated:
kind: Pod
apiVersion: v1
metadata:
name: test-xenondb-2
generateName: test-xenondb-
namespace: xenondb-deploy
labels:
app: test-xenondb
controller-revision-hash: test-xenondb-5c8949f646
release: test
role: follower
statefulset.kubernetes.io/pod-name: test-xenondb-2
The IO status of the new leader node is false.
/ $ xenoncli cluster status
+-------------------------------------------------+--------------------------------+--------+---------+--------------------------+--------------------+----------------+-------------------------------------------------+
| ID | Raft | Mysqld | Monitor | Backup | Mysql | IO/SQL_RUNNING | MyLeader |
+-------------------------------------------------+--------------------------------+--------+---------+--------------------------+--------------------+----------------+-------------------------------------------------+
| test-xenondb-2.test-xenondb.xenondb-deploy:8801 | [ViewID:32 EpochID:2]@LEADER | UNKNOW | OFF | state:[NONE] | [ALIVE] [READONLY] | [false/true] | test-xenondb-2.test-xenondb.xenondb-deploy:8801 |
| | | | | LastError: | | | |
+-------------------------------------------------+--------------------------------+--------+---------+--------------------------+--------------------+----------------+-------------------------------------------------+
| test-xenondb-0.test-xenondb.xenondb-deploy:8801 | [ViewID:32 EpochID:2]@FOLLOWER | UNKNOW | OFF | state:[NONE] | [ALIVE] [READONLY] | [true/true] | test-xenondb-2.test-xenondb.xenondb-deploy:8801 |
| | | | | LastError: | | | |
+-------------------------------------------------+--------------------------------+--------+---------+--------------------------+--------------------+----------------+-------------------------------------------------+
| test-xenondb-1.test-xenondb.xenondb-deploy:8801 | [ViewID:32 EpochID:2]@FOLLOWER | UNKNOW | OFF | state:[NONE] | [ALIVE] [READONLY] | [true/true] | test-xenondb-2.test-xenondb.xenondb-deploy:8801 |
| | | | | LastError: | | | |
+-------------------------------------------------+--------------------------------+--------+---------+--------------------------+--------------------+----------------+-------------------------------------------------+
/ $ xenoncli cluster gtid
+-------------------------------------------------+----------+-------+--------------------------------------------------+------------------------------------------------------+
| ID | Raft | Mysql | Executed_GTID_Set | Retrieved_GTID_Set |
+-------------------------------------------------+----------+-------+--------------------------------------------------+------------------------------------------------------+
| test-xenondb-0.test-xenondb.xenondb-deploy:8801 | FOLLOWER | ALIVE | 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-1079443, | |
| | | | 644c2d77-6b63-4843-a1ca-6d8bb45e448c:1-165432, | |
| | | | bc25fa13-b14a-4946-ac41-e4b2cec149e1:1-133170, | |
| | | | f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159277 | |
+-------------------------------------------------+----------+-------+--------------------------------------------------+------------------------------------------------------+
| test-xenondb-1.test-xenondb.xenondb-deploy:8801 | FOLLOWER | ALIVE | 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-341721, | 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:255336-348183 |
| | | | 644c2d77-6b63-4843-a1ca-6d8bb45e448c:1-165432, | |
| | | | bc25fa13-b14a-4946-ac41-e4b2cec149e1:1-133170, | |
| | | | f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159405 | |
+-------------------------------------------------+----------+-------+--------------------------------------------------+------------------------------------------------------+
| test-xenondb-2.test-xenondb.xenondb-deploy:8801 | LEADER | ALIVE | 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-348183, | 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-1079443, |
| | | | 644c2d77-6b63-4843-a1ca-6d8bb45e448c:1-165432, | 644c2d77-6b63-4843-a1ca-6d8bb45e448c:141788-165432, |
| | | | bc25fa13-b14a-4946-ac41-e4b2cec149e1:1-133170, | bc25fa13-b14a-4946-ac41-e4b2cec149e1:84271-133170, |
| | | | f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159277 | f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159277 |
+-------------------------------------------------+----------+-------+--------------------------------------------------+------------------------------------------------------+
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: test-xenondb-0.test-xenondb.xenondb-deploy
Master_User: qc_repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000006
Read_Master_Log_Pos: 173547413
Relay_Log_File: mysql-relay-bin.000004
Relay_Log_Pos: 838635253
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: No
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 978983899
Relay_Log_Space: 2180829427
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 1148
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 2003
Last_IO_Error: error reconnecting to master '[email protected]:3306' - retry-time: 60 retries: 1
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 100
Master_UUID: 17f2fb8e-f6af-4da1-b25a-09e1639d81e3
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: System lock
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp: 210425 10:05:06
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set: 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-1079443,
644c2d77-6b63-4843-a1ca-6d8bb45e448c:141788-165432,
bc25fa13-b14a-4946-ac41-e4b2cec149e1:84271-133170,
f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159277
Executed_Gtid_Set: 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-288523,
644c2d77-6b63-4843-a1ca-6d8bb45e448c:1-165432,
bc25fa13-b14a-4946-ac41-e4b2cec149e1:1-133170,
f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159277
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
The mysql log of test-xenondb-2:
2021-04-25T10:05:06.019869+08:00 157 [Note] Slave for channel '': received end packet from server due to dump thread being killed on master. Dump threads are killed for example during master shutdown, explicitly by a user, or when the master receives a binlog send request from a duplicate server UUID : Error
2021-04-25T10:05:06.020011+08:00 157 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'mysql-bin.000006' at position 173547413 for channel ''
2021-04-25T10:05:06.020051+08:00 157 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2021-04-25T10:05:06.021838+08:00 157 [ERROR] Slave I/O for channel '': error reconnecting to master '[email protected]:3306' - retry-time: 60 retries: 1, Error_code: 2003
2021-04-25T10:05:06.974288+08:00 157 [Note] Slave I/O thread killed during or after a reconnect done to recover from failed read
2021-04-25T10:05:06.974326+08:00 157 [Note] Slave I/O thread exiting for channel '', read up to log 'mysql-bin.000006', position 173547413
2021-04-25T10:05:12.311606+08:00 58170 [Note] Start binlog_dump to master_thread_id(58170) slave_server(101), pos(, 4)
2021-04-25T10:05:12.311651+08:00 58170 [Note] Start semi-sync binlog_dump to slave (server_id: 101), pos(, 4)
2021-04-25T10:05:16.989469+08:00 32 [Note] Semi-sync replication initialized for transactions.
2021-04-25T10:05:16.989546+08:00 32 [Note] Semi-sync replication enabled on the master.
2021-04-25T10:05:16.989841+08:00 0 [Note] Starting ack receiver thread
2021-04-25T10:06:10.012709+08:00 58194 [Note] Start binlog_dump to master_thread_id(58194) slave_server(100), pos(, 4)
2021-04-25T10:06:10.012747+08:00 58194 [Note] Start semi-sync binlog_dump to slave (server_id: 100), pos(, 4)
We use publishNotReadyAddresses to be able to access pods even if the pod is not ready.
When set to true, indicates that DNS implementations must publish the notReadyAddresses of subsets for the Endpoints associated with the Service.
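A minimal sketch of a headless Service with this flag set (names and ports are placeholders, not the operator's actual manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-xenondb               # placeholder name
spec:
  clusterIP: None                  # headless: DNS resolves to pod addresses
  publishNotReadyAddresses: true   # publish DNS records even for not-ready pods
  selector:
    app: test-xenondb
  ports:
    - name: mysql
      port: 3306
```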
Before:
/ $ xenoncli cluster status
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
| ID | Raft | Mysqld | Monitor | Backup | Mysql | IO/SQL_RUNNING | MyLeader |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
| test-krypton-1.test-krypton.default.svc.cluster.local:8801 | [ViewID:3 EpochID:2]@LEADER | UNKNOW | OFF | state:[NONE] | [ALIVE] [READWRITE] | [true/true] | test-krypton-1.test-krypton.default.svc.cluster.local:8801 |
| | | | | LastError: | | | |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
| test-krypton-0.test-krypton.default.svc.cluster.local:8801 | [ViewID:3 EpochID:2]@FOLLOWER | UNKNOW | OFF | state:[NONE] | [ALIVE] [READONLY] | [true/true] | test-krypton-1.test-krypton.default.svc.cluster.local:8801 |
| | | | | LastError: | | | |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
| test-krypton-2.test-krypton.default.svc.cluster.local:8801 | [ViewID:3 EpochID:2]@FOLLOWER | UNKNOW | OFF | state:[NONE] | [ALIVE] [READONLY] | [true/true] | test-krypton-1.test-krypton.default.svc.cluster.local:8801 |
| | | | | LastError: | | | |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
After:
/ $ xenoncli cluster status
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
| ID | Raft | Mysqld | Monitor | Backup | Mysql | IO/SQL_RUNNING | MyLeader |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
| test-krypton-1.test-krypton.default:8801 | [ViewID:3 EpochID:2]@LEADER | UNKNOW | OFF | state:[NONE] | [ALIVE] [READWRITE] | [true/true] | test-krypton-1.test-krypton.default:8801 |
| | | | | LastError: | | | |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
| test-krypton-0.test-krypton.default:8801 | [ViewID:3 EpochID:2]@FOLLOWER | UNKNOW | OFF | state:[NONE] | [ALIVE] [READONLY] | [true/true] | test-krypton-1.test-krypton.default:8801 |
| | | | | LastError: | | | |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
| test-krypton-2.test-krypton.default:8801 | [ViewID:3 EpochID:2]@FOLLOWER | UNKNOW | OFF | state:[NONE] | [ALIVE] [READONLY] | [true/true] | test-krypton-1.test-krypton.default:8801 |
| | | | | LastError: | | | |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
After deploying RadonDB on Kubernetes, the root account cannot log in by default. How do I change this?
Is your feature request related to a problem? Please describe.
It is not recommended to log in directly with the root account. To log in as root with a password, uncomment mysqlRootPassword and set allowEmptyRootPassword to false.
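A sketch of the corresponding values.yaml fragment (the key names come from the issue text; the password is a placeholder):

```yaml
mysqlRootPassword: "RadonDB@123"   # placeholder; set a real password here
allowEmptyRootPassword: false
```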
Describe the solution you'd like
Execute the following instructions in the xenon container of the leader node to create a super user:
xenoncli mysql createsuperuser <user> <host> <password> <YES / NO>
The last parameter indicates whether to enable SSL.
Describe alternatives you've considered
Added a document introducing xenoncli commands
Additional context
none
CMD about Kubernetes
The doc deploy_radondb-mysql_on_kubesphere is too long, and its readability is poor.
It is recommended to split the document according to the different deployment methods, like this:
The default network access method of the service is None; the document should add instructions for exposing the service.
Is your feature request related to a problem? Please describe.
Describe the solution you'd like
Make a backup like this:
Add a field BackupToNFS specifying the NFS server in the YAML file; then
kubectl apply -f mysql_v1alpha1_backup.yaml
backs up to the NFS server.
Add a field RestoreFromNFS specifying the NFS server in the YAML file; then the cluster can be restored by:
kubectl apply -f mysql_v1alpha1_cluster.yaml
Just add
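A hypothetical shape for the proposed manifest (the field name BackupToNFS comes from this request; the surrounding schema is illustrative only):

```yaml
apiVersion: mysql.radondb.com/v1alpha1   # assumed group/version
kind: Backup
metadata:
  name: backup-sample
spec:
  backupToNFS: "<nfs-server-address>"    # proposed field
```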
Describe alternatives you've considered
Additional context
Is your feature request related to a problem? Please describe.
Fix Image to radondb/
Describe the solution you'd like
Describe alternatives you've considered
Additional context
Is your feature request related to a problem? Please describe.
Update the cluster API, add a status API, add a sidecar container, and update the Dockerfile, Makefile...
Describe the solution you'd like
Describe alternatives you've considered
Additional context
# helm lint ./charts/
==> Linting ./charts/
[ERROR] Chart.yaml: version 'beta0.1.0' is not a valid SemVer
[INFO] Chart.yaml: icon is recommended
[ERROR] Chart.yaml: chart type is not valid in apiVersion 'v1'. It is valid in apiVersion 'v2'
Error: 1 chart(s) linted, 1 chart(s) failed
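Both lint errors point at Chart.yaml; a sketch of a version that passes (field values are examples):

```yaml
apiVersion: v2            # `type` is only valid with apiVersion v2
name: radondb-mysql
version: 0.1.0-beta       # valid SemVer: prerelease suffix after the patch number
type: application
```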
Add the method for deploying radondb-mysql from a Helm repo to deploy_radondb-mysql_on_kubesphere.md.
Refer to the Apache APISIX installation docs.
Is your feature request related to a problem? Please describe.
Describe the solution you'd like
testify + a mock framework (monkey/gomock/gostub)
Describe alternatives you've considered
Additional context
Describe the problem
To Reproduce
Expected behavior
Environment:
Is your feature request related to a problem? Please describe.
Describe the solution you'd like
Process some operations in an init container.
Describe alternatives you've considered
Additional context
Is your feature request related to a problem? Please describe.
Describe the solution you'd like
xenon-0, xenon-1, and xenon-2 must be created, not updated.
xenoncli raft trytoleader
for xenon-1:
xenoncli cluster add xenon-0
for xenon-0:
xenoncli cluster add xenon-1
for xenon-2:
xenoncli cluster add xenon-0,xenon-1
for xenon-0:
xenoncli cluster add xenon-2
for xenon-1:
xenoncli cluster add xenon-2
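The add sequence above amounts to telling every node about every other node. A sketch of the full peer matrix (node names as in this issue; echo stands in for actually running xenoncli inside each pod):

```shell
nodes="xenon-0 xenon-1 xenon-2"
count=0
for self in $nodes; do
  for peer in $nodes; do
    [ "$self" = "$peer" ] && continue   # a node does not add itself
    echo "on $self: xenoncli cluster add $peer"
    count=$((count + 1))
  done
done
echo "$count peer additions"            # 3 nodes -> 6 add commands
```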
Describe alternatives you've considered
Additional context
Is your feature request related to a problem? Please describe.
RadonDB MySQL (Helm) currently pulls images from xenondb.
https://github.com/radondb/radondb-mysql-kubernetes/blob/main/charts/helm/values.yaml
Describe the solution you'd like
It should pull from radondb: https://hub.docker.com/u/radondb
Describe alternatives you've considered
Additional context
Is your feature request related to a problem? Please describe.
TODO:
Describe the solution you'd like
reference: deploy_radondb-mysql_on_kubernetes
Describe alternatives you've considered
Additional context
Is your feature request related to a problem? Please describe.
Basic: support manual expansion or reduction of node disk capacity.
Advanced: support automatic capacity expansion.
Describe the solution you'd like
Describe alternatives you've considered
Additional context
- [Kubernetes 平台部署](docs/Kubernetes/deploy_xenondb_on_kubernetes.md)
- [KubeSphere 应用商店部署](docs/KubeSphere/deploy_xenondb_on_kubesphere.md)
Deploy a XenonDB Cluster on Kubernetes
Deploy a XenonDB Cluster on KubeSphere
Modify the description of the two links to be consistent with the title of the original text
Is your feature request related to a problem? Please describe.
Describe the solution you'd like
Upgrade the follower nodes first, and then upgrade the leader node last.
For a two-node cluster, we need to adjust the parameter rpl_semi_sync_master_timeout before upgrading the follower node. Otherwise, the leader node will hang due to semi-sync replication.
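A sketch of the timeout adjustment on the leader before the follower upgrade (the 10000 ms value is an example, not a recommendation):

```sql
-- Fall back to async replication instead of blocking commits
-- while the only follower is down for the upgrade.
SET GLOBAL rpl_semi_sync_master_timeout = 10000;  -- milliseconds
```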
Describe alternatives you've considered
Additional context
Reference: Qingcloud MySQL Plus
mysql> show variables like '%read_only%';
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_read_only | OFF |
| read_only | ON |
| super_read_only | OFF |
| transaction_read_only | OFF |
| tx_read_only | OFF |
+-----------------------+-------+
Describe the problem
After restoring a pod from the database replica backed up in the cloud, we found the following.
The reason: on each start, the script checks whether $DATADIR/mysql exists, and only when it does not exist does it initialize the mysql database and run the init SQL. We suggest moving the docker-entrypoint-initdb.d handling out of the $DATADIR/mysql conditional.
if [ ! -d "$DATADIR/mysql" ]; then
# other code....
mysql=( mysql --protocol=socket -uroot -hlocalhost --socket="${SOCKET}" --password="" )
# other code....
echo
ls /docker-entrypoint-initdb.d/ > /dev/null
for f in /docker-entrypoint-initdb.d/*; do
process_init_file "$f" "${mysql[@]}"
done
if ! kill -s TERM "$pid" || ! wait "$pid"; then
echo >&2 'MySQL init process failed.'
exit 1
fi
sed -i '/server-id/d' /etc/mysql/my.cnf
chown -R mysql:mysql "$DATADIR"
fi
rm -f /var/log/mysql/error.log
rm -f /var/lib/mysql/auto.cnf
uuid=$(cat /proc/sys/kernel/random/uuid)
printf '[auto]\nserver_uuid=%s' $uuid > /var/lib/mysql/auto.cnf
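The suggested restructure could look like the sketch below, with stub helpers standing in for the real logic in mysql-entry.sh: initialization stays behind the directory check, but the init files run on every start.

```shell
DATADIR=$(mktemp -d)                       # stands in for the real data dir
INITDIR=$(mktemp -d)                       # stands in for /docker-entrypoint-initdb.d
initialize_database() { mkdir -p "$DATADIR/mysql"; }          # stub
process_init_file()  { echo "processing $(basename "$1")"; }  # stub

if [ ! -d "$DATADIR/mysql" ]; then
    initialize_database                    # first start only
fi

echo 'SELECT 1;' > "$INITDIR/init.sql"
n=0
for f in "$INITDIR"/*; do                  # now runs on every start
    process_init_file "$f"
    n=$((n + 1))
done
```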
To Reproduce
Expected behavior
Environment:
I tried the approach of the official mysql image: I modified statefulset.yaml in the chart to mount a ConfigMap at /docker-entrypoint-initdb.d in the mysql container, but it did not take effect. So I compared the docker-entrypoint.sh of mysql and xenondb/percona:5.7.33 and found some differences in the related logic:
mysql:5.7:
https://github.com/docker-library/mysql/blob/master/5.7/docker-entrypoint.sh
# usage: docker_process_init_files [file [file [...]]]
# ie: docker_process_init_files /always-initdb.d/*
# process initializer files, based on file extensions
docker_process_init_files() {
# mysql here for backwards compatibility "${mysql[@]}"
mysql=( docker_process_sql )
echo
local f
for f; do
case "$f" in
*.sh)
# https://github.com/docker-library/postgres/issues/450#issuecomment-393167936
# https://github.com/docker-library/postgres/pull/452
if [ -x "$f" ]; then
mysql_note "$0: running $f"
"$f"
else
mysql_note "$0: sourcing $f"
. "$f"
fi
;;
*.sql) mysql_note "$0: running $f"; docker_process_sql < "$f"; echo ;;
*.sql.gz) mysql_note "$0: running $f"; gunzip -c "$f" | docker_process_sql; echo ;;
*.sql.xz) mysql_note "$0: running $f"; xzcat "$f" | docker_process_sql; echo ;;
*) mysql_warn "$0: ignoring $f" ;;
esac
echo
done
}
xenondb/percona:5.7.33:
https://github.com/radondb/radondb-mysql-kubernetes/blob/main/charts/helm/dockerfiles/mysql/mysql-entry.sh
# usage: process_init_file FILENAME MYSQLCOMMAND...
# ie: process_init_file foo.sh mysql -uroot
# (process a single initializer file, based on its extension. we define this
# function here, so that initializer scripts (*.sh) can use the same logic,
# potentially recursively, or override the logic used in subsequent calls)
process_init_file() {
local f="$1"; shift
local mysql=( "$@" )
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*.sql) echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
*) echo "$0: ignoring $f" ;;
esac
echo
}
I would like to ask what the purpose of these changes is, and how radondb-mysql should initialize SQL files.