
Comments (39)

Ryan-Git avatar Ryan-Git commented on August 18, 2024

From the monitoring, the problem is that reads on the source side are slow. Does the source have no primary key, only a composite unique key?
Batched reads keyed on a composite unique key are slow, and that has little to do with the sync pipeline itself. You can try lowering the batch size and raising the concurrency; beyond that, it depends on the state of the source.

from gravity.

zhanjianS avatar zhanjianS commented on August 18, 2024

From the monitoring, the problem is that reads on the source side are slow. Does the source have no primary key, only a composite unique key? Batched reads keyed on a composite unique key are slow, and that has little to do with the sync pipeline itself. You can try lowering the batch size and raising the concurrency; beyond that, it depends on the state of the source.

There is a primary key. A simplified version of the table definition:

CREATE TABLE sqldataviewtest.test_aaa (
id bigint(20) unsigned NOT NULL DEFAULT '0' COMMENT 'id',
cc_id varchar(32) NOT NULL DEFAULT '',
domain_id varchar(64) NOT NULL DEFAULT '' COMMENT '',
app_type int(11) unsigned NOT NULL DEFAULT '0' COMMENT '',
phone varchar(32) NOT NULL DEFAULT '' COMMENT 'phone number',
created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
deleted_at timestamp NULL DEFAULT NULL COMMENT 'deletion time',
is_deleted bigint(20) NOT NULL DEFAULT '0' COMMENT 'deleted flag: 0 = not deleted, > 0 = deleted',
PRIMARY KEY (id),
UNIQUE KEY uniq_phone_group_domain_type (cc_id,domain_id,phone,is_deleted,app_type) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='test table'

zhanjianS avatar zhanjianS commented on August 18, 2024

From the monitoring, the problem is that reads on the source side are slow. Does the source have no primary key, only a composite unique key? Batched reads keyed on a composite unique key are slow, and that has little to do with the sync pipeline itself. You can try lowering the batch size and raising the concurrency; beyond that, it depends on the state of the source.

By "lower the batch size and raise the concurrency", do you mean tuning the scheduler: raising nr-worker and lowering queue-size?

Ryan-Git avatar Ryan-Git commented on August 18, 2024

That's odd then. Can you check why execution on the source is slow? The expected SQL is select * from xx where id > xxx limit 10000; it should not take 4-5 seconds.
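The batched scan described here (read forward from the last primary key seen, in the "WHERE id > xxx LIMIT n" shape quoted above) can be sketched roughly as follows; this uses sqlite3 purely for illustration, and the table and column names are hypothetical:

```python
import sqlite3

# Illustrative keyset-paginated table scan: each batch resumes from the
# last id seen, so no batch pays an OFFSET cost.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE xx (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO xx VALUES (?, ?)", [(i, "row%d" % i) for i in range(1, 26)]
)

def scan(conn, batch=10):
    last_id, batches, total = 0, 0, 0
    while True:
        rows = conn.execute(
            "SELECT id, payload FROM xx WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch),
        ).fetchall()
        if not rows:
            break
        batches += 1
        total += len(rows)
        last_id = rows[-1][0]  # resume point for the next batch
    return batches, total

print(scan(conn))  # (3, 25): batches of 10, 10 and 5
```

Because each batch is an index range read starting at the last id, the cost per batch stays flat as the table grows, which is why such a query normally should not take seconds.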

No. Lower input.config.table-scan-batch and raise input.config.batch-per-second-limit.
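A hedged sketch of what that change could look like against the input section of the config shown later in this thread (the values here are illustrative, not recommendations):

```json
"input": {
  "config": {
    "table-scan-batch": 500,
    "batch-per-second-limit": 800
  }
}
```

Lowering table-scan-batch shrinks each individual read on the source, while a higher batch-per-second-limit allows more batches per second so overall throughput is maintained.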

zhanjianS avatar zhanjianS commented on August 18, 2024

That's odd then. Can you check why execution on the source is slow? The expected SQL is select * from xx where id > xxx limit 10000; it should not take 4-5 seconds.

No. Lower input.config.table-scan-batch and raise input.config.batch-per-second-limit.

Sorry, my wording was off. This is in incremental (stream) mode: when the MySQL side runs a large batched update such as insert into xxx select * from xxxx, syncing it to TiDB and Kafka is slow. It is not in full-scan mode.


Ryan-Git avatar Ryan-Git commented on August 18, 2024

Oh... then it's not a sync problem. It's analogous to MySQL's own replication lag: a large transaction flushes its binlog only at commit, and during that window gravity has nothing to sync.

zhanjianS avatar zhanjianS commented on August 18, 2024

It doesn't look like slow binlog production:
1) Tables without a unique index see similar large-transaction updates, yet they sync quickly.
2) For the table above, I loaded test data with DataX, which writes in small batches. After the DataX import finished, gravity's observed sync rate was still slow.

Ryan-Git avatar Ryan-Git commented on August 18, 2024

1. Worth looking at the concrete scenario.
2. As I recall, DataX scans the table rather than reading the binlog, so it naturally doesn't hit this. If you can, compare against canal or MySQL's own replication, or have gravity output the raw binlog events directly.

zhanjianS avatar zhanjianS commented on August 18, 2024

1. Worth looking at the concrete scenario. 2. As I recall, DataX scans the table rather than reading the binlog, so it naturally doesn't hit this. If you can, compare against canal or MySQL's own replication, or have gravity output the raw binlog events directly.

1. I ran a small test. It doesn't fully reproduce the problem, but the table with a unique index is somewhat slower than the one without.
2. With gravity's output.type changed to stdout, the sync rate is fast.

Test case:

-- create the MEMORY staging table

CREATE TABLE `vote_record_memory` (
`id` INT (11) NOT NULL AUTO_INCREMENT,
`user_id` VARCHAR (64) NOT NULL,
`vote_id` INT (11) NOT NULL,
`group_id` INT (11) NOT NULL,
`create_time` datetime NOT NULL,
`phone` varchar(32) NOT NULL DEFAULT '' COMMENT '手机号',
`is_deleted` bigint(20) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `index_id` (`user_id`) USING HASH
) ENGINE = MEMORY AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8;


-- table with an ordinary index
CREATE TABLE `vote_record` (
`id` INT (11) NOT NULL AUTO_INCREMENT,
`user_id` VARCHAR (64) NOT NULL,
`vote_id` INT (11) NOT NULL,
`group_id` INT (11) NOT NULL,
`create_time` datetime NOT NULL,
`phone` varchar(32) NOT NULL DEFAULT '' COMMENT '手机号',
`is_deleted` bigint(20) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `index_user_id` (`user_id`) USING HASH
) ENGINE = INNODB AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8;



-- table with a unique index
CREATE TABLE `vote_record_2` (
`id` INT (11) NOT NULL AUTO_INCREMENT,
`user_id` VARCHAR (64) NOT NULL,
`vote_id` INT (11) NOT NULL,
`group_id` INT (11) NOT NULL,
`create_time` datetime NOT NULL,
`phone` varchar(32) NOT NULL DEFAULT '' COMMENT '手机号',
`is_deleted` bigint(20) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `index_user_id` (`user_id`,group_id,phone,is_deleted) USING HASH
) ENGINE = INNODB AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8;



-- create FUNCTION rand_string
CREATE FUNCTION `rand_string`(n INT) RETURNS varchar(255) CHARSET latin1
BEGIN
DECLARE chars_str varchar(100) DEFAULT 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456781212129';
DECLARE return_str varchar(255) DEFAULT '';
DECLARE i INT DEFAULT 0;
WHILE i < n DO
SET return_str = concat(return_str, substring(chars_str, FLOOR(1 + RAND()*62), 1));
SET i = i + 1;
END WHILE;
RETURN return_str;
END


-- create PROCEDURE add_vote_memory_1
CREATE PROCEDURE `add_vote_memory_1`(IN n int)
BEGIN
DECLARE i INT DEFAULT 1;
WHILE (i <= n) DO
INSERT INTO vote_record_memory (user_id,vote_id,group_id,create_time,phone,is_deleted) VALUES (rand_string(20),FLOOR(RAND() * 1000),FLOOR(RAND() * 100),now(),rand_string(50),FLOOR(RAND() * 1000));
SET i = i + 1;
END WHILE;
END


-- generate 500000 rows
CALL add_vote_memory_1(500000);
select count(1) from vote_record_memory;

-- copy data into vote_record
TRUNCATE table vote_record;
INSERT into vote_record SELECT * from vote_record_memory limit 100000;

-- load vote_record_2 (unique index)
REPLACE into vote_record_2 SELECT * from vote_record_memory limit 500000;

-- load vote_record (no unique index)
REPLACE into vote_record SELECT * from vote_record_memory limit 500000;
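One property of REPLACE worth keeping in mind for these tests (a minimal sqlite3 sketch; MySQL's REPLACE has the same delete-then-insert semantics, and the table here is hypothetical): when an incoming row conflicts on a unique key, REPLACE must detect the conflict, delete the old row, and insert the new one, so a conflicting row produces two row changes instead of one plain insert.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (id INTEGER PRIMARY KEY, user_id TEXT, UNIQUE (user_id))"
)
conn.execute("INSERT INTO t VALUES (1, 'alice')")

# Conflicts on the unique key user_id: the old row (id=1) is removed
# and the new row (id=2) inserted in its place.
conn.execute("REPLACE INTO t VALUES (2, 'alice')")

rows = conn.execute("SELECT id, user_id FROM t").fetchall()
print(rows)  # [(2, 'alice')]
```

On a table with no unique key besides an auto-increment primary key, the same REPLACE never conflicts and degenerates into plain inserts, which is one reason the two tables can behave differently under load.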

Without the unique index:
[screenshot]

With the unique index:
[screenshot]

Full configuration:

{
  "command": [
    "/gravity",
    "-config=/etc/gravity/config.json"
  ],
  "paused": false,
  "lastUpdate": "2022-08-19T04:48:36Z",
  "config": {
    "name": "new-tidb-test-2",
    "version": "1.0",
    "input": {
      "type": "mysql",
      "mode": "stream",
      "config": {
        "batch-per-second-limit": 400,
        "nr-scanner": 400,
        "source": {
          "host": "xxxx",
          "password": "xxxx",
          "port": 3306,
          "username": "xxxx"
        },
        "table-scan-batch": 2000
      }
    },
    "filters": [
      {
        "type": "reject",
        "config": {
          "match-schema": "test*",
          "match-table": [
            "*"
          ]
        }
      }
    ],
    "output": {
      "type": "mysql",
      "config": {
        "enable-ddl": false,
        "routes": [
          {
            "match-schema": "*",
            "match-table": [
              "vote_record*"
            ],
            "target-schema": "sqldataviewtest",
            "target-table": "vote_record"
          }
        ],
        "target": {
          "host": "xxxx",
          "password": "xxxx",
          "port": 4000,
          "username": "xxxx"
        }
      }
    },
    "scheduler": {
      "type": "batch-table-scheduler",
      "config": {
        "batch-size": 60,
        "healthy-threshold": 3600,
        "nr-worker": 600,
        "queue-size": 500,
        "sliding-window-size": 500
      }
    }
  },
  "configHash": "855f55989c",
  "image": "xxx/gravity:v0.9.71"
}


Ryan-Git avatar Ryan-Git commented on August 18, 2024

A unique index needs an extra conflict check, so it will be somewhat slower. But this much slower is definitely not expected; let me take a look.

zhanjianS avatar zhanjianS commented on August 18, 2024

A unique index needs an extra conflict check, so it will be somewhat slower. But this much slower is definitely not expected; let me take a look.

Hello, is there any progress on this?

Ryan-Git avatar Ryan-Git commented on August 18, 2024

I tried the example you provided on MySQL and saw almost no difference. Can you try the same?

If that holds, look into why the target side is slow. A few years ago TiDB reportedly had fairly serious lock-conflict problems around unique indexes; I don't know the current state.

zhanjianS avatar zhanjianS commented on August 18, 2024

I tried the example you provided on MySQL and saw almost no difference. Can you try the same?

If that holds, look into why the target side is slow. A few years ago TiDB reportedly had fairly serious lock-conflict problems around unique indexes; I don't know the current state.

With Kafka as the output target it is also quite slow, and I have already dropped the unique index on the TiDB side.
Our source is an Alibaba Cloud RDS instance; could that be related?

Ryan-Git avatar Ryan-Git commented on August 18, 2024

That looks unrelated. The monitoring screenshot shows the pipeline blocked on the target side. You can look directly at the output latency metrics.

zhanjianS avatar zhanjianS commented on August 18, 2024

Hello, here are all the metrics; see whether they are useful.

[screenshots]

I have also prepared test data and table definitions; please see whether you can reproduce with them.

Table definitions:

CREATE TABLE member_team_customer_3 (
id bigint(20) unsigned NOT NULL,
team_id bigint(20) NOT NULL DEFAULT '0' COMMENT '',
customer_id varchar(64) NOT NULL DEFAULT '' COMMENT 'id',
batch_flag int(11) DEFAULT '0' COMMENT 'batch flag',
is_deleted bigint(20) NOT NULL DEFAULT '0' COMMENT 'deleted flag: 0 = not deleted, > 0 = deleted',
remark varchar(255) DEFAULT NULL COMMENT 'remark',
created_at timestamp NULL DEFAULT CURRENT_TIMESTAMP,
updated_at timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
deleted_at timestamp NULL DEFAULT NULL,
PRIMARY KEY (id),
UNIQUE KEY uniq_team_deleted_customer (team_id,customer_id),
KEY idx_batch_flag (team_id,batch_flag),
KEY idx_customer_id_delete (customer_id,is_deleted) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='test table'

CREATE TABLE member_team_customer_4 (
id bigint(20) unsigned NOT NULL,
team_id bigint(20) NOT NULL DEFAULT '0' COMMENT '',
customer_id varchar(64) NOT NULL DEFAULT '' COMMENT 'id',
batch_flag int(11) DEFAULT '0' COMMENT 'batch flag',
is_deleted bigint(20) NOT NULL DEFAULT '0' COMMENT 'deleted flag: 0 = not deleted, > 0 = deleted',
remark varchar(255) DEFAULT NULL COMMENT 'remark',
created_at timestamp NULL DEFAULT CURRENT_TIMESTAMP,
updated_at timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
deleted_at timestamp NULL DEFAULT NULL,
PRIMARY KEY (id),
KEY idx_batch_flag (team_id,batch_flag),
KEY idx_customer_id_delete (customer_id,is_deleted) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='customer grouping intermediate table'

The test data file can't be attached here, so I put it on GitHub; please download it from: https://github.com/zhanjianS/Test/blob/master/test.csv

The target is TiDB, the configuration is identical, the target data is truncated before each test, and the same test data is used.
The only difference between member_team_customer_3 and member_team_customer_4 is whether the uniq_team_deleted_customer unique key exists.

Output-side configuration:

"output": {
"type": "mysql",
"config": {
"enable-ddl": false,
"routes": [
{
"match-schema": "xx",
"match-table": [
"member_team_customer*"
],
"target-schema": "xx",
"target-table": "member_team_customer"
}
],
"target": {
"host": "xx",
"password": "xx",
"port": xx,
"username": "xx"
}
}
},
"scheduler": {
"type": "batch-table-scheduler",
"config": {
"batch-size": 60,
"healthy-threshold": 3600,
"nr-worker": 600,
"queue-size": 500,
"sliding-window-size": 500
}
}
},

Test 1: replicate into the table without the unique index; querying the target, it catches up within a few seconds.
TRUNCATE table member_team_customer_4;
REPLACE into member_team_customer_4 select * from member_team_customer_3;

Test 2: replicate into the table with the unique index; querying the target, the row count grows by only about 200 rows at a time.
TRUNCATE table member_team_customer_3;
REPLACE into member_team_customer_3 select * from member_team_customer_4;

Ryan-Git avatar Ryan-Git commented on August 18, 2024

It still looks like writes are slow; the input queue is full. What does latency look like on the TiDB side?

Try increasing the number of output connections. The default is the 20 connections used for MySQL.

zhanjianS avatar zhanjianS commented on August 18, 2024

I raised the parameters
"max-idle": 200,
"max-open": 200,
and it is still the same.

It doesn't look like a TiDB problem. Same configuration, same test data; the only difference between the two test statements is whether the table carries the unique index, yet the streaming sync rates differ.

zhanjianS avatar zhanjianS commented on August 18, 2024

Hello, have you had a chance to look at this? Also, with the same test data and the output switched to Kafka, the table with the unique index still syncs more slowly.

Ryan-Git avatar Ryan-Git commented on August 18, 2024

I tried the example you provided on MySQL and saw almost no difference. Can you try the same?

What about with MySQL as the target?

zhanjianS avatar zhanjianS commented on August 18, 2024

I tested that too; it's the same. The table with the unique index syncs more slowly.
[screenshot]

Ryan-Git avatar Ryan-Git commented on August 18, 2024

Grab a flame graph, then.

zhanjianS avatar zhanjianS commented on August 18, 2024

Grab a flame graph, then.

How exactly do I capture one? We deploy with K8s.

Ryan-Git avatar Ryan-Git commented on August 18, 2024

Grab a flame graph, then.

How exactly do I capture one? We deploy with K8s.

It's exposed via pprof:
go tool pprof http://localhost:8080/debug/pprof/profile?seconds=30

zhanjianS avatar zhanjianS commented on August 18, 2024

Here are two SVG files captured with go-torch:

Writing to Kafka:
kafka_profile-local

Writing to TiDB:
tidb-test-2-profile-local

Ryan-Git avatar Ryan-Git commented on August 18, 2024

These SVGs don't show the full stack names...

zhanjianS avatar zhanjianS commented on August 18, 2024

These SVGs don't show the full stack names...

Do you mean they are unreadable in the rendered image, or did I capture them incorrectly? I uploaded the files but they display as images; does downloading them work?

Ryan-Git avatar Ryan-Git commented on August 18, 2024

Downloading does work; let me take another look.

Ryan-Git avatar Ryan-Git commented on August 18, 2024

The metadata handling for composite unique indexes had a problem; see the PR for details.

zhanjianS avatar zhanjianS commented on August 18, 2024

Deploying the latest code on k8s fails with the following:

github.com/siddontang/go-mysql/replication.(*BinlogSyncer).onStream(0xc0001cd8c0, 0xc0007f0180)
/root/go/pkg/mod/github.com/siddontang/[email protected]/replication/binlogsyncer.go:637 +0xbb fp=0xc000917fc0 sp=0xc000917e90 pc=0x1645cfb
github.com/siddontang/go-mysql/replication.(*BinlogSyncer).startDumpStream.func1()
/root/go/pkg/mod/github.com/siddontang/[email protected]/replication/binlogsyncer.go:351 +0x2a fp=0xc000917fe0 sp=0xc000917fc0 pc=0x164442a
runtime.goexit()
/usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000917fe8 sp=0xc000917fe0 pc=0xc19801
created by github.com/siddontang/go-mysql/replication.(*BinlogSyncer).startDumpStream
/root/go/pkg/mod/github.com/siddontang/[email protected]/replication/binlogsyncer.go:351 +0x10d

goroutine 384 [select, locked to thread]:
runtime.gopark(0xc0010967a8?, 0x2?, 0x10?, 0x80?, 0xc0010967a4?)
/usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc001096618 sp=0xc0010965f8 pc=0xbe7156
runtime.selectgo(0xc0010967a8, 0xc0010967a0, 0x0?, 0x0, 0x0?, 0x1)
/usr/lib/golang/src/runtime/select.go:328 +0x7bc fp=0xc001096758 sp=0xc001096618 pc=0xbf743c
runtime.ensureSigM.func1()
/usr/lib/golang/src/runtime/signal_unix.go:991 +0x1b0 fp=0xc0010967e0 sp=0xc001096758 pc=0xbfb8b0
runtime.goexit()
/usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0010967e8 sp=0xc0010967e0 pc=0xc19801
created by runtime.ensureSigM
/usr/lib/golang/src/runtime/signal_unix.go:974 +0xbd

goroutine 802 [syscall]:
runtime.notetsleepg(0x0?, 0x0?)
/usr/lib/golang/src/runtime/lock_futex.go:236 +0x34 fp=0xc001172fa0 sp=0xc001172f68 pc=0xbb6cd4
os/signal.signal_recv()
/usr/lib/golang/src/runtime/sigqueue.go:152 +0x2f fp=0xc001172fc0 sp=0xc001172fa0 pc=0xc15daf
os/signal.loop()
/usr/lib/golang/src/os/signal/signal_unix.go:23 +0x19 fp=0xc001172fe0 sp=0xc001172fc0 pc=0xf51dd9
runtime.goexit()
/usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc001172fe8 sp=0xc001172fe0 pc=0xc19801
created by os/signal.Notify.func1.1
/usr/lib/golang/src/os/signal/signal.go:151 +0x2a

goroutine 818 [chan receive]:
runtime.gopark(0xc000993e68?, 0x11c5cd7?, 0x0?, 0xc3?, 0xc10176c7a4c26545?)
/usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000993e38 sp=0xc000993e18 pc=0xbe7156
runtime.chanrecv(0xc0004ffd40, 0xc000993f70, 0x1)
/usr/lib/golang/src/runtime/chan.go:583 +0x49b fp=0xc000993ec8 sp=0xc000993e38 pc=0xbb14bb
runtime.chanrecv2(0x23f20c0?, 0xc0014e9f08?)
/usr/lib/golang/src/runtime/chan.go:447 +0x18 fp=0xc000993ef0 sp=0xc000993ec8 pc=0xbb0ff8
github.com/moiot/gravity/pkg/sliding_window.(*staticSlidingWindow).start(0xc000844000)
/root/gravity/pkg/sliding_window/static_sliding_window.go:158 +0x3d4 fp=0xc000993fc8 sp=0xc000993ef0 pc=0x1770674
github.com/moiot/gravity/pkg/sliding_window.NewStaticSlidingWindow.func1()
/root/gravity/pkg/sliding_window/static_sliding_window.go:219 +0x26 fp=0xc000993fe0 sp=0xc000993fc8 pc=0x1771186
runtime.goexit()
/usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000993fe8 sp=0xc000993fe0 pc=0xc19801
created by github.com/moiot/gravity/pkg/sliding_window.NewStaticSlidingWindow
/root/gravity/pkg/sliding_window/static_sliding_window.go:219 +0x19a

goroutine 819 [select]:
runtime.gopark(0xc001b5ef30?, 0x2?, 0x2e?, 0x58?, 0xc001b5eeec?)
/usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc001b5ed50 sp=0xc001b5ed30 pc=0xbe7156
runtime.selectgo(0xc001b5ef30, 0xc001b5eee8, 0x0?, 0x0, 0x3?, 0x1)
/usr/lib/golang/src/runtime/select.go:328 +0x7bc fp=0xc001b5ee90 sp=0xc001b5ed50 pc=0xbf743c
github.com/moiot/gravity/pkg/sliding_window.(*staticSlidingWindow).reportMetrics(0xc000844000)
/root/gravity/pkg/sliding_window/static_sliding_window.go:172 +0x114 fp=0xc001b5efc8 sp=0xc001b5ee90 pc=0x1770894
github.com/moiot/gravity/pkg/sliding_window.NewStaticSlidingWindow.func2()
/root/gravity/pkg/sliding_window/static_sliding_window.go:220 +0x26 fp=0xc001b5efe0 sp=0xc001b5efc8 pc=0x1771126
runtime.goexit()
/usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc001b5efe8 sp=0xc001b5efe0 pc=0xc19801
created by github.com/moiot/gravity/pkg/sliding_window.NewStaticSlidingWindow
/root/gravity/pkg/sliding_window/static_sliding_window.go:220 +0x1d7

goroutine 835 [select]:
runtime.gopark(0xc0008eef80?, 0x2?, 0xf8?, 0xed?, 0xc0008eef40?)
/usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc0008eedc0 sp=0xc0008eeda0 pc=0xbe7156
runtime.selectgo(0xc0008eef80, 0xc0008eef3c, 0xbb025d?, 0x0, 0x23dd950?, 0x1)
/usr/lib/golang/src/runtime/select.go:328 +0x7bc fp=0xc0008eef00 sp=0xc0008eedc0 pc=0xbf743c
github.com/go-sql-driver/mysql.(*mysqlConn).startWatcher.func1()
/root/go/pkg/mod/github.com/go-sql-driver/[email protected]/connection.go:621 +0xaa fp=0xc0008eefe0 sp=0xc0008eef00 pc=0x160f68a
runtime.goexit()
/usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0008eefe8 sp=0xc0008eefe0 pc=0xc19801
created by github.com/go-sql-driver/mysql.(*mysqlConn).startWatcher
/root/go/pkg/mod/github.com/go-sql-driver/[email protected]/connection.go:618 +0x10a

goroutine 793 [IO wait]:
runtime.gopark(0xc0015ebf20?, 0xb?, 0x0?, 0x0?, 0x11?)
/usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc0005acde8 sp=0xc0005acdc8 pc=0xbe7156
runtime.netpollblock(0xc70d05?, 0x11c32e5?, 0x0?)
/usr/lib/golang/src/runtime/netpoll.go:526 +0xf7 fp=0xc0005ace20 sp=0xc0005acde8 pc=0xbdf877
internal/poll.runtime_pollWait(0x7f0bea013bb8, 0x72)
/usr/lib/golang/src/runtime/netpoll.go:305 +0x89 fp=0xc0005ace40 sp=0xc0005ace20 pc=0xc13449
internal/poll.(*pollDesc).wait(0xc000122980?, 0xc0015f8191?, 0x0)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:84 +0x32 fp=0xc0005ace68 sp=0xc0005ace40 pc=0xc8d8f2
internal/poll.(*pollDesc).waitRead(...)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000122980, {0xc0015f8191, 0x1, 0x1})
/usr/lib/golang/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc0005acee8 sp=0xc0005ace68 pc=0xc8edda
net.(*netFD).Read(0xc000122980, {0xc0015f8191?, 0x1c9d1c0?, 0xc000880b28?})
/usr/lib/golang/src/net/fd_posix.go:55 +0x29 fp=0xc0005acf30 sp=0xc0005acee8 pc=0xdad5a9
net.(*conn).Read(0xc000119758, {0xc0015f8191?, 0x0?, 0x0?})
/usr/lib/golang/src/net/net.go:183 +0x45 fp=0xc0005acf78 sp=0xc0005acf30 pc=0xdc1925
net/http.(*connReader).backgroundRead(0xc0015f8180)
/usr/lib/golang/src/net/http/server.go:678 +0x3f fp=0xc0005acfc8 sp=0xc0005acf78 pc=0xec89df
net/http.(*connReader).startBackgroundRead.func2()
/usr/lib/golang/src/net/http/server.go:674 +0x26 fp=0xc0005acfe0 sp=0xc0005acfc8 pc=0xec8906
runtime.goexit()
/usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0005acfe8 sp=0xc0005acfe0 pc=0xc19801
created by net/http.(*connReader).startBackgroundRead
/usr/lib/golang/src/net/http/server.go:674 +0xca


Ryan-Git avatar Ryan-Git commented on August 18, 2024

What do the logs say? This doesn't look related to the change; the binlog subscription errors out right away. Could the binlog already have been purged?

zhanjianS avatar zhanjianS commented on August 18, 2024

The binlog hasn't been purged. We deploy with gravity-operator.

I pulled the latest code and built:
make build-linux
docker build -t gravity:v0.9.72 -f Dockerfile.gravity . --no-cache

After switching the gravity container image to gravity:v0.9.72, it starts erroring and the container keeps restarting.

The log above is from inside the container.

The collected log keeps repeating this error:
[screenshot]

Rolling back to v0.9.71 works with no errors.

Also, running the container standalone works fine:
docker run -v ${PWD}/config.toml:/etc/gravity/config.toml -d --net=host gravity:v0.9.72

Ryan-Git avatar Ryan-Git commented on August 18, 2024

It still looks like an error connecting to MySQL. Is there any other error information on stdout?
Strange, since the go-mysql version wasn't changed this time.

[screenshot]

zhanjianS avatar zhanjianS commented on August 18, 2024

Here is the full log I could get, with IPs and passwords redacted:

gravity.log

Does golang/go#53462 look like this problem?

Ryan-Git avatar Ryan-Git commented on August 18, 2024

This log is completely different from what you pasted earlier...
It looks like a Go version issue. Which Go version did you build the image with?

zhanjianS avatar zhanjianS commented on August 18, 2024

The earlier log came from log collection, so it was probably only partial.

The build machine had Go 1.19. After switching to 1.17.2 and rebuilding, it works.

Ryan-Git avatar Ryan-Git commented on August 18, 2024

OK.
Then it does look like the issue linked above; reflection2 needs a version bump. I'll bump it later.
Running standalone works because nothing calls the health-check endpoint; in the cluster k8s does call it, but the underlying problem is the same.

zhanjianS avatar zhanjianS commented on August 18, 2024

I have another question. If a MySQL column is declared DEFAULT CURRENT_TIMESTAMP, then on the downstream side the column takes the default (the current time) instead of the source row's value; and if I change the downstream default to NULL, the synced value becomes NULL.

This problem appeared once before and was fixed in #335.

Could it be caused by this line in the latest commit? The direct else if bypasses the first if check, so DEFAULT CURRENT_TIMESTAMP gets ignored.
[screenshot]
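The symptom described above can be reproduced in miniature (a sqlite3 sketch for illustration only; the table is hypothetical, but sqlite honors DEFAULT CURRENT_TIMESTAMP the same way): if the replicator omits a column with DEFAULT CURRENT_TIMESTAMP from its INSERT, the target's own default fires and the row gets the replica's clock; writing the column explicitly preserves the source value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (id INTEGER PRIMARY KEY,"
    " created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP)"
)

# Column omitted: the target's DEFAULT fires, losing the source value.
conn.execute("INSERT INTO t (id) VALUES (1)")

# Column written explicitly: the source value is preserved.
conn.execute("INSERT INTO t (id, created_at) VALUES (2, '2020-01-01 00:00:00')")

rows = dict(conn.execute("SELECT id, created_at FROM t"))
print(rows[2])             # 2020-01-01 00:00:00
print(rows[1] != rows[2])  # True: row 1 took the local clock instead
```

This is why a replicator must always write timestamp columns explicitly rather than relying on the target's column defaults.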

from gravity.

Ryan-Git avatar Ryan-Git commented on August 18, 2024

I'll fix it. This should have been opened as a new issue; filing a PR directly would also be fine.

