
clickhouse-mysql-data-reader's People

Contributors

aquarius-wing, bookin, rootzltd, servis, sunsingerus


clickhouse-mysql-data-reader's Issues

First run clickhouse-mysql command got error

[root@ftp ~]# clickhouse-mysql
Traceback (most recent call last):
File "/usr/bin/clickhouse-mysql", line 5, in
from pkg_resources import load_entry_point
ModuleNotFoundError: No module named 'pkg_resources'

I used the RPM installation.

document seems obsolete

doc says

clickhouse-mysql
--src-host=127.0.0.1
--src-user=reader
--src-password=Qwerty1#
--create-table-sql-template
--with-create-database
--src-only-table=airline.ontime > create_clickhouse_table_template.sql

but when I run it, I get:

clickhouse-mysql: error: unrecognized arguments: --src-only-table=airline.ontime

Incorrect data in clickhouse after migration

Hello,

We use clickhouse-mysql to migrate data from MySQL to ClickHouse. After migration, some fields have incorrect values in ClickHouse when the MySQL data is UTF-8 with Russian characters, although newly replicated data arrives correctly. If the same data is migrated with the --migrate-table flag, I see question marks instead of the Russian text.

Steps to reproduce.
In MySQL:

  1. create database test;
  2. CREATE TABLE test.books ( id int(11) NOT NULL AUTO_INCREMENT, author varchar(255) NOT NULL, book varchar(255) NOT NULL, PRIMARY KEY (id)) ENGINE=InnoDB CHARSET=utf8;
  3. insert into test.books value('', 'Лукьяненко', 'Ночной дозор'),('', 'Стругацкие', 'Понедельник начинается в субботу'),('', 'Lukyanenko', 'Night watch'),('', 'Strugatsky', 'Monday starts on Saturday');

In clickhouse:
4. create database test
5. CREATE TABLE IF NOT EXISTS test.books ( primary_date_field Date default today(), id Int32, author String, book String) ENGINE = MergeTree(primary_date_field, (id), 8192);

Start migration from mysql to clickhouse:
6. clickhouse-mysql --src-host=mysql --src-tables=test.books --src-user=user --src-password=password --dst-host=clickhouse-server --dst-schema=test --dst-table=books --dst-user=user --dst-password=password --migrate-table

  1. Select data from MySQL: (screenshot)

  2. Select data from ClickHouse: (screenshot)
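A plausible place to look is the character set negotiated by the MySQL connection that the --migrate-table SELECT runs over: if it is not UTF-8, Cyrillic text is mangled on read, while binlog-based replication presumably carries the row values intact. A minimal check, assuming that cause and reusing the credentials from the steps above, is to fetch the rows with an explicitly UTF-8 connection via MySQLdb (the driver the table migrator uses):

    import MySQLdb  # mysqlclient

    # Assumption: the question marks appear because the SELECT connection
    # negotiates a non-UTF-8 charset. Fetch the rows with utf8mb4 forced and
    # see whether the Russian text survives.
    conn = MySQLdb.connect(host="mysql", user="user", passwd="password",
                           db="test", charset="utf8mb4", use_unicode=True)
    cur = conn.cursor()
    cur.execute("SELECT id, author, book FROM books")
    for row in cur.fetchall():
        print(row)  # should print 'Лукьяненко', 'Ночной дозор', ... unmangled
    conn.close()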

clickhouse-mysql real-time insert of data from MySQL into ClickHouse doesn't work

Hi, team:
There is a situation:
1. Automatically creating the target table in ClickHouse and migrating existing data from MySQL to ClickHouse works fine:
clickhouse-mysql
--src-server-id=166
--src-wait
--nice-pause=1
--src-host=127.0.0.1
--src-user=clickhousereader
--src-password=12345
--src-tables=airline.ontime
--dst-host=127.0.0.1
--dst-create-table
--migrate-table
--pump-data
--csvpool
2. While the script is running, another session inserts a row in MySQL, but the new row cannot be selected in ClickHouse.
info:
[root@mytestuse clickhouse]# clickhouse-mysql \

--src-server-id=166 \
--src-wait \
--nice-pause=1 \
--src-host=127.0.0.1 \
--src-user=clickhousereader \
--src-password=12345 \
--src-tables=airline.ontime \
--dst-host=127.0.0.1 \
--dst-create-table \
--migrate-table \
--pump-data \
--csvpool

2020-09-10 16:12:26,348/1599725546.348313:INFO:Starting
2020-09-10 16:12:26,348/1599725546.348550:DEBUG:{'app': {'binlog_position_file': None,
'config_file': '/etc/clickhouse-mysql/clickhouse-mysql.conf',
......
2020-09-10 16:12:26,498/1599725546.498071:INFO:PoolWriter()
2020-09-10 16:12:26,498/1599725546.498164:DEBUG:Check events in binlog stream
2020-09-10 16:12:26,498/1599725546.498232:DEBUG:undef
2020-09-10 16:12:26,501/1599725546.501361:WARNING:Got an exception, skip it in blocking mode
2020-09-10 16:12:26,501/1599725546.501451:WARNING:'Connection' object has no attribute 'wfile'
2020-09-10 16:12:27,502/1599725547.502636:DEBUG:Check events in binlog stream
2020-09-10 16:12:27,502/1599725547.502819:DEBUG:Pre-start binlog position: mysqlbin.000032:4086
2020-09-10 16:12:27,506/1599725547.506407:WARNING:Got an exception, skip it in blocking mode
2020-09-10 16:12:27,506/1599725547.506510:WARNING:'Connection' object has no attribute 'wfile'
2020-09-10 16:12:28,507/1599725548.507796:DEBUG:Check events in binlog stream
.....
2020-09-10 16:12:39,565/1599725559.565709:DEBUG:Pre-start binlog position: mysqlbin.000032:4086
2020-09-10 16:12:39,570/1599725559.570876:WARNING:Got an exception, skip it in blocking mode
2020-09-10 16:12:39,571/1599725559.571060:WARNING:'Connection' object has no attribute 'wfile'
^C2020-09-10 16:12:39,715/1599725559.715296:INFO:SIGINT received. Time to exit.
2020-09-10 16:12:39,715/1599725559.715575:INFO:start 1599725546
2020-09-10 16:12:39,715/1599725559.715738:INFO:end 1599725559
2020-09-10 16:12:39,715/1599725559.715876:INFO:len 13
[root@mytestuse clickhouse]#

map argument #1 must support iteration error on replication

Hello, we are evaluating clickhouse-mysql-data-reader replication, and got the following error in the logs:

2018-12-04 16:54:15,973/1543942455.973548:DEBUG:class:<class 'clickhouse_mysql.writer.processwriter.ProcessWriter'> process()
2018-12-04 16:54:15,974/1543942455.974890:INFO:CSVWriter() csv_file_path=None csv_file_path_prefix=/root/ch_ csv_file_path_suffix_parts=[] csv_keep_file=False dst_schema=sl dst_table=None
2018-12-04 16:54:15,975/1543942455.975752:INFO:CSVWriter() self.path=/root/ch_1543942455.9753294_c9b08f22-72ec-48f8-8b0a-f82e614de73c.csv
2018-12-04 16:54:15,976/1543942455.976219:WARNING:No events to insert. class: <class 'clickhouse_mysql.writer.csvwriter.CSVWriter'>
2018-12-04 16:54:15,976/1543942455.976954:INFO:CHCSWriter() connection_settings={'host': '127.0.0.1', 'port': 9000, 'user': 'default', 'password': ''} dst_schema=sl dst_table=None
2018-12-04 16:54:15,977/1543942455.977393:DEBUG:class:<class 'clickhouse_mysql.writer.chcsvwriter.CHCSVWriter'> insert 1 rows
--this line was added by me trying to figure out what was going on
2018-12-04 16:54:15,977/1543942455.977737:DEBUG:event: <clickhouse_mysql.event.event.Event object at 0x0000000092b8afe0>, attrs: {'schema': 'sl', 'table': None, 'filename': '/root/ch_1543942455.9753294_c9b08f22-72ec-48f8-8b0a-f82e614de73c.csv', 'fieldnames': None}, events: [<clickhouse_mysql.event.event.Event object at 0x0000000092b8afe0>]
Process Process-11:
Traceback (most recent call last):
  File "/root/pypi/pypy3.5-6.0.0-linux_x86_64-portable/lib-python/3/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/root/pypi/pypy3.5-6.0.0-linux_x86_64-portable/lib-python/3/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/root/pypi/pypy3.5-6.0.0-linux_x86_64-portable/site-packages/clickhouse_mysql-0.0.20180321-py3.5.egg/clickhouse_mysql/writer/processwriter.py", line 34, in process
    writer.push()
  File "/root/pypi/pypy3.5-6.0.0-linux_x86_64-portable/site-packages/clickhouse_mysql-0.0.20180321-py3.5.egg/clickhouse_mysql/writer/csvwriter.py", line 145, in push
    self.next_writer_builder.get().insert(event)
  File "/root/pypi/pypy3.5-6.0.0-linux_x86_64-portable/site-packages/clickhouse_mysql-0.0.20180321-py3.5.egg/clickhouse_mysql/writer/chcsvwriter.py", line 63, in insert
    ', '.join(map(lambda column: '`%s`' % column, event.fieldnames)),
TypeError: map argument #1 must support iteration
2018-12-04 16:54:16,530/1543942456.530018:DEBUG:class:<class 'clickhouse_mysql.writer.processwriter.ProcessWriter'> process() done

The thing is, the file /root/ch_1543942455.9753294_c9b08f22-72ec-48f8-8b0a-f82e614de73c.csv does not exist at all.

Installing the package using MariaDB

Hello!

When trying to install clickhouse-mysql (RPM), an error occurs:

# yum install -y clickhouse-mysql
...
--> Finished Dependency Resolution
Error: Package: clickhouse-mysql-0.0.20180319-1.noarch (Altinity_clickhouse)
           Requires: mysql-community-devel
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Requires mysql-community-devel package:

Requires: mysql-community-devel

This package is not in the official MariaDB repository:

# yum install mysql-community-devel
...
No package mysql-community-devel available.
Error: Nothing to do

Isn't it enough that the mysql-devel package is installed on the system?

# yum install mysql-devel
...
Package MariaDB-devel-10.3.13-1.el7.centos.x86_64 already installed and latest version
Nothing to do

I had to fix the requirements and build the RPM package manually in order to install clickhouse-mysql-data-reader.

When you try to install the mysql-devel package for the MySQL Community Server, the mysql-community-devel package will be automatically installed.

https://mariadb.com/kb/en/library/mariadb-vs-mysql-compatibility/

MariaDB 10.3.13
CentOS Linux release 7.6.1810

Thanks!

Configure csvpool_file_path?

Greetings!

I notice that even when csvpool_keep_files is turned off lots of csv files are being created in the / folder. Most probably the insert statements. I suggest the introduction of an option csvpool_file_path, so they could be stored to a specific location on disk or tmpfs disk (RAM). The usage of RAM disk would make even more sense when there is no need to actually store these files. Data already inserted. Why bother? When files needed, some purge mechanism for older already inserted files may be a good idea. Tried setting csvpool_file_path to a folder, which did not change anything. Also tried setting csvpool_file_path_prefix with path - also to no avail. Will be inspecting the code to see why they are stored on the root folder. Not very wise for production, I think. Tried to also change WorkingDirectory in the service file, but that did not work, either. Will try again till I figure it out. Any recommendations?

Best regards!

Repeat pull data

After ./clickhouse-mysql is restarted, the data is pulled again from the beginning: clickhouse-mysql does not resume from the previous position, so the src-resume parameter appears to have no effect.

A lot of new features coming

Hi, team:

I have finished some features for this, but they are only on a private branch so far. I want to hear more requests and advice from everyone.

1. Support checkpointing on the initial table load.
   1. Requires MySQL and a sorted primary key.
2. Support UPDATE events.
   1. Simply translate the UPDATE event into an INSERT; this is friendly for `ReplacingMergeTree` (see the sketch after this list).
   2. Use `FINAL` on `SELECT`.
3. Planning to support DELETE events.
   1. Simply translate the DELETE event into an INSERT (always using soft delete).
   2. Requires the destination ClickHouse table to have a `deleted_at` field.
   3. Also requires an incrementing version key; currently only a `DateTime` field is supported.
4. Batch insert.

As we know, clickhouse-mysql processes each binlog event one by one in single-insert mode, which is very slow. This feature allows caching binlog events and inserting them in batches.
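For reference, a minimal sketch of what the UPDATE-as-INSERT idea implies on the ClickHouse side, written against clickhouse-driver; the table layout and the ver/deleted_at column names are illustrative assumptions, not the actual private-branch code:

    from datetime import datetime
    from clickhouse_driver import Client

    client = Client(host='127.0.0.1')

    # ReplacingMergeTree keeps the row with the highest `ver` per ORDER BY key,
    # so an UPDATE (or a soft DELETE) can be replayed as a plain INSERT.
    client.execute("""
        CREATE TABLE IF NOT EXISTS orders_rmt (
            id UInt64,
            amount Float64,
            deleted_at Nullable(DateTime),
            ver DateTime
        ) ENGINE = ReplacingMergeTree(ver)
        ORDER BY id
    """)

    # An "update" of row id=1 is just another insert carrying a newer version.
    client.execute(
        'INSERT INTO orders_rmt (id, amount, deleted_at, ver) VALUES',
        [(1, 10.0, None, datetime(2020, 1, 1)),
         (1, 12.5, None, datetime(2020, 1, 2))],
    )

    # FINAL collapses duplicates at read time; filtering on deleted_at gives soft deletes.
    print(client.execute('SELECT * FROM orders_rmt FINAL WHERE deleted_at IS NULL'))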

crash during mysql syncing

1. /usr/bin/clickhouse-server(DB::Exception::Exception(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)+0x22) [0x3019952]            
2. /usr/bin/clickhouse-server(DB::Context::getTableImpl(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DB::Exception*) const+0x1f3) [0x708b523]                                                                                   
3. /usr/bin/clickhouse-server(DB::Context::getTable(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const+0x73) [0x708b9f3]
4. /usr/bin/clickhouse-server(DB::InterpreterInsertQuery::getTable(DB::ASTInsertQuery const&)+0x191) [0x70edae1]                                                                 
5. /usr/bin/clickhouse-server(DB::InterpreterInsertQuery::execute()+0x63) [0x70ee8c3]
6. /usr/bin/clickhouse-server() [0x71a0727]
7. /usr/bin/clickhouse-server(DB::executeQuery(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum)+0x8a) [0x71a245a]
8. /usr/bin/clickhouse-server(DB::TCPHandler::runImpl()+0x4d9) [0x3027179]
9. /usr/bin/clickhouse-server(DB::TCPHandler::run()+0x2b) [0x302836b]
10. /usr/bin/clickhouse-server(Poco::Net::TCPServerConnection::start()+0xf) [0x96107df]
11. /usr/bin/clickhouse-server(Poco::Net::TCPServerDispatcher::run()+0x16a) [0x9610bba]
12. /usr/bin/clickhouse-server(Poco::PooledThread::run()+0x77) [0x972d6b7]
13. /usr/bin/clickhouse-server(Poco::ThreadImpl::runnableEntry(void*)+0x38) [0x9729228]
14. /usr/bin/clickhouse-server() [0x9e659ef]
15. /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f0dbe3286db]
16. /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f0dbd8a788f]


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/clickhouse-mysql-data-reader/clickhouse_mysql/tablemigrator.py", line 194, in migrate_one_table_data                                                               
    self.chwriter.insert(event)
  File "/root/clickhouse-mysql-data-reader/clickhouse_mysql/writer/chwriter.py", line 88, in insert                                                                              
    sys.exit(0)
SystemExit: 0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/clickhouse-mysql-data-reader/clickhouse_mysql/main.py", line 140, in run
    migrator.migrate_all_tables_data()
  File "/root/clickhouse-mysql-data-reader/clickhouse_mysql/tablemigrator.py", line 160, in migrate_all_tables_data                                                              
    self.migrate_one_table_data(db=db, table=table)
  File "/root/clickhouse-mysql-data-reader/clickhouse_mysql/tablemigrator.py", line 200, in migrate_one_table_data                                                               
    self.host,
AttributeError: 'TableMigrator' object has no attribute 'host'
=============
'TableMigrator' object has no attribute 'host'

Command:

python3 -m clickhouse_mysql.main \
    --src-server-id=1 \
    --src-wait \
    --nice-pause=1 \
    --src-host=172.31.x.x \
    --src-user=reader \
    --src-password=xx \
    --src-tables=x.x \
    --dst-host=192.168.x.x \
    --dst-schema=chain \
    --dst-table=chain_tx.chain_tx \
    --migrate-table \
    --pump-data \
    --csvpool

Special characters in clickhouse password causes CSV dump to fail

If one uses a ClickHouse account password with special characters and uses pump-data + CSV, then during the tail | clickhouse-client call (through Bash) the password gets interpreted by Bash, which causes an authentication failure.

2020-01-24 10:38:10,209/1579891090.209181:INFO:starting clickhouse-client process
2020-01-24 10:38:10,209/1579891090.209221:DEBUG:starting tail -n +2 'yyyyyyy_1579891090.202464_12a560ba-7819-4d01-a114-053757d98264.csv' | clickhouse-client --host=127.0.0.1 --port=9000 --user=default --password=xxxxxxxx --query='INSERT INTO ....... FORMAT CSV' Code: 193. DB::Exception: Received from 127.0.0.1:9000. DB::Exception: Wrong password for user default.

tail: write error: Broken pipe
2020-01-24 10:38:10,230/1579891090.230426:DEBUG:class:<class 'clickhouse_mysql.writer.processwriter.ProcessWriter'> process() done
2020-01-24 10:38:11,225/1579891091.225788:DEBUG:Check events in binlog stream

A proposed fix would be to single-quote the password during argument construction (currently: choptions += " --password=" + self.password):

            if self.password:
                choptions += " --password='" + self.password + "'"
            bash = "tail -n +2 '{0}' | clickhouse-client {1} --query='{2}'".format(
                event.filename,
                choptions,
                sql,
            )
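A slightly more defensive variant of the same idea, sketched as a standalone helper (build_bash_command is a hypothetical name, not the actual chcsvwriter code): shlex.quote also handles passwords that themselves contain single quotes, which manual quoting would still break on.

    import shlex

    # Sketch only: quote everything that is interpolated into the shell command,
    # so quotes and other special characters in the password, filename or query
    # survive the bash invocation intact.
    def build_bash_command(filename, sql, host, port, user, password):
        choptions = "--host=" + shlex.quote(host)
        choptions += " --port=" + str(port)
        choptions += " --user=" + shlex.quote(user)
        if password:
            choptions += " --password=" + shlex.quote(password)
        return "tail -n +2 {0} | clickhouse-client {1} --query={2}".format(
            shlex.quote(filename), choptions, shlex.quote(sql))

    print(build_bash_command("rows.csv", "INSERT INTO t FORMAT CSV",
                             "127.0.0.1", 9000, "default", "p@ss'word"))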

Thanks

unrecognized arguments: --dst-create-table --migrate-table --pump-data

root@coin-base:~/clickhouse-mysql-data-reader# clickhouse-mysql \
>     --src-server-id=1 \
>     --src-wait \
>     --nice-pause=1 \
>     --src-host=172.31.220.47 \
>     --src-user=reader \
>     --src-password=aaa \
>     --src-tables=chain_tx.chain_tx \
>     --dst-host=127.0.0.1 \
>     --dst-create-table \
>     --migrate-table \
>     --pump-data \
>     --csvpool

any hints?

error occurs while connecting to a source mysql instance on a non-default port

When I connect to a source MySQL instance whose port is 30307, this error occurs; when the MySQL instance port is 3306, there is no error. I think the parameter --src-port=30307 is being ignored.
./bin/pypy ./bin/clickhouse-mysql --log-file=/dev/null --src-host=10.105.2.121 --src-user=reader --src-password=qwerty --src-port=30307 --table-templates --src-schemas=linyue --src-tables=t

Traceback (most recent call last):
File "/opt/pypy3-v5.10.0-linux64/site-packages/clickhouse_mysql/tableprocessor.py", line 82, in connect
cursorclass=self.cursorclass,
File "/opt/pypy3-v5.10.0-linux64/site-packages/MySQLdb/init.py", line 86, in Connect
return Connection(*args, **kwargs)
File "/opt/pypy3-v5.10.0-linux64/site-packages/MySQLdb/connections.py", line 204, in init
super(Connection, self).init(*args, **kwargs2)
_mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on '10.105.2.121' (111)")

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/pypy3-v5.10.0-linux64/site-packages/clickhouse_mysql/main.py", line 40, in run
templates = self.config.table_builder().templates(self.config.is_table_templates_json())
File "/opt/pypy3-v5.10.0-linux64/site-packages/clickhouse_mysql/tablebuilder.py", line 23, in templates
templates[db][table] = self.create_table_description(db=db, table=table, json=json)
File "/opt/pypy3-v5.10.0-linux64/site-packages/clickhouse_mysql/tablebuilder.py", line 35, in create_table_description
columns_description = self.create_table_columns_description(db=db, table=table)
File "/opt/pypy3-v5.10.0-linux64/site-packages/clickhouse_mysql/tablebuilder.py", line 85, in create_table_columns_description
self.connect(db=db)
File "/opt/pypy3-v5.10.0-linux64/site-packages/clickhouse_mysql/tableprocessor.py", line 90, in connect
db
Exception: Can not connect to the database host=10.105.2.121 user=reader password=qwerty db=linyue

Can not connect to the database host=10.105.2.121 user=reader password=qwerty db=linyue

Type mismatch in VALUES section

From log

Expected=8.  Actual=2.  Position: 187.  Data Length: 189
2020-05-22 05:36:29,351/1590125789.351507:CRITICAL:QUERY FAILED
2020-05-22 05:36:29,351/1590125789.351809:CRITICAL:ex=Code: 53. Type mismatch in VALUES section. Repeat query with types_check=True for detailed info. Column itemid: required argument is not an integer
2020-05-22 05:36:29,351/1590125789.351931:CRITICAL:sql=INSERT INTO `zabbix_i88`.`history_uint` (`itemid`, `clock`, `value`, `ns`) VALUES
2020-05-22 05:37:30,412/1590125850.412596:WARNING:Got an exception, skip it in blocking mode
2020-05-22 05:37:30,413/1590125850.413086:WARNING:Result length not requested length:
Expected=8.  Actual=2.  Position: 187.  Data Length: 189
2020-05-22 05:37:31,451/1590125851.451168:WARNING:Got an exception, skip it in blocking mode
2020-05-22 05:37:31,451/1590125851.451454:WARNING:Result length not requested length:
Expected=4.  Actual=2.  Position: 8212.  Data Length: 8214
2020-05-22 05:37:32,814/1590125852.814633:CRITICAL:QUERY FAILED
2020-05-22 05:37:32,815/1590125852.815046:CRITICAL:ex=Code: 53. Type mismatch in VALUES section. Repeat query with types_check=True for detailed info. Column itemid: required argument is not an integer
2020-05-22 05:37:32,815/1590125852.815285:CRITICAL:sql=INSERT INTO `zabbix_i88`.`history_uint` (`itemid`, `clock`, `value`, `ns`) VALUES
2020-05-22 05:38:33,900/1590125913.900565:WARNING:Got an exception, skip it in blocking mode
2020-05-22 05:38:33,900/1590125913.900909:WARNING:Result length not requested length:
Expected=4.  Actual=2.  Position: 8212.  Data Length: 8214
2020-05-22 05:38:34,910/1590125914.910961:WARNING:Got an exception, skip it in blocking mode
2020-05-22 05:38:34,911/1590125914.911283:WARNING:Result length not requested length:
Expected=8.  Actual=1.  Position: 813.  Data Length: 814
2020-05-22 05:38:35,985/1590125915.985317:WARNING:Got an exception, skip it in blocking mode
2020-05-22 05:38:35,985/1590125915.985610:WARNING:Result length not requested length:
Expected=8.  Actual=2.  Position: 8212.  Data Length: 8214
2020-05-22 05:38:37,382/1590125917.382241:CRITICAL:QUERY FAILED
2020-05-22 05:38:37,382/1590125917.382560:CRITICAL:ex=Code: 53. Type mismatch in VALUES section. Repeat query with types_check=True for detailed info. Column itemid: required argument is not an integer
2020-05-22 05:38:37,382/1590125917.382715:CRITICAL:sql=INSERT INTO `zabbix_i88`.`history_uint` (`itemid`, `clock`, `value`, `ns`) VALUES
2020-05-22 05:39:38,443/1590125978.443542:WARNING:Got an exception, skip it in blocking mode
2020-05-22 05:39:38,443/1590125978.443890:WARNING:Result length not requested length:
Expected=8.  Actual=2.  Position: 8212.  Data Length: 8214
2020-05-22 05:39:39,462/1590125979.462185:WARNING:Got an exception, skip it in blocking mode
2020-05-22 05:39:39,462/1590125979.462625:WARNING:Result length not requested length:
Expected=4.  Actual=2.  Position: 812.  Data Length: 814
2020-05-22 05:39:40,536/1590125980.536540:WARNING:Got an exception, skip it in blocking mode
2020-05-22 05:39:40,536/1590125980.536929:WARNING:Result length not requested length:
Expected=4.  Actual=2.  Position: 8212.  Data Length: 8214
2020-05-22 05:39:41,932/1590125981.932213:CRITICAL:QUERY FAILED
2020-05-22 05:39:41,932/1590125981.932589:CRITICAL:ex=Code: 53. Type mismatch in VALUES section. Repeat query with types_check=True for detailed info. Column itemid: required argument is not an integer
2020-05-22 05:39:41,932/1590125981.932733:CRITICAL:sql=INSERT INTO `zabbix_i88`.`history_uint` (`itemid`, `clock`, `value`, `ns`) VALUES
2020-05-22 05:40:43,056/1590126043.056870:WARNING:Got an exception, skip it in blocking mode
2020-05-22 05:40:43,057/1590126043.057307:WARNING:Result length not requested length:
Expected=4.  Actual=2.  Position: 8212.  Data Length: 8214
2020-05-22 05:40:44,073/1590126044.073156:WARNING:Got an exception, skip it in blocking mode
2020-05-22 05:40:44,073/1590126044.073462:WARNING:Result length not requested length:
Expected=4.  Actual=2.  Position: 812.  Data Length: 814
2020-05-22 05:40:45,141/1590126045.141290:WARNING:Got an exception, skip it in blocking mode
2020-05-22 05:40:45,141/1590126045.141573:WARNING:Result length not requested length:
Expected=4.  Actual=2.  Position: 8212.  Data Length: 8214
2020-05-22 05:40:46,519/1590126046.519466:CRITICAL:QUERY FAILED
2020-05-22 05:40:46,519/1590126046.519791:CRITICAL:ex=Code: 53. Type mismatch in VALUES section. Repeat query with types_check=True for detailed info. Column itemid: required argument is not an integer
2020-05-22 05:40:46,519/1590126046.519940:CRITICAL:sql=INSERT INTO `zabbix_i88`.`history_uint` (`itemid`, `clock`, `value`, `ns`) VALUES

From binlog (this sample position)

/*!80011 SET @@session.default_collation_for_utf8mb4=255*//*!*/;
BEGIN
/*!*/;
# at 90742723
#200522  4:05:37 server id 1  end_log_pos 90742790 CRC32 0xaea15f3c     Table_map: `zabbix_i88`.`history_uint` mapped to number 3568
# at 90742790
#200522  4:05:37 server id 1  end_log_pos 90742978 CRC32 0xebdfe8cd     Write_rows: table id 3568
# at 90742978
#200522  4:05:37 server id 1  end_log_pos 90743116 CRC32 0x563c8236     Write_rows: table id 3568 flags: STMT_END_F
### Extra row info for partitioning: partition: 31
### INSERT INTO `zabbix_i88`.`history_uint`
### SET
###   @1=31109 /* LONGINT meta=0 nullable=0 is_null=0 */
###   @2=1590075509 /* INT meta=0 nullable=0 is_null=0 */
###   @3=2693000000 /* LONGINT meta=0 nullable=0 is_null=0 */
###   @4=603240000 /* INT meta=0 nullable=0 is_null=0 */
### INSERT INTO `zabbix_i88`.`history_uint`
### SET
###   @1=31109 /* LONGINT meta=0 nullable=0 is_null=0 */
###   @2=1590079109 /* INT meta=0 nullable=0 is_null=0 */
###   @3=2693000000 /* LONGINT meta=0 nullable=0 is_null=0 */
###   @4=314724351 /* INT meta=0 nullable=0 is_null=0 */
### INSERT INTO `zabbix_i88`.`history_uint`
### SET
###   @1=31109 /* LONGINT meta=0 nullable=0 is_null=0 */
###   @2=1590082709 /* INT meta=0 nullable=0 is_null=0 */
###   @3=2693000000 /* LONGINT meta=0 nullable=0 is_null=0 */
###   @4=818740007 /* INT meta=0 nullable=0 is_null=0 */
### INSERT INTO `zabbix_i88`.`history_uint`
### SET
###   @1=31109 /* LONGINT meta=0 nullable=0 is_null=0 */
###   @2=1590086309 /* INT meta=0 nullable=0 is_null=0 */
###   @3=2693000000 /* LONGINT meta=0 nullable=0 is_null=0 */
###   @4=254728503 /* INT meta=0 nullable=0 is_null=0 */
### INSERT INTO `zabbix_i88`.`history_uint`
### SET
###   @1=31109 /* LONGINT meta=0 nullable=0 is_null=0 */
###   @2=1590089909 /* INT meta=0 nullable=0 is_null=0 */
###   @3=2693000000 /* LONGINT meta=0 nullable=0 is_null=0 */
###   @4=219237073 /* INT meta=0 nullable=0 is_null=0 */

Config file

log_file=/var/log/clickhouse-mysql/main.log
log_level=WARNING
nice_pause=1  # pause in seconds between binlog read attempts
#dry=yes
daemon=False
pid_file=/var/run/clickhouse-client.pid
binlog_position_file=/var/log/clickhouse-mysql/clickhouse-mysql-binlog.pos

mempool=yes
mempool_max_events_num=10000
mempool_max_rows_num=1000
mempool_max_flush_interval=5
csvpool=no
csvpool_file_path_prefix=/tmp/clickhouse-mysql/zbx__
csvpool_keep_files=no

#migrate_table=yes
#Copy existing data from MySQL table(s) with SELECT statement
#Binlog is not read during this procedure - just copy data from the src table(s). 

pump_data=yes
#Use in combination with --src-wait in case would like to continue and wait for new rows 
src_wait=yes
src_resume=yes # resume replication if binlog_position_file is set

#
# src section
#
src_server_id=1
src_host=172.31.135.38
src_port=3306
src_user="reader"
src_password="**************8"
src_schemas="zabbix_i88"
src_tables=history_log,history,history_str,history_text,history_uint,trends,trends_uint

#src_file=
#src_binlog_file=mysql-bin.000001 # starting binlog position; if not set, it is taken from binlog_position_file
#src_binlog_position=28579794

#
# dst section
#
dst_host=172.*******
dst_port=9000
dst_user="zabbix"
dst_password="***********#"
dst_schema="zabbix_i88"
dst_create_table=no
dst_table_prefix=""

create table history_uint
(
	itemid bigint unsigned not null,
	clock int default 0 not null,
	value bigint unsigned default 0 not null,
	ns int default 0 not null
);
create index history_uint_1
	on history_uint (itemid, clock);
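One way to narrow this down is to follow the error text's own suggestion and replay a single failing INSERT with types_check=True, which makes clickhouse-driver report exactly which value it cannot coerce. A minimal sketch (host and credentials are placeholders; the sample row is taken from the binlog excerpt above):

    from clickhouse_driver import Client

    client = Client(host='127.0.0.1', user='zabbix', password='...')
    # types_check=True asks clickhouse-driver to validate every value against
    # the column type and name the offending column in the exception.
    client.execute(
        'INSERT INTO `zabbix_i88`.`history_uint` (`itemid`, `clock`, `value`, `ns`) VALUES',
        [(31109, 1590075509, 2693000000, 603240000)],
        types_check=True,
    )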

trying to import data from mysql but failing

I have configured both MySQL and ClickHouse; when I try to import data, I get the following error:

2021-04-27 12:08:01,107/1619507281.107731:DEBUG:class:<class 'clickhouse_mysql.writer.chwriter.CHWriter'> insert 1 event(s)
2021-04-27 12:08:01,481/1619507281.481600:DEBUG:class:<class 'clickhouse_mysql.writer.chwriter.CHWriter'> insert 100000 row(s)
2021-04-27 12:08:01,481/1619507281.481838:DEBUG:schema=vpbx table=incoming_calls self.dst_schema=None self.dst_table=None
2021-04-27 12:08:01,482/1619507281.482273:DEBUG:Connecting. Database: default. User: default
2021-04-27 12:08:01,482/1619507281.482419:DEBUG:Connecting to 127.0.0.1:9000
2021-04-27 12:08:01,484/1619507281.484395:DEBUG:Connected to ClickHouse server version 21.3.4, revision: 54447
2021-04-27 12:08:01,485/1619507281.485070:DEBUG:Query: INSERT INTO `vpbx`.`incoming_calls` (`id`, `client_id`, `date`, `time`, `cids`, `datetime`, `did_num`, `line_status`, `duration`, `billing_minutes`, `Last_App`, `Last_App_ID`, `Last_App_Data`, `call-status`, `dtmf`, `Caller_ID`, `Recd_File_Path`, `agent_name`, `agent_no`, `Lead_Check`) VALUES
2021-04-27 12:08:01,485/1619507281.485458:DEBUG:Block "" send time: 0.000073
2021-04-27 12:08:01,904/1619507281.904045:CRITICAL:QUERY FAILED
2021-04-27 12:08:01,904/1619507281.904250:CRITICAL:ex='NoneType' object has no attribute 'encode'
2021-04-27 12:08:01,904/1619507281.904330:CRITICAL:sql=INSERT INTO `vpbx`.`incoming_calls` (`id`, `client_id`, `date`, `time`, `cids`, `datetime`, `did_num`, `line_status`, `duration`, `billing_minutes`, `Last_App`, `Last_App_ID`, `Last_App_Data`, `call-status`, `dtmf`, `Caller_ID`, `Recd_File_Path`, `agent_name`, `agent_no`, `Lead_Check`) VALUES

Please advise.

Lost connection to MySQL server during query

While trying to import data from a MySQL table into ClickHouse, the execution stops with this error:

VALUES
2021-06-18 22:58:56,737/1624057136.737545:DEBUG:Block "" send time: 0.000042
2021-06-18 22:59:40,788/1624057180.788412:DEBUG:Block "" send time: 37.195755
2021-06-18 22:59:40,798/1624057180.798683:DEBUG:Block "" send time: 0.006171
2021-06-18 23:02:19,345/1624057339.345471:CRITICAL:Critical error: (2013, 'Lost connection to MySQL server during query')
2021-06-18 23:02:19,349/1624057339.349685:CRITICAL:Can not migrate table on db=XXX table=XXX
=============
Traceback (most recent call last):
  File "/home/ec2-user/.local/lib/python3.7/site-packages/clickhouse_mysql/tablemigrator.py", line 203, in migrate_one_table_data
    rows = self.client.cursor.fetchmany(self.pool_max_rows_num)
  File "/home/ec2-user/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 435, in fetchmany
    r = self._fetch_row(size or self.arraysize)
  File "/home/ec2-user/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 328, in _fetch_row
    return self._result.fetch_row(size, self._fetch_type)
MySQLdb._exceptions.OperationalError: (2013, 'Lost connection to MySQL server during query')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ec2-user/.local/lib/python3.7/site-packages/clickhouse_mysql/main.py", line 140, in run
    migrator.migrate_all_tables_data()
  File "/home/ec2-user/.local/lib/python3.7/site-packages/clickhouse_mysql/tablemigrator.py", line 177, in migrate_all_tables_data
    self.migrate_one_table_data(db=db, table=table)
  File "/home/ec2-user/.local/lib/python3.7/site-packages/clickhouse_mysql/tablemigrator.py", line 220, in migrate_one_table_data
    table,
Exception: Can not migrate table on db=ns_dev_data_db table=cdm
=============
Can not migrate table on db=XXX table=XXX
[ec2-user@ip-172-31-19-8 ~]$ 

Any param I can change to import smaller batches or something?
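Judging from the traceback, the migrator already fetches in batches via cursor.fetchmany(self.pool_max_rows_num), which appears to correspond to the mempool_max_rows_num option, so lowering that is one knob to try. For comparison, a rough standalone sketch of the same streaming pattern using MySQLdb's server-side cursor, which avoids buffering the whole result set on the client (connection settings and table name are placeholders):

    import MySQLdb
    import MySQLdb.cursors

    # SSCursor streams rows from the server instead of materialising the whole
    # SELECT result in client memory before fetchmany() is called.
    conn = MySQLdb.connect(host="127.0.0.1", user="reader", passwd="...",
                           db="some_db", cursorclass=MySQLdb.cursors.SSCursor)
    cur = conn.cursor()
    cur.execute("SELECT * FROM some_table")
    while True:
        rows = cur.fetchmany(1000)  # smaller batches, analogous to mempool_max_rows_num
        if not rows:
            break
        # ... hand the batch to the ClickHouse writer here ...
    conn.close()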

Connect to clickhouse with a TLS

We need to connect to a ClickHouse instance in Yandex.Cloud.
This requires TLS (with a certificate), plus a username and password. Is that possible?
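clickhouse-mysql talks to ClickHouse through clickhouse-driver, and the driver itself does support TLS on the native protocol; whether the clickhouse-mysql CLI exposes these options is a separate question. A hedged sketch of the underlying connection (host, credentials and certificate path are placeholders; managed ClickHouse typically serves the secure native protocol on port 9440):

    from clickhouse_driver import Client

    # secure=True enables TLS on the native protocol; ca_certs points at the CA
    # bundle used to verify the server certificate.
    client = Client(
        host='clickhouse.example.com',
        port=9440,
        user='user',
        password='password',
        secure=True,
        verify=True,
        ca_certs='/path/to/ca.pem',
    )
    print(client.execute('SELECT 1'))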

Enum fields are not supported by create table

2019-06-21 14:57:36,391/1561129056.391024:DEBUG:Query: CREATE TABLE IF NOT EXISTS DEV.candidates (
id String,
created_at DateTime,
updated_at Nullable(DateTime),
last_name String,
first_name String,
other_name Nullable(String),
gender Enum16,
nationality Enum16,
date_born Date,
address String,
phone String,
other_phone String,
email Nullable(String),
education_id String,
user_id String,
search_full_name String,
by_robot Nullable(Int8)
)

2019-06-21 14:57:36,391/1561129056.391306:DEBUG:Block send time: 0.000087
2019-06-21 14:57:36,392/1561129056.392637:CRITICAL:Code: 92.
DB::Exception: Enum data type cannot be empty. Stack trace:

The script does not get the enum values from MySQL.
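For context, ClickHouse rejects the generated DDL because an Enum type must enumerate its values explicitly, so a bare Enum16 really is "empty"; producing a valid column would require reading the allowed values out of the MySQL enum definition. An illustration, with an invented value mapping, of what ClickHouse does accept:

    from clickhouse_driver import Client

    client = Client(host='127.0.0.1')
    # Hypothetical value mapping: ClickHouse Enum columns must list name = value
    # pairs rather than a bare Enum16.
    client.execute("""
        CREATE TABLE IF NOT EXISTS candidates_enum_demo (
            id String,
            gender Enum16('unknown' = 0, 'male' = 1, 'female' = 2)
        ) ENGINE = Log
    """)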

failed with error 'NoneType' object has no attribute 'encode'

2018-01-19 14:59:18,028/1516373958.028013:CRITICAL:QUERY FAILED
2018-01-19 14:59:18,028/1516373958.028283:CRITICAL:ex='NoneType' object has no attribute 'encode'
2018-01-19 14:59:18,028/1516373958.028551:CRITICAL:sql=INSERT INTO record_innodb.mtrack_record_201108 (record_id, record_campId, record_affiliateId, record_creativeId, record_time, record_cookie, record_IP, record_browser, record_SUBID1, record_SUBID2, record_SUBID3, record_SUBID4, record_SUBID5, record_click, record_clickTime, record_referrer, record_affOptInfo, record_country, record_region, record_city, record_post_code, record_impression, record_impressionTime, record_lead, record_leadTime, record_weGet, record_theyGet, record_adjust1, record_adjust2, record_adjust3) VALUES
2018-01-19 14:59:18,029/1516373958.029071:CRITICAL:rows=[{'record_id': 186093083305, 'record_campId': 33472, 'record_affiliateId': 42609, 'record_creativeId': 404668, 'record_time': datetime.datetime(2018, 1, 19, 15, 59, 8), 'record_cookie': 'a2dd4bd8-b577-5158-9718-c41f8e720de6', 'record_IP': '171.253.190.70', 'record_browser': 'Mozilla/5.0 (Linux; Android 6.0; mobiistar PRIME X1 Build/MRA58K; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/63.0.3239.111 Mobile Safari/537.36', 'record_SUBID1': '4ae53129-609b-4d97-b51c-1b727bb70949-1516373947991', 'record_SUBID2': '2152', 'record_SUBID3': '', 'record_SUBID4': '', 'record_SUBID5': '', 'record_click': 1, 'record_clickTime': datetime.datetime(2018, 1, 19, 15, 59, 8), 'record_referrer': '', 'record_affOptInfo': '', 'record_country': 'VN', 'record_region': '20', 'record_city': 'Ho Chi Minh City', 'record_post_code': None, 'record_impression': 1, 'record_impressionTime': datetime.datetime(1970, 1, 1, 0, 0), 'record_lead': 0, 'record_leadTime': datetime.datetime(1970, 1, 1, 0, 0), 'record_weGet': Decimal('0.00'), 'record_theyGet': Decimal('0.00'), 'record_adjust1': 0, 'record_adjust2': 0, 'record_adjust3': 0}]
2018-01-19 14:59:18,029/1516373958.029552:INFO:start 1516373956
2018-01-19 14:59:18,029/1516373958.029826:INFO:end 1516373958
2018-01-19 14:59:18,030/1516373958.030097:INFO:len 2
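For reference, the row dump above contains 'record_post_code': None, and a NULL reaching a non-Nullable String column is the usual way clickhouse-driver ends up calling .encode on None. Two hedged workarounds rather than a confirmed fix: declare such columns Nullable(String) on the ClickHouse side, or blank out the NULLs before insert, e.g.:

    # Sketch: replace None in the row dicts with empty strings before they are
    # handed to clickhouse-driver; the sample row mirrors the log above.
    rows = [{'record_id': 186093083305, 'record_post_code': None}]
    cleaned = [
        {key: ('' if value is None else value) for key, value in row.items()}
        for row in rows
    ]
    print(cleaned)  # [{'record_id': 186093083305, 'record_post_code': ''}]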

No errors in case of MySQL schema mismatch

Hello,

Just found an issue: we've added a new column to the table being replicated via clickhouse-mysql to Clickhouse. Clickhouse did not have that column.

The replication immediately stopped but there were no error messages in the log. In fact, everything was exactly as during normal operations.

The loglevel is set to 'INFO'.

Multi instance for clickhouse-mysql

Hi,

Currently the clickhouse-mysql utility works as a service with one instance only (it can handle only one MySQL table).

We need to have the possibility to define more than one instance in the config file so the service will be able to sync multiple tables.

run clickhouse-mysql get error ImportError: cannot import name 'charset_to_encoding' from 'pymysql.charset'

[root@mytestuse pymysql]# clickhouse-mysql
Traceback (most recent call last):
File "/usr/local/bin/clickhouse-mysql", line 5, in
from clickhouse_mysql import main
File "/usr/local/python3/lib/python3.7/site-packages/clickhouse_mysql/init.py", line 4, in
from .main import Main
File "/usr/local/python3/lib/python3.7/site-packages/clickhouse_mysql/main.py", line 21, in
from clickhouse_mysql.config import Config
File "/usr/local/python3/lib/python3.7/site-packages/clickhouse_mysql/config.py", line 4, in
from clickhouse_mysql.reader.mysqlreader import MySQLReader
File "/usr/local/python3/lib/python3.7/site-packages/clickhouse_mysql/reader/mysqlreader.py", line 8, in
from pymysqlreplication import BinLogStreamReader
File "/usr/local/python3/lib/python3.7/site-packages/pymysqlreplication/init.py", line 23, in
from .binlogstream import BinLogStreamReader
File "/usr/local/python3/lib/python3.7/site-packages/pymysqlreplication/binlogstream.py", line 10, in
from .packet import BinLogPacketWrapper
File "/usr/local/python3/lib/python3.7/site-packages/pymysqlreplication/packet.py", line 7, in
from pymysqlreplication import constants, event, row_event
File "/usr/local/python3/lib/python3.7/site-packages/pymysqlreplication/row_event.py", line 9, in
from pymysql.charset import charset_to_encoding
ImportError: cannot import name 'charset_to_encoding' from 'pymysql.charset' (/usr/local/python3/lib/python3.7/site-packages/pymysql/charset.py)
[root@mytestuse pymysql]# pip3 list
Package Version


clickhouse-driver 0.1.4
clickhouse-mysql 0.0.20200128
configobj 5.0.6
mysql-replication 0.21
mysqlclient 2.0.1
pip 20.2.3
PyMySQL 0.10.0
pytz 2020.1
setuptools 47.1.0
six 1.15.0
tzlocal 2.1
[root@mytestuse pymysql]#

Connection via proxysql

Is there a way to work through ProxySQL? The standard schema is

clickhouse ---> proxysql ---> mysql master 1

Can't call datareader as an app? Install option not working...

Hi!

It seems I can't run data reader as a server.

which clickhouse-mysql
/usr/local/bin/clickhouse-mysql

/usr/local/bin/clickhouse-mysql --install
Traceback (most recent call last):
File "/usr/local/bin/clickhouse-mysql", line 5, in
from clickhouse_mysql import main
File "/usr/local/lib/python3.8/dist-packages/clickhouse_mysql/init.py", line 4, in
from .main import Main
File "/usr/local/lib/python3.8/dist-packages/clickhouse_mysql/main.py", line 21, in
from clickhouse_mysql.config import Config
File "/usr/local/lib/python3.8/dist-packages/clickhouse_mysql/config.py", line 4, in
from clickhouse_mysql.reader.mysqlreader import MySQLReader
File "/usr/local/lib/python3.8/dist-packages/clickhouse_mysql/reader/mysqlreader.py", line 8, in
from pymysqlreplication import BinLogStreamReader
File "/usr/local/lib/python3.8/dist-packages/pymysqlreplication/init.py", line 23, in
from .binlogstream import BinLogStreamReader
File "/usr/local/lib/python3.8/dist-packages/pymysqlreplication/binlogstream.py", line 9, in
from pymysql.util import int2byte
ModuleNotFoundError: No module named 'pymysql.util'

Any ideas how to solve this? OS: Ubuntu 20.04.2, MySQL 5.7.
Thanks!

broken when trying to import data from mysql to clickhouse

Hi, I am trying to use clickhouse-mysql-data-reader to set up MySQL to ClickHouse replication, but when I start clickhouse-mysql it fails to import the data; row 5 of the data causes clickhouse-mysql to break, and I do not know why.

2019-01-16 15:03:01,284/1547622181.284754:DEBUG:Query: CREATE TABLE IF NOT EXISTS `xxx`.`xxx` (
    `primary_date_field` Date default today(),
    `id` Int32,
    `creator` Nullable(Int32),
    `modifier` Nullable(Int32),
    `gmt_create` DateTime,
    `pay_time` Nullable(DateTime),
    `gmt_modified` DateTime,
    `is_deleted` Nullable(String),
    `shop_id` Int32,
    `order_type` Int32,
    `parent_id` Nullable(Int32),
    `order_sn` String,
    `customer_id` Nullable(Int32),
    `customer_car_id` Int32,
    `expected_time` Nullable(DateTime),
    `goods_amount` Nullable(String),
    `service_amount` Nullable(String),
    `tax_amount` Nullable(String),
    `total_amount` Nullable(String),
    `postscript` Nullable(String),
    `order_status` String,
    `car_license` String,
    `car_brand_id` Nullable(Int32),
    `car_series_id` Nullable(Int32),
    `car_power_id` Nullable(Int32),
    `car_year_id` Nullable(Int32),
    `car_models_id` Nullable(Int32),
    `car_brand` Nullable(String),
    `car_series` Nullable(String),
    `car_power` Nullable(String),
    `car_year` Nullable(String),
    `car_models` Nullable(String),
    `car_company` Nullable(String),
    `import_info` Nullable(String),
    `customer_name` Nullable(String),
    `customer_mobile` Nullable(String),
    `vin` Nullable(String),
    `engine_no` Nullable(String),
    `receiver` Int32,
    `invoice_type` Nullable(Int32),
    `discount` Nullable(String),
    `receiver_name` Nullable(String),
    `operator_name` Nullable(String),
    `order_amount` Nullable(String),
    `pay_status` UInt8,
    `finish_time` Nullable(DateTime),
    `goods_count` Nullable(Int32),
    `service_count` Nullable(Int32),
    `car_alias` Nullable(String),
    `pre_discount_rate` Nullable(String),
    `pre_tax_amount` Nullable(String),
    `pre_preferentia_amount` Nullable(String),
    `pre_coupon_amount` Nullable(String),
    `pre_total_amount` Nullable(String),
    `pay_amount` Nullable(String),
    `sign_amount` Nullable(String),
    `other_insurance_company_id` Nullable(Int32),
    `other_insurance_company_name` Nullable(String),
    `goods_discount` Nullable(String),
    `service_discount` Nullable(String),
    `fee_amount` Nullable(String),
    `fee_discount` Nullable(String),
    `contact_name` Nullable(String),
    `contact_mobile` Nullable(String),
    `insurance_company_id` Nullable(Int32),
    `insurance_company_name` Nullable(String),
    `mileage` Nullable(String),
    `car_color` Nullable(String),
    `buy_time` Nullable(DateTime),
    `customer_address` Nullable(String),
    `oil_meter` Nullable(String),
    `is_notice` Nullable(Int8),
    `is_visit` Nullable(Int32),
    `is_lock` Nullable(String),
    `img_url` Nullable(String),
    `order_tag` UInt8,
    `ver` Nullable(String),
    `refer` UInt8,
    `upkeep_mileage` Nullable(String),
    `create_time` DateTime,
    `proxy_type` Int8,
    `channel_id` Int32,
    `channel_name` String,
    `order_discount_amount` Nullable(String),
    `confirm_time` DateTime,
    `expect_time` Nullable(DateTime),
    `down_payment` String,
    `company` Nullable(String)
) ENGINE = MergeTree(primary_date_field, (id,gmt_create,gmt_modified,order_type,order_sn,customer_car_id,order_status,car_license,receiver,pay_status,confirm_time), 8192)

......

Row 5:
Column 0,   name: buy_time,                     type: Nullable(DateTime), parsed text: ",,??,32,,,?A4369G,C"
ERROR: garbage after Nullable(DateTime): "addy [??],"



tail: write error: Broken pipe
2019-01-16 15:03:01,895/1547622181.895748:DEBUG:class:<class 'clickhouse_mysql.writer.processwriter.ProcessWriter'> process() done


I checked the CSV file; row 5 contains the following text:

47352 1 1 2018-05-10 17:59:07 2018-05-10 17:59:07 2018-05-10 17:59:07 N 1 0 0 C000011805100001 411715 3455 20 \N 12.00 100.00 0.00 112.00 测试11 DDYFK 苏A4369G 32 446 0 0 84597 大众 (进口) Caddy [开迪] Caddy [开迪] \N wang 1G1BL52P7TR115520 1 0 0.00 tqmall tqmall 112.00 1 \N 1 1 1.00000000 0.00 0.00 0.00 112.00 112.00 112.00 0 \N 0.00 0.00 0.00 0.00 wang 0 \N \N \N 0 0 N \N 3 \N 0 2018-05-10 17:57:00 0 0 0.00 2018-05-10 17:59:07 \N 0.00

errors while replicating

I am trying to replicate the following table from MySQL

CREATE TABLE `acct` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `stamp_inserted` datetime NOT NULL,
  `ip_src` char(45) NOT NULL,
  `ip_dst` char(45) NOT NULL,
  `src_port` int(2) unsigned NOT NULL,
  `dst_port` int(2) unsigned NOT NULL,
  `in_bytes` int(10) unsigned NOT NULL,
  `out_bytes` int(10) unsigned NOT NULL,
  `src_host` varchar(1024) DEFAULT NULL,
  `dst_host` varchar(1024) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1391121 DEFAULT CHARSET=latin1

to a ClickHouse table

CREATE TABLE default.acct ( id UInt32,  stamp_inserted DateTime,  ip_src String,  ip_dst String,  src_port UInt16,  dst_port UInt16,  in_bytes UInt32,  out_bytes UInt32,  src_host String,  dst_host String) ENGINE = MergeTree PARTITION BY toMonday(stamp_inserted) ORDER BY (stamp_inserted, ip_dst) SETTINGS index_granularity = 8192

my script is

python3 -m clickhouse_mysql.main \
    --src-resume \
    --src-wait \
    --nice-pause=1 \
    --log-level=info \
    --log-file=acct.log \
    --src-host=127.0.0.1 \
    --src-user=reader \
    --src-password=qwerty \
    --dst-host=127.0.0.1 \
    --csvpool \
    --csvpool-file-path-prefix=qwe_ \
    --mempool-max-flush-interval=60 \
    --mempool-max-events-num=10000

While it is running and events are arriving in MySQL, the script fails with:

Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/root/clickhousr/click-mysql/clickhouse-mysql-data-reader/clickhouse_mysql/writer/processwriter.py", line 30, in process   
    writer.insert(event_or_events)
  File "/root/clickhousr/click-mysql/clickhouse-mysql-data-reader/clickhouse_mysql/writer/csvwriter.py", line 83, in insert
    self.fieldnames = sorted(events[0].column_names())
  File "/root/clickhousr/click-mysql/clickhouse-mysql-data-reader/clickhouse_mysql/event/event.py", line 53, in column_names
    return self.mysql_event.rows[0]['values'].keys()
IndexError: list index out of range
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/root/clickhousr/click-mysql/clickhouse-mysql-data-reader/clickhouse_mysql/writer/processwriter.py", line 30, in process
    writer.insert(event_or_events)
  File "/root/clickhousr/click-mysql/clickhouse-mysql-data-reader/clickhouse_mysql/writer/csvwriter.py", line 83, in insert
    self.fieldnames = sorted(events[0].column_names())
  File "/root/clickhousr/click-mysql/clickhouse-mysql-data-reader/clickhouse_mysql/event/event.py", line 53, in column_names
    return self.mysql_event.rows[0]['values'].keys()
IndexError: list index out of range

the log is

2017-12-25 13:15:31,936/1514236531.936720:INFO:Starting
2017-12-25 13:15:31,940/1514236531.940504:INFO:raw dbs list len=0
2017-12-25 13:15:31,941/1514236531.941264:INFO:dbs list len=0
2017-12-25 13:15:31,941/1514236531.941383:INFO:raw tables list len=0
2017-12-25 13:15:31,941/1514236531.941494:INFO:tables list len=0
2017-12-25 13:16:24,682/1514236584.682820:INFO:rot now:1514236584.682676 bktttl:1 bktitemsttl: 13 index:t1.acct reason:FLUSH bktsonbelt:2 bktsize:13 beltnum:1
2017-12-25 13:16:44,976/1514236604.976741:INFO:rot now:1514236604.976535 bktttl:2 bktitemsttl: 52 index:t1.acct reason:FLUSH bktsonbelt:2 bktsize:39 beltnum:1
2017-12-25 13:16:44,980/1514236604.980124:INFO:PERF - buckets_per_sec:0.049276 items_per_sec:1.921764 for last 20 sec
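Judging from the traceback, column_names() indexes mysql_event.rows[0] while some binlog events evidently arrive with an empty rows list. A defensive sketch of that method, written against the code path in the traceback rather than as a tested patch (assumption: returning an empty list is acceptable for such events):

    # clickhouse_mysql/event/event.py, column_names() as shown in the traceback,
    # guarded against events whose rows list is empty.
    def column_names(self):
        if not self.mysql_event.rows:
            return []
        return self.mysql_event.rows[0]['values'].keys()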

Select query with multiple joins won't work

I need some help.

I have managed to migrate some of my MySQL 5.1 tables to ClickHouse.

Basic select query works fine with 1 inner join.

However, I noticed that if I have more than one join, e.g.

select
from table1
inner join table2
on table1.a = table2.a
inner join table3
... (more joins left join or inner joins)

This won't work in ClickHouse (it works in MySQL)

Do you have any recommendations on how to rewrite this query inside ClickHouse? Otherwise, what is the best approach?

Thanks.
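For what it's worth, older ClickHouse releases accept only one JOIN per SELECT, so the usual workaround is to push the first join into a subquery and join the next table onto its result. A rough sketch via clickhouse-driver (table names follow the question; the columns and join keys are placeholders):

    from clickhouse_driver import Client

    client = Client(host='127.0.0.1')
    # The inner SELECT performs the first join; the outer SELECT adds table3.
    rows = client.execute("""
        SELECT *
        FROM (
            SELECT table1.a AS a, table2.b AS b
            FROM table1
            INNER JOIN table2 ON table1.a = table2.a
        ) AS t12
        INNER JOIN table3 ON t12.b = table3.b
    """)
    print(rows)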

'NoneType' object has no attribute 'tzinfo'

2020-03-17 10:33:23,427/1584441203.427092:DEBUG:Connected to ClickHouse server version 20.3.2, revision: 54433
2020-03-17 10:33:23,527/1584441203.527656:DEBUG:Query: INSERT INTO salaryboard.experiences_2 (id, internal_id, source, source_id, source_last_updated, employee_id, location, job_title_user_input, sbji_id, sbji_wi, sbji_wv, l1_code, l2_code, l3_code, l4_code, l5_code, onet_l5, onet_l6, company_user_input, company_id, company, industry_user_input, industry_id, industry, sb_updated, created_at, updated_at, start_date, start_month, start_year, end_month, end_year, end_date, duration, grade_prefix, current_position, employment_type, performance_appraisal_rating, has_bonus, has_aws, has_sales_commission, has_tips, has_stock, bonus_amount, bonus_currency, aws_amount, aws_currency, sales_commission_amount, sales_commission_currency, tips_amount, tips_currency, owner_type, owner, owner_id, status, auto_match_sbci_done, auto_match_sbji_done, instance, sbgi_l5, sbgi_l4, sbgi_l3, sbgi_l2, sbgi_l1, latitude, longitude, auto_match_sbgi_done) VALUES
2020-03-17 10:33:23,528/1584441203.528208:DEBUG:Block send time: 0.000047
2020-03-17 10:33:24,864/1584441204.864394:CRITICAL:QUERY FAILED
2020-03-17 10:33:24,864/1584441204.864563:CRITICAL:ex='NoneType' object has no attribute 'tzinfo'
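This looks like the DateTime flavour of the 'NoneType' object has no attribute 'encode' reports elsewhere in this tracker: a NULL MySQL datetime reaching a non-Nullable DateTime column, at which point clickhouse-driver tries to read tzinfo from None. A hedged sketch of a pre-insert cleanup (the column subset and the sentinel value are invented for the example; declaring the columns Nullable(DateTime) is the alternative):

    from datetime import datetime

    EPOCH = datetime(1970, 1, 1)
    DATETIME_COLUMNS = ['source_last_updated', 'start_date', 'end_date']  # hypothetical subset

    # Replace NULL datetimes with a sentinel before the row reaches clickhouse-driver.
    def clean_row(row):
        for column in DATETIME_COLUMNS:
            if row.get(column) is None:
                row[column] = EPOCH
        return row

    print(clean_row({'start_date': None, 'end_date': datetime(2020, 3, 17)}))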

bug: when adding the "--migrate-table" option, I get an error

The command is :

clickhouse-mysql
--src-server-id=33
--src-wait
--nice-pause=1
--src-host=10.9.168.55
--src-port=3308
--src-user=root
--src-password=root
--src-tables=ywsdb.tn1
--dst-host=10.9.168.55
--dst-user=root
--dst-password=kPI+n5Z/
--dst-create-table
--migrate-table
--pump-data
--csvpool

When I run this command, I get this error:

2019-04-10 17:11:42,675/1554887502.675406:INFO:Starting
2019-04-10 17:11:42,675/1554887502.675526:DEBUG:{'app': {'binlog_position_file': '/tmp/clickhouse-mysql-binlog.pos',
'config_file': '/etc/clickhouse-mysql/clickhouse-mysql.conf',
'create_table_json_template': False,
'create_table_sql': 'yes',
'create_table_sql_template': False,
'csvpool': True,
'daemon': False,
'dry': False,
'install': False,
'log_file': None,
'log_level': 10,
'mempool': True,
'mempool_max_events_num': '10000',
'mempool_max_flush_interval': '60',
'mempool_max_rows_num': 100000,
'migrate_table': True,
'pid_file': '/tmp/reader.pid',
'pump_data': True,
'with_create_database': 'yes'},
'converter': {'clickhouse': {'column_skip': None,
'converter_class': None,
'converter_file': None},
'csv': {'column_default_value': None, 'column_skip': None}},
'reader': {'file': {'csv_file_path': None, 'nice_pause': 1},
'mysql': {'blocking': True,
'connection_settings': {'host': '10.9.168.55',
'password': 'root',
'port': 3308,
'user': 'root'},
'log_file': None,
'log_pos': None,
'nice_pause': 1,
'resume_stream': True,
'schemas': None,
'server_id': 33,
'tables': ['ywsdb.tn1'],
'tables_prefixes': None}},
'table_builder': {'clickhouse': {'connection_settings': {'host': '10.9.168.55',
'password': 'kPI+n5Z/',
'port': 9000,
'user': 'root'},
'dst_create_table': True},
'mysql': {'dbs': None,
'host': '10.9.168.55',
'password': 'root',
'port': 3308,
'tables': ['ywsdb.tn1'],
'tables_prefixes': None,
'user': 'root'}},
'table_migrator': {'clickhouse': {'connection_settings': {'host': '10.9.168.55',
'password': 'kPI+n5Z/',
'port': 9000,
'user': 'root'},
'dst_create_table': True,
'dst_schema': None,
'dst_table': None},
'mysql': {'dbs': None,
'host': '10.9.168.55',
'password': 'root',
'port': 3308,
'tables': ['ywsdb.tn1'],
'tables_prefixes': None,
'tables_where_clauses': None,
'user': 'root'}},
'writer': {'clickhouse': {'connection_settings': {'host': '10.9.168.55',
'password': 'kPI+n5Z/',
'port': 9000,
'user': 'root'},
'dst_schema': None,
'dst_table': None},
'file': {'csv_file_path': None,
'csv_file_path_prefix': 'qwe_',
'csv_file_path_suffix_parts': [],
'csv_keep_file': False,
'dst_schema': None,
'dst_table': None}}}
2019-04-10 17:11:42,679/1554887502.679106:INFO:sys.path
2019-04-10 17:11:42,679/1554887502.679307:INFO:['/usr/lib/python3.4/site-packages/clickhouse_mysql/converter',
'/usr/bin',
'/usr/lib64/python34.zip',
'/usr/lib64/python3.4',
'/usr/lib64/python3.4/plat-linux',
'/usr/lib64/python3.4/lib-dynload',
'/usr/lib64/python3.4/site-packages',
'/usr/lib/python3.4/site-packages']
2019-04-10 17:11:42,679/1554887502.679402:DEBUG:{'ywsdb': {'tn1'}} group tables for explicitly specified db/tables
2019-04-10 17:11:42,679/1554887502.679477:DEBUG:{} group tables for prefix specified db/tables
2019-04-10 17:11:42,690/1554887502.690932:DEBUG:Connect to the database host=10.9.168.55 port=3308 user=root password=root db=ywsdb
CREATE DATABASE IF NOT EXISTS ywsdb;
CREATE TABLE IF NOT EXISTS ywsdb.tn1 (
primary_date_field Date default today(),
id Int64,
name Nullable(String)
) ENGINE = MergeTree(primary_date_field, (id), 8192)
;
2019-04-10 17:11:42,692/1554887502.692182:DEBUG:CSVWriteConverter init()
2019-04-10 17:11:42,692/1554887502.692292:DEBUG:Converter init()
2019-04-10 17:11:42,692/1554887502.692338:DEBUG:[]
2019-04-10 17:11:42,692/1554887502.692388:INFO:PoolWriter()
2019-04-10 17:11:42,692/1554887502.692450:INFO:CHClient() connection_settings={'host': '10.9.168.55', 'user': 'root', 'password': 'kPI+n5Z/', 'port': 9000}
2019-04-10 17:11:42,692/1554887502.692600:DEBUG:{'ywsdb': {'tn1'}} group tables for explicitly specified db/tables
2019-04-10 17:11:42,692/1554887502.692659:DEBUG:{} group tables for prefix specified db/tables
2019-04-10 17:11:42,692/1554887502.692701:INFO:List for migration:
2019-04-10 17:11:42,692/1554887502.692797:INFO: ywsdb.tn1
2019-04-10 17:11:42,692/1554887502.692853:DEBUG:{'ywsdb': {'tn1'}} group tables for explicitly specified db/tables
2019-04-10 17:11:42,692/1554887502.692897:DEBUG:{} group tables for prefix specified db/tables
2019-04-10 17:11:42,701/1554887502.701284:DEBUG:Connect to the database host=10.9.168.55 port=3308 user=root password=root db=ywsdb
2019-04-10 17:11:42,702/1554887502.702169:INFO:Start migration ywsdb.tn1
CREATE DATABASE IF NOT EXISTS ywsdb;
2019-04-10 17:11:42,702/1554887502.702341:DEBUG:Connecting. Database: default. User: root
2019-04-10 17:11:42,706/1554887502.706179:DEBUG:Connected to ClickHouse server version 19.4.2, revision: 54416
2019-04-10 17:11:42,731/1554887502.731062:DEBUG:Query: CREATE DATABASE IF NOT EXISTS ywsdb
2019-04-10 17:11:42,731/1554887502.731652:DEBUG:Block send time: 0.000101
CREATE TABLE IF NOT EXISTS ywsdb.tn1 (
primary_date_field Date default today(),
id Int64,
name Nullable(String)
) ENGINE = MergeTree(primary_date_field, (id), 8192)
;
2019-04-10 17:11:42,732/1554887502.732520:DEBUG:Query: CREATE TABLE IF NOT EXISTS ywsdb.tn1 (
primary_date_field Date default today(),
id Int64,
name Nullable(String)
) ENGINE = MergeTree(primary_date_field, (id), 8192)

2019-04-10 17:11:42,732/1554887502.732777:DEBUG:Block send time: 0.000121
2019-04-10 17:11:42,733/1554887502.733407:DEBUG:CSVWriteConverter init()
2019-04-10 17:11:42,733/1554887502.733489:DEBUG:Converter init()
2019-04-10 17:11:42,733/1554887502.733537:DEBUG:[]
2019-04-10 17:11:42,733/1554887502.733587:INFO:PoolWriter()
2019-04-10 17:11:42,733/1554887502.733645:INFO:CHClient() connection_settings={'host': '10.9.168.55', 'user': 'root', 'password': 'kPI+n5Z/', 'port': 9000}
2019-04-10 17:11:42,734/1554887502.734069:DEBUG:{'ywsdb': {'tn1'}} group tables for explicitly specified db/tables
2019-04-10 17:11:42,734/1554887502.734303:DEBUG:{} group tables for prefix specified db/tables
2019-04-10 17:11:42,734/1554887502.734351:INFO:List for migration:
2019-04-10 17:11:42,734/1554887502.734393:INFO: ywsdb.tn1
2019-04-10 17:11:42,734/1554887502.734434:INFO:Start migration ywsdb.tn1
2019-04-10 17:11:42,743/1554887502.743293:DEBUG:Connect to the database host=10.9.168.55 port=3308 user=root password=root db=ywsdb
2019-04-10 17:11:42,743/1554887502.743454:INFO:migrate_table. sql=SELECT * FROM ywsdb.tn1
2019-04-10 17:11:42,744/1554887502.744490:DEBUG:class:<class 'clickhouse_mysql.writer.poolwriter.PoolWriter'> insert
2019-04-10 17:11:42,744/1554887502.744743:CRITICAL:'TableMigrator' object has no attribute 'host'

Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/clickhouse_mysql/tablemigrator.py", line 194, in migrate_one_table_data
self.chwriter.insert(event)
File "/usr/lib/python3.4/site-packages/clickhouse_mysql/writer/poolwriter.py", line 38, in insert
self.pool.insert(event_or_events)
File "/usr/lib/python3.4/site-packages/clickhouse_mysql/pool/bbpool.py", line 77, in insert
self.rotate_belt(belt_index)
File "/usr/lib/python3.4/site-packages/clickhouse_mysql/pool/bbpool.py", line 107, in rotate_belt
elif len(self.belts[belt_index][0]) >= self.max_bucket_size:
TypeError: unorderable types: int() >= str()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/clickhouse_mysql/main.py", line 140, in run
migrator.migrate_all_tables_data()
File "/usr/lib/python3.4/site-packages/clickhouse_mysql/tablemigrator.py", line 160, in migrate_all_tables_data
self.migrate_one_table_data(db=db, table=table)
File "/usr/lib/python3.4/site-packages/clickhouse_mysql/tablemigrator.py", line 200, in migrate_one_table_data
self.host,
AttributeError: 'TableMigrator' object has no attribute 'host'

'TableMigrator' object has no attribute 'host'

When I delete the option "--migrate-table", it works.
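One observation from the config dump above: mempool_max_events_num and mempool_max_flush_interval arrive as the strings '10000' and '60', while mempool_max_rows_num is the integer 100000. bbpool.py then evaluates len(self.belts[belt_index][0]) >= self.max_bucket_size, which is exactly the "unorderable types: int() >= str()" in the first traceback. A hedged sketch of the kind of cast that would avoid it (the dict below mirrors the dump, not the actual Config code):

    # Sketch only: make sure pool size limits are integers before they are
    # compared against len() of a bucket in BBPool.
    app_config = {'mempool_max_events_num': '10000',
                  'mempool_max_flush_interval': '60',
                  'mempool_max_rows_num': 100000}

    max_bucket_size = int(app_config['mempool_max_events_num'])
    flush_interval = int(app_config['mempool_max_flush_interval'])
    max_rows = int(app_config['mempool_max_rows_num'])
    print(max_bucket_size, flush_interval, max_rows)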

Data Discrepancy Between MySQL And ClickHouse?

Greetings!

Using clickhouse-mysql-data-reader, the row counts between the two are different. What could be the best settings for migrating and pumping new data from MySQL to ClickHouse if the source MySQL table is constantly being written to and read from? The difference is huge: tens of thousands of rows from MySQL are constantly missing from ClickHouse. I need to make it pump data faster. Any ideas how?

Regards!

ImportError: cannot import name 'charset_to_encoding'

Hello,

We use clickhouse-mysql to migrate data from MySQL to ClickHouse.
When we try to run the migration, we receive this error:

ImportError: cannot import name 'charset_to_encoding'

Step to reproduce.
In mysql:

1. create database test;
2. CREATE TABLE test.testCHMysql ( test1 varchar(32) NOT NULL, test2 int(11) DEFAULT NULL);

In clickhouse:

3. create database test;
4. CREATE TABLE test.testCHMysq (test1 String, test2 Int32 ) ENGINE = Distributed('{cluster}','test','testCHMysq',cityHash64( test1, test2 ));
5. CREATE TABLE test.testCHMysq ON CLUSTER test (test1 String, test2 Int32 ) ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/{node}/test/testCHMysq', '{replica}') PARTITION BY test1 ORDER BY (test1, test2) ;

6. Start migration from MySQL to ClickHouse:
clickhouse-mysql --src-server-id=1 --src-wait --src-resume --src-host=mysql --nice-pause=1 --src-tables=test.testCHMysql --src-user=user --src-password=password --dst-host=clickhouse-server --dst-schema=test --dst-table=testCHMysql --dst-user=user --dst-password=password


I tried to install it like this:

  1. yum install mariadb-devel gcc python3-devel
  2. pip3 install clickhouse-mysql

and also tried this way:

  1. pip3 install mysqlclient
  2. pip3 install mysql-replication
  3. pip3 install clickhouse-driver
  4. pip3 install clickhouse-mysql

python version 3.6.8
CentOS Linux release 7.6.1810 (Core)
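charset_to_encoding is something mysql-replication imports from pymysql.charset, and it only exists in sufficiently recent PyMySQL releases, so this error usually indicates a version mismatch between the two packages rather than a clickhouse-mysql bug. A hedged sketch of the usual fix (the pinned version below is an assumption, check the mysql-replication changelog for the exact cutoff):

# either upgrade PyMySQL so that pymysql.charset provides charset_to_encoding ...
pip3 install --upgrade PyMySQL
# ... or pin mysql-replication to an older release that does not import it (version is an assumption)
pip3 install 'mysql-replication==0.21'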

Does not rotate logs?

I noticed clickhouse-mysql does not rotate its log file by itself; on Ubuntu this can be achieved with logrotate.

Create the file /etc/logrotate.d/clickhouse-mysql and add:

/path/to/your/log/from/config {
    daily
    create 0644 root root
    rotate 7
    compress
    postrotate
        service clickhouse-mysql restart
    endscript
}

Make sure to record the MySQL binlog position in a file on disk, so the reader can resume from where it left off.
By default Ubuntu runs logrotate every day at 6:25 AM.
Cheers!
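For the binlog position part, the reader can be pointed at a position file so a restart (including the one triggered by postrotate) resumes where it stopped. The flag names below are taken from the project's options as I recall them, so verify them with clickhouse-mysql --help; hosts and credentials are placeholders:

# the position file is read at start and updated while pumping data
clickhouse-mysql \
    --src-server-id=1 --src-wait --src-resume --nice-pause=1 \
    --src-host=127.0.0.1 --src-user=reader --src-password=*** \
    --src-tables=db.table \
    --dst-host=127.0.0.1 \
    --binlog-position-file=/var/lib/clickhouse-mysql/binlog.pos \
    --log-file=/path/to/your/log/from/config \
    --pump-data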

how to migrate multiple mysql tables to one clickhouse table

Awesome OSS! There is a question: for business reasons, we have to migrate two or three tables from MySQL into one table. The mappings between the fields of these tables (actually from two MySQL tables to the target ClickHouse table) are already known.

I want to know whether the clickhouse-mysql-data-reader project can help in this situation.
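As far as I understand the tool, --src-tables accepts several source tables and --dst-table forces a single destination, but rows are copied as-is and fields are not remapped, so the known column mappings would still have to be handled by the destination schema or a separate ETL step. A sketch with hypothetical names (verify flags with clickhouse-mysql --help):

# both source tables are routed into one ClickHouse table; no per-field mapping is performed
clickhouse-mysql \
    --src-host=127.0.0.1 --src-user=reader --src-password=*** \
    --src-tables=db.table_a,db.table_b \
    --dst-host=127.0.0.1 --dst-schema=analytics --dst-table=combined \
    --migrate-table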

when I insert a row in mysql, I cannot find it in clickhouse

clickhouse-mysql catches the insert event, but the insert into ClickHouse fails:

2019-01-18 18:01:45,726/1547805705.726506:DEBUG:starting tail -n +2 'repl_d_all_b_1547805705.7249274_70f83868-9d29-46d8-8f21-8338d1fb91d0.csv' | clickhouse-client  --host=127.0.0.1 --port=9000 --user=default --query='INSERT INTO `test`.`legend_order_info` (`buy_time`, `car_alias`, `car_brand`, `car_brand_id`, `car_color`, `car_company`, `car_license`, `car_models`, `car_models_id`, `car_power`, `car_power_id`, `car_series`, `car_series_id`, `car_year`, `car_year_id`, `channel_id`, `channel_name`, `company`, `confirm_time`, `contact_mobile`, `contact_name`, `create_time`, `creator`, `customer_address`, `customer_car_id`, `customer_id`, `customer_mobile`, `customer_name`, `discount`, `down_payment`, `engine_no`, `expect_time`, `expected_time`, `fee_amount`, `fee_discount`, `finish_time`, `gmt_create`, `gmt_modified`, `goods_amount`, `goods_count`, `goods_discount`, `id`, `img_url`, `import_info`, `insurance_company_id`, `insurance_company_name`, `invoice_type`, `is_deleted`, `is_lock`, `is_notice`, `is_visit`, `mileage`, `modifier`, `oil_meter`, `operator_name`, `order_amount`, `order_discount_amount`, `order_sn`, `order_status`, `order_tag`, `order_type`, `other_insurance_company_id`, `other_insurance_company_name`, `parent_id`, `pay_amount`, `pay_status`, `pay_time`, `postscript`, `pre_coupon_amount`, `pre_discount_rate`, `pre_preferentia_amount`, `pre_tax_amount`, `pre_total_amount`, `proxy_type`, `receiver`, `receiver_name`, `refer`, `service_amount`, `service_count`, `service_discount`, `shop_id`, `sign_amount`, `tax_amount`, `total_amount`, `upkeep_mileage`, `ver`, `vin`) FORMAT CSV'
Code: 27. DB::Exception: Cannot parse input: expected , before: foxhch,,,,2019-01-20,0,,0,0,,,,10.13,,,,0.00,0.00,,,,0.00,0,0.00,123945,,,0,,0,N,N,0,0,,0,,,0.00,0.00,,haha,20,0,0,,0,0.00,0,,,0.00,1.00000000,0.00,0.00,0.00,14: (at row 1)

Row 1:
Column 0,   name: buy_time,                     type: Nullable(DateTime),       parsed text: ",,,0,,,,,0,,0,,0,,0"
Column 1,   name: car_alias,                    type: Nullable(String),         parsed text: "13"
Column 2,   name: car_brand,                    type: Nullable(String),         parsed text: "mychannel"
Column 3,   name: car_brand_id,                 type: Nullable(Int32),          parsed text: <EMPTY>
ERROR: garbage after Nullable(Int32): "foxhch,,,,"

# in mysql

root@localhost:test 06:01:21>insert into legend_order_info (id,shop_id,order_status,order_tag,refer,create_time,proxy_type,channel_id,channel_name,down_payment,company) values(123945,123239,'haha',20,11,'2019-01-20',14,13,'mychannel',10.13,'foxhch');
Query OK, 1 row affected (0.01 sec)

# the ddl on clickhouse

t-k8s-a1 :) show create table legend_order_info\G

SHOW CREATE TABLE legend_order_info

Row 1:
──────
statement: CREATE TABLE test.legend_order_info ( id Int32,  creator Nullable(Int32),  modifier Nullable(Int32),  gmt_create Nullable(DateTime),  pay_time Nullable(DateTime),  gmt_modified Nullable(DateTime),  is_deleted Nullable(String),  shop_id Int32,  order_type Nullable(Int32),  parent_id Nullable(Int32),  order_sn Nullable(String),  customer_id Nullable(Int32),  customer_car_id Nullable(Int32),  expected_time Nullable(DateTime),  goods_amount Nullable(Decimal(12, 2)),  service_amount Nullable(Decimal(12, 2)),  tax_amount Nullable(Decimal(12, 2)),  total_amount Nullable(Decimal(12, 2)),  postscript Nullable(String),  order_status String,  car_license Nullable(String),  car_brand_id Nullable(Int32),  car_series_id Nullable(Int32),  car_power_id Nullable(Int32),  car_year_id Nullable(Int32),  car_models_id Nullable(Int32),  car_brand Nullable(String),  car_series Nullable(String),  car_power Nullable(String),  car_year Nullable(String),  car_models Nullable(String),  car_company Nullable(String),  import_info Nullable(String),  customer_name Nullable(String),  customer_mobile Nullable(String),  vin Nullable(String),  engine_no Nullable(String),  receiver Nullable(Int32),  invoice_type Nullable(Int32),  discount Nullable(Decimal(12, 2)),  receiver_name Nullable(String),  operator_name Nullable(String),  order_amount Nullable(Decimal(12, 2)),  pay_status Nullable(UInt8),  finish_time Nullable(DateTime),  goods_count Nullable(Int32),  service_count Nullable(Int32),  car_alias Nullable(String),  pre_discount_rate Nullable(Decimal(12, 2)),  pre_tax_amount Nullable(Decimal(12, 2)),  pre_preferentia_amount Nullable(Decimal(12, 2)),  pre_coupon_amount Nullable(Decimal(12, 2)),  pre_total_amount Nullable(Decimal(12, 2)),  pay_amount Nullable(Decimal(12, 2)),  sign_amount Nullable(Decimal(12, 2)),  other_insurance_company_id Nullable(Int32),  other_insurance_company_name Nullable(String),  goods_discount Nullable(Decimal(12, 2)),  service_discount Nullable(Decimal(12, 2)),  fee_amount Nullable(Decimal(12, 2)),  fee_discount Nullable(Decimal(12, 2)),  contact_name Nullable(String),  contact_mobile Nullable(String),  insurance_company_id Nullable(Int32),  insurance_company_name Nullable(String),  mileage Nullable(String),  car_color Nullable(String),  buy_time Nullable(DateTime),  customer_address Nullable(String),  oil_meter Nullable(String),  is_notice Nullable(Int8),  is_visit Nullable(Int32),  is_lock Nullable(String),  img_url Nullable(String),  order_tag UInt8,  ver Nullable(String),  refer UInt8,  upkeep_mileage Nullable(String),  create_time Date,  order_discount_amount Nullable(Decimal(16, 2)),  confirm_time Nullable(DateTime),  proxy_type UInt8,  channel_id UInt32,  channel_name String,  expect_time Nullable(DateTime),  down_payment Decimal(12, 2),  company Nullable(String)) ENGINE = MergeTree(create_time, (id, create_time), 8192)

1 rows in set. Elapsed: 0.002 sec.
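One thing that stands out in the log is that the generated INSERT lists the columns alphabetically, while the CSV row appears to carry the values in the MySQL column order, so everything is shifted and the parser chokes on buy_time. As a hedged workaround (not a confirmed fix), running the same pump without the csvpool/CSV path, so rows go through the native clickhouse-driver writer instead of the clickhouse-client CSV pipe, may sidestep the ordering problem:

# same run, but without --csvpool / --csvpool-file-path-prefix; hosts and credentials are placeholders
clickhouse-mysql \
    --src-server-id=1 --src-wait --src-resume --nice-pause=1 \
    --src-host=127.0.0.1 --src-user=reader --src-password=*** \
    --src-tables=test.legend_order_info \
    --dst-host=127.0.0.1 \
    --pump-data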
