
zongji's Introduction

ZongJi

A MySQL binlog listener running on Node.js.

ZongJi (踪迹) is pronounced as zōng jì in Chinese.

This package is a pure JavaScript implementation based on the mysql package. It has been tested to work with MySQL 5.5, 5.6, and 5.7.

Latest Release

The latest release is v0.5.0, which supports Node.js v8 and above.

v0.4.7 is the last release that supports Node.js v4.x.

Quick Start

const ZongJi = require('zongji');

let zongji = new ZongJi({ /* ... MySQL Connection Settings ... */ });

// Each change to the replication log results in an event
zongji.on('binlog', function(evt) {
  evt.dump();
});

// Binlog must be started, optionally pass in filters
zongji.start({
  includeEvents: ['tablemap', 'writerows', 'updaterows', 'deleterows']
});

For a complete implementation, see example.js.

Installation

  • Requires Node.js v8+

    $ npm install zongji
  • Enable the MySQL binlog in my.cnf and restart the MySQL server after making the changes.

    Since MySQL 5.6, binlog checksums are enabled by default. ZongJi can work with them, but it does not verify them.

    # Must be unique integer from 1-2^32
    server-id        = 1
    # Row format required for ZongJi
    binlog_format    = row
    # Directory must exist. This path works for Linux. Other OS may require
    #   different path.
    log_bin          = /var/log/mysql/mysql-bin.log
    
    binlog_do_db     = employees   # Optional, limit which databases to log
    expire_logs_days = 10          # Optional, purge old logs
    max_binlog_size  = 100M        # Optional, limit log size
    
  • Create an account with replication privileges, e.g. grant the privileges to the account zongji (or any other account that you use to read the binary logs):

    GRANT REPLICATION SLAVE, REPLICATION CLIENT, SELECT ON *.* TO 'zongji'@'localhost'

ZongJi Class

The ZongJi constructor accepts one argument of either:

  • An object containing MySQL connection details in the same format as used by package mysql
  • Or, a mysql Connection or Pool object that will be used for querying column information.

If a Connection or Pool object is passed to the constructor, it will not be destroyed/ended by ZongJi's stop() method.

If the dateStrings mysql configuration option is set in the connection details or on the connection, ZongJi will follow it. Both constructor forms are sketched below.
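For illustration, a minimal sketch of both constructor forms; the connection settings are placeholders:

const mysql = require('mysql');
const ZongJi = require('zongji');

// Form 1: plain connection settings, in the same shape the mysql package accepts.
// dateStrings is optional; if present, ZongJi follows it when decoding dates.
let zongji = new ZongJi({
  host: 'localhost',
  user: 'zongji',
  password: 'secret',
  dateStrings: true
});

// Form 2: an existing mysql Connection (or Pool), used for querying column information.
// ZongJi will not end this connection when stop() is called.
let ctrl = mysql.createConnection({ host: 'localhost', user: 'zongji', password: 'secret' });
let zongji2 = new ZongJi(ctrl);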

Each instance includes the following methods:

Method Name | Arguments | Description
start | options | Start receiving replication events; see the options listed below
stop | (none) | Disconnect from the MySQL server and stop receiving events
on | eventName, handler | Add a listener to the binlog or error event. Each handler function accepts one argument.

ZongJi emits the following events at different phases of its life cycle:

Event Name | Description
ready | Emitted right after ZongJi successfully establishes a connection, sets up the slave status, and sets the binlog position.
binlog | Emitted once a binlog event is received and passes the filters.
error | Emitted for every error that is caught.
stopped | Emitted when the ZongJi connection is stopped (ZongJi#stop is called).

Options available:

Option Name | Type | Description
serverId | integer | Unique number (1 to 2^32) to identify this replication slave instance. Must be specified if running more than one instance of ZongJi. Must be passed to the start() method to take effect. Default: 1
startAtEnd | boolean | Pass true to only emit binlog events that occur after ZongJi's instantiation. Must be passed to the start() method to take effect. Default: false
filename | string | Begin reading events from this binlog file. If specified together with position, takes precedence over startAtEnd.
position | integer | Begin reading events from this position. Must be included with filename.
includeEvents | [string] | Array of event names to include. Example: ['writerows', 'updaterows', 'deleterows']
excludeEvents | [string] | Array of event names to exclude. Example: ['rotate', 'tablemap']
includeSchema | object | Object describing which databases and tables to include (row events only). Use database names as the keys and pass an array of table names or true (for the entire database). Example: { 'my_database': ['allow_table', 'another_table'], 'another_db': true }
excludeSchema | object | Object describing which databases and tables to exclude (same format as includeSchema). Example: { 'other_db': ['disallowed_table'], 'ex_db': true }
  • By default, all events and schema are emitted.
  • excludeSchema and excludeEvents take precedence over includeSchema and includeEvents, respectively. A combined example follows.
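Putting several options together, a start() call for a filtered, resumable reader might look like this (the filename, position, and schema values are placeholders):

zongji.start({
  serverId: 2,                  // unique among replication clients on this server
  filename: 'mysql-bin.000123', // resume point saved from a previous run
  position: 4,
  includeEvents: ['rotate', 'tablemap', 'writerows', 'updaterows', 'deleterows'],
  includeSchema: { 'my_database': ['allow_table'], 'another_db': true }
});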

Supported Binlog Events:

Event name | Description
unknown | Catch-all for any other events
query | Insert/Update/Delete query
intvar | Auto-increment and LAST_INSERT_ID
rotate | New binlog file. Not required to be included in order to rotate to new files, but required in order to keep the filename and position properties updated with current values, for graceful restarting on errors.
format | Format description
xid | Transaction ID
tablemap | Sent before any row event (must be included for any other row events to work)
writerows | Rows inserted; row data array available as the rows property on the event object
updaterows | Rows changed; row data array available as the rows property on the event object
deleterows | Rows deleted; row data array available as the rows property on the event object

Event Methods

Neither method requires any arguments.

Name | Description
dump | Log a description of the event to the console
getEventName | Return the name of the event
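For example, a minimal binlog handler using both methods together with the rows property documented above:

zongji.on('binlog', function(evt) {
  evt.dump(); // human-readable description to the console

  let name = evt.getEventName();
  if (name === 'writerows' || name === 'updaterows' || name === 'deleterows') {
    console.log(name, 'affected', evt.rows.length, 'row(s)');
  }
});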

Important Notes

  • 🌟 All types allowed by mysql are supported by this package.
  • 🙊 64-bit integers are supported via the big-integer package (see #108). If an integer is within the safe range of a JS number (-2^53, 2^53), a Number is returned; otherwise it is returned as a String. A sketch of handling both cases follows this list.
  • 👉 The TRUNCATE statement does not cause a corresponding DeleteRows event. Use an unqualified DELETE FROM for the same effect.
  • When using fractional seconds with the DATETIME and TIMESTAMP data types in MySQL > 5.6.4, only millisecond precision is available due to the limit of JavaScript's Date object.
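A minimal sketch of the Number-or-String handling, using the same big-integer package:

const bigInt = require('big-integer');

// ZongJi emits a Number within JS's safe integer range and a String outside it;
// big-integer accepts both, so normalizing makes downstream arithmetic uniform.
function normalizeInt64(value) {
  return bigInt(value);
}

// normalizeInt64(42).toString()                    === '42'
// normalizeInt64('9223372036854775807').toString() === '9223372036854775807'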

Run Tests

  • Install Docker
  • Run docker-compose up and then ./docker-test.sh

Reference

I learnt many things from the following resources while making ZongJi.

License

MIT

zongji's People

Contributors

dazzyon, dependabot[bot], jefbarn, jmealo, joyqi, madeuz, mrichards42, nevill, normanrz, numtel, prdn, rfanth, vlahupetar, xfg


zongji's Issues

There is an error I have never been able to resolve

I am listening to one of the replica (slave) databases on Alibaba Cloud.

index-0 (err):     at Object.exports._errnoException (util.js:870:11)
index-0 (err):     at exports._exceptionWithHostPort (util.js:893:20)
index-0 (err):     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1061:14)
index-0 (err):     --------------------
index-0 (err):     at Protocol._enqueue (/opt/bmqb_account_stat/node_modules/mysql/lib/protocol/Protocol.js:141:48)
index-0 (err):     at Protocol.handshake (/opt/bmqb_account_stat/node_modules/mysql/lib/protocol/Protocol.js:52:41)
index-0 (err):     at Connection.connect (/opt/bmqb_account_stat/node_modules/mysql/lib/Connection.js:123:18)
index-0 (err):     at new ZongJi (/opt/bmqb_account_stat/node_modules/zongji/index.js:18:23)
index-0 (err):     at Object.<anonymous> (/opt/bmqb_account_stat/src/commands/mysql.js:9:22)
index-0 (err):     at [object Generator].next (native)
index-0 (err):     at step (/opt/bmqb_account_stat/src/commands/mysql.js:3:1)
index-0 (err):     at /opt/bmqb_account_stat/src/commands/mysql.js:3:1
index-0 (err):     at Object.<anonymous> (/opt/bmqb_account_stat/src/commands/mysql.js:3:1)
index-0 (err):     at Object.handler (/opt/bmqb_account_stat/src/commands/mysql.js:7:5)
index-0 (err):     at /opt/bmqb_account_stat/src/index.js:18:31
index-0 (err):     at [object Generator].next (native)
index-0 (err):     at step (/opt/bmqb_account_stat/src/index.js:1:1)
index-0 (err):     at /opt/bmqb_account_stat/src/index.js:1:1
index-0 (err):     at /opt/bmqb_account_stat/src/index.js:1:1
index-0 (err):     at Object.<anonymous> (/opt/bmqb_account_stat/src/index.js:16:1)

Timezone issues

Due to the way Node.js handles dates, if the server running zongji is in a different timezone than the MySQL server, you will run into issues with the times being off. If everything is running in UTC, you should be OK.

Trying to use the TZ variable and various ways to force Node to use a particular timezone are prone to issues. It looks like robust timezone support could be provided by parsing the timezone from the binlog: https://dev.mysql.com/doc/internals/en/query-event.html

Then you can adjust accordingly (Date has getTimezoneOffset() which helps too).

My fork already has a workaround for this that works on OSX: it uses the control connection to determine the timezone offset. However, it failed on Travis, so it requires further testing, and it is prone to issues if the timezone changes while zongji is running (this happens twice a year anywhere that has daylight saving time, so it's not an edge case).
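A minimal sketch of that offset-probing idea, assuming a separate mysql connection is available (and with the caveat noted above that the offset can change at DST transitions):

const mysql = require('mysql');
const conn = mysql.createConnection({ /* same settings as ZongJi */ });

// Ask the server how far its local time currently is from UTC.
conn.query('SELECT TIMESTAMPDIFF(MINUTE, UTC_TIMESTAMP(), NOW()) AS offsetMinutes',
  function(err, rows) {
    if (err) throw err;
    const serverOffset = rows[0].offsetMinutes;          // MySQL server offset from UTC, in minutes
    const localOffset = -new Date().getTimezoneOffset(); // Node process offset from UTC, in minutes
    const skewMs = (serverOffset - localOffset) * 60 * 1000;
    // Subtract skewMs from Date values parsed out of the binlog to align the two clocks.
  });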

Unique identifier for binlog rows (question?)

Is there a way to get a unique identifier from the binlog for each log entry, something like a UUID for example?
I am sending logs to a Kinesis stream, and there is a consumer which reads the data to save into a db. If there were a unique identifier for each log entry, then reprocessing the records wouldn't create duplicate data at the final destination. I thought about using nextPosition and timestamp, but realized that neither of them is actually unique. Any idea how to get such an identifier?
Thanks
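One workaround (not a built-in feature) is to derive an identifier from the binlog coordinates, which are unique per server: the current binlog filename, the event's nextPosition, and a row index for multi-row events. A sketch, with the rotate event included so the filename stays known:

let currentBinlogName = null;

zongji.on('binlog', function(evt) {
  if (evt.getEventName() === 'rotate') {
    currentBinlogName = evt.binlogName; // property name is an assumption; verify with evt.dump()
    return;
  }
  if (Array.isArray(evt.rows)) {
    evt.rows.forEach(function(row, i) {
      // (file, position, row index) identifies this change uniquely for one server
      const id = currentBinlogName + ':' + evt.nextPosition + ':' + i;
      // ... attach id to the record pushed to Kinesis for idempotent consumption ...
    });
  }
});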

Zongji skipping through large binlog files on restart

I have a large MySQL binlog file (FILESIZE: 328035862 bytes) which is being read and processed by a zongji application. I am also storing the binlog position up to which events have been processed in external storage. At some point my zongji application crashes because it runs out of heap memory due to a large number of unprocessed binlog events. When I restart the zongji application using the last processed binlogName and binlogNextPos, zongji starts reading binlog events somewhere near the end of the passed binlogName file, skipping over most of the file.

Any ideas why this might be happening and how I can prevent it?

e.g.
Last processed event before the zongji application crash: (mysql-bin-changelog.011531, 911301)
After restarting the application with the above binlogName and binlogNextPos, the next event processed: (mysql-bin-changelog.011531, 329171477)

Support for SQL style string datetimes and timestamps with {dateStrings: true}.

This package generally works great, but I have run into a significant problem with the limitations of the JavaScript Date objects used by the package as output for SQL DateTime and TimeStamp values.

The JavaScript Date format cannot represent anything before 1970, even though the SQL Date and DateTime can. Also, TimeStamps can have the value of 0000-00-00 00:00:00. Round tripping in a database application is severely hampered by the fact that the JavaScript Date cannot hold most of the values that MySQL can represent.

The mysql package solved the problem by having a dateStrings: true option, and I think it can be solved similarly here. I already wrote some code to fix it, but I want your opinion on the best way to pass a dateStrings option through to common.js, where the replacement is best made.

I fixed it by replacing the new Date(...) calls in common.js with calls to two new functions that can either call new Date(...) and return it, mimicking the old behavior, or build a string out of the value, like one that a SQL SELECT statement would return, or that the node mysql package would return if dateStrings: true is set. It works great and seems to entirely fix the problem, but there is still the matter of relaying the dateStrings option to exports.readMysqlValue in common.js.

Do you have a preferred way to relay dsn.dateStrings (or perhaps options.dateStrings, if you prefer) to readMysqlValue in the common.js file? It could be done as a function argument, with a setter in the file, or some other way.

Thanks,

R.F.
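A hypothetical version of the string-building helper described above (the name and signature are illustrative, not the package's actual API):

// Build 'YYYY-MM-DD HH:MM:SS' the way a SELECT with dateStrings: true would,
// instead of constructing a JavaScript Date.
function formatDateTimeString(year, month, day, hour, minute, second) {
  function pad(n, width) { return String(n).padStart(width, '0'); }
  return pad(year, 4) + '-' + pad(month, 2) + '-' + pad(day, 2) + ' ' +
         pad(hour, 2) + ':' + pad(minute, 2) + ':' + pad(second, 2);
}

// formatDateTimeString(1969, 7, 20, 20, 17, 40) === '1969-07-20 20:17:40'
// This can also represent the zero date '0000-00-00 00:00:00',
// which has no valid Date equivalent.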

Release last version [0.4.5]

Hi @nevill. I'm very curious to use the library with a connection pool. Unfortunately the npm/yarn registries have 0.4.4 as the latest.
Can you release the latest version?

No events when using Amazon RDS

Hello,

I have an Amazon RDS instance running MySQL 5.7.11, and I thought everything was configured correctly, so I used this example to read the log, but nothing happens, and I know for sure that the binary log is being written.

var ZongJi = require('zongji');
var zongji = new ZongJi(
  {
    host     : '*******',
    user     : '******',
    password : '******',
    // debug: true
  });

// Each change to the replication log results in an event
zongji.on('binlog', function(evt) {
  evt.dump();
});

zongji.on('error', function(err) {
  console.log("ZongJi error event", err);
});

// Binlog must be started, optionally pass in filters
zongji.start({
  includeEvents: ['tablemap', 'writerows', 'updaterows', 'deleterows']
});

process.on('SIGINT', function() {
  console.log('Got SIGINT.');
  zongji.stop();
  process.exit();
});

The debug output showed this:

<-- HandshakeInitializationPacket
HandshakeInitializationPacket {
  protocolVersion: 10,
  serverVersion: '5.7.11-log',
  threadId: 178,
  scrambleBuff1: <Buffer 2d 0c 0d 26 32 38 68 75>,
  filler1: <Buffer 00>,
  serverCapabilities1: 65535,
  serverLanguage: 8,
  serverStatus: 2,
  serverCapabilities2: 49663,
  scrambleLength: 21,
  filler2: <Buffer 00 00 00 00 00 00 00 00 00 00>,
  scrambleBuff2: <Buffer 21 0a 2b 15 5b 2a 1e 72 65 78 61 1e>,
  filler3: <Buffer 00>,
  pluginData: 'mysql_native_password',
  protocol41: true }

--> ClientAuthenticationPacket
ClientAuthenticationPacket {
  clientFlags: 455631,
  maxPacketSize: 0,
  charsetNumber: 33,
  filler: undefined,
  user: 'tracker',
  scrambleBuff: <Buffer b6 f2 b3 32 a5 11 17 4c 28 e2 ed cc ad 92 05 8b d7 bc 87 3e>,
  database: 'information_schema',
  protocol41: true }

<-- OkPacket
OkPacket {
  fieldCount: 0,
  affectedRows: 0,
  insertId: 0,
  serverStatus: 2,
  warningCount: 0,
  message: '',
  protocol41: true,
  changedRows: 0 }

--> ComQueryPacket
ComQueryPacket {
  command: 3,
  sql: 'select @@GLOBAL.binlog_checksum as checksum' }

<-- ResultSetHeaderPacket
ResultSetHeaderPacket { fieldCount: 1, extra: undefined }

<-- FieldPacket
FieldPacket {
  catalog: 'def',
  db: '',
  table: '',
  orgTable: '',
  name: 'checksum',
  orgName: '',
  charsetNr: 33,
  length: 15,
  type: 253,
  flags: 0,
  decimals: 31,
  default: undefined,
  zeroFill: false,
  protocol41: true }

<-- EofPacket
EofPacket {
  fieldCount: 254,
  warningCount: 0,
  serverStatus: 2,
  protocol41: true }

<-- RowDataPacket
RowDataPacket { checksum: 'CRC32' }

<-- EofPacket
EofPacket {
  fieldCount: 254,
  warningCount: 0,
  serverStatus: 2,
  protocol41: true }

<-- HandshakeInitializationPacket
HandshakeInitializationPacket {
  protocolVersion: 10,
  serverVersion: '5.7.11-log',
  threadId: 179,
  scrambleBuff1: <Buffer 45 05 37 69 1c 73 3c 70>,
  filler1: <Buffer 00>,
  serverCapabilities1: 65535,
  serverLanguage: 8,
  serverStatus: 2,
  serverCapabilities2: 49663,
  scrambleLength: 21,
  filler2: <Buffer 00 00 00 00 00 00 00 00 00 00>,
  scrambleBuff2: <Buffer 1d 6b 1d 79 59 46 5a 16 10 1c 58 28>,
  filler3: <Buffer 00>,
  pluginData: 'mysql_native_password',
  protocol41: true }

--> ClientAuthenticationPacket
ClientAuthenticationPacket {
  clientFlags: 455631,
  maxPacketSize: 0,
  charsetNumber: 33,
  filler: undefined,
  user: 'tracker',
  scrambleBuff: <Buffer 42 ae a8 43 5b 8e 18 8d f2 d0 df 6b 6e ee 6e 3d c8 a4 3f f5>,
  database: undefined,
  protocol41: true }

<-- OkPacket
OkPacket {
  fieldCount: 0,
  affectedRows: 0,
  insertId: 0,
  serverStatus: 2,
  warningCount: 0,
  message: '',
  protocol41: true,
  changedRows: 0 }

--> ComQueryPacket
ComQueryPacket {
  command: 3,
  sql: 'set @master_binlog_checksum=@@global.binlog_checksum' }

<-- OkPacket
OkPacket {
  fieldCount: 0,
  affectedRows: 0,
  insertId: 0,
  serverStatus: 2,
  warningCount: 0,
  message: '',
  protocol41: true,
  changedRows: 0 }

--> ComQueryPacket
ComQueryPacket { command: 3, sql: 'SHOW BINARY LOGS' }

<-- ResultSetHeaderPacket
ResultSetHeaderPacket { fieldCount: 2, extra: undefined }

<-- FieldPacket
FieldPacket {
  catalog: 'def',
  db: '',
  table: '',
  orgTable: '',
  name: 'Log_name',
  orgName: '',
  charsetNr: 33,
  length: 765,
  type: 253,
  flags: 1,
  decimals: 31,
  default: undefined,
  zeroFill: false,
  protocol41: true }

<-- FieldPacket
FieldPacket {
  catalog: 'def',
  db: '',
  table: '',
  orgTable: '',
  name: 'File_size',
  orgName: '',
  charsetNr: 63,
  length: 20,
  type: 8,
  flags: 161,
  decimals: 0,
  default: undefined,
  zeroFill: false,
  protocol41: true }

<-- EofPacket
EofPacket {
  fieldCount: 254,
  warningCount: 0,
  serverStatus: 2,
  protocol41: true }

<-- RowDataPacket
RowDataPacket { Log_name: 'mysql-bin-changelog.000290', File_size: 30632 }

<-- RowDataPacket
RowDataPacket { Log_name: 'mysql-bin-changelog.000291', File_size: 37237 }

<-- EofPacket
EofPacket {
  fieldCount: 254,
  warningCount: 0,
  serverStatus: 2,
  protocol41: true }

--> ComBinlog
ComBinlog { command: 18, position: 4, flags: 0, serverId: 1, filename: '' }

<-- BinlogHeader
BinlogHeader {}

(the "<-- BinlogHeader" / "BinlogHeader {}" pair repeats a few hundred more times)

Got SIGINT.
--> ComQueryPacket
ComQueryPacket { command: 3, sql: 'KILL 179' }

Output format - JSON?

So when I call evt.dump(), I get output that looks like this:

Column: CLOSEOUT_DATE, Value: null => null
Column: TRANSACTION_TYPE_CODE, Value: 9 => 9
Column: NOTICE_DATE, Value: Mon Mar 18 1985 00:00:00 GMT-0700 (MST) => Mon Mar 18 1985 00:00:00 GMT-0700 (MST)
Column: LEAD_UNIT_NUMBER, Value: DPFRWS => DPFRWS
Column: ACTIVITY_TYPE_CODE, Value: 6 => 5
Column: AWARD_TYPE_CODE, Value: 5 => 5
Column: PRIME_SPONSOR_CODE, Value: null => null

Do I just need to parse it from that text, or can it be output as JSON? I want to get it as JSON, and also as a SQL statement. It seems weird to have to parse this text as it is (basically comma-delimited between column and value, with an arrow to show what a value changed to).
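There is no built-in JSON mode; dump() just logs text. A sketch of serializing the useful fields yourself, instead of parsing the dump output:

zongji.on('binlog', function(evt) {
  const name = evt.getEventName();
  if (['writerows', 'updaterows', 'deleterows'].indexOf(name) === -1) return;

  // Schema/table info is cached from the preceding tablemap event.
  const table = evt.tableMap[evt.tableId];
  console.log(JSON.stringify({
    event: name,
    timestamp: evt.timestamp,
    schema: table.parentSchema,
    table: table.tableName,
    rows: evt.rows // for updaterows, each entry holds the before and after values
  }));
});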

Cannot read property 'COLUMN_NAME' of undefined

=== TableMap ===
Date: Thu Jun 25 2015 05:38:05 GMT+0800 (CST)
Next log position: 72907
Event size: 39
Table id: 156
Schema: CCLC_PD
Table: autotask
Columns: 6
Column types: [ 8, 1, 8, 17, 1, 15 ]
undefined
Next position: 72907
15:15 CCLC_PD.autotask:tablemap
=== UpdateRows ===
Date: Thu Jun 25 2015 05:38:05 GMT+0800 (CST)
Next log position: 72989
Event size: 59
Affected columns: 6
Changed rows: 1

Values:

Column: id, Value: 9561 => 9561
Column: type, Value: 73 => 73
Column: recid, Value: 735 => 735
Column: execdate, Value: Thu Jun 25 2015 05:38:04 GMT+0800 (CST) => Thu Jun 25 2015 05:39:05 GMT+0800 (CST)
Column: state, Value: 33 => 2
Column: err, Value: null => null
undefined
Next position: 72989
15:15 CCLC_PD.autotask:updaterows
{ timestamp: 1435181925000,
nextPosition: 73661,
size: 61,
tableMap:
{ '156':
{ columnSchemas: [Object],
parentSchema: 'CCLC_PD',
tableName: 'autotask',
columns: [Object] },
'160':
{ columnSchemas: [Object],
parentSchema: 'CCLC_PD',
tableName: 'autoinvest_quene',
columns: [Object] },
'186':
{ columnSchemas: [Object],
parentSchema: 'CCLC_PD',
tableName: 'withdraw' } },
tableId: 186,
flags: 1,
schemaName: 'CCLC_PD',
tableName: 'withdraw',
columnCount: 15,
columnTypes: [ 8, 8, 8, 246, 1, 15, 15, 17, 17, 246, 1, 3, 17, 246, 15 ],
columnsMetadata:
[ undefined,
undefined,
undefined,
{ precision: 10, decimals: 2 },
undefined,
{ max_length: 900 },
{ max_length: 900 },
{ decimals: 0 },
{ decimals: 0 },
{ precision: 6, decimals: 2 },
undefined,
undefined,
{ decimals: 0 },
{ precision: 10, decimals: 2 },
{ max_length: 600 } ] }
/data/cclc/cclc-node2/node_modules/zongji/node_modules/mysql/lib/protocol/Parser.js:82
throw err;
^
TypeError: Cannot read property 'COLUMN_NAME' of undefined
at TableMap.updateColumnInfo (/data/cclc/cclc-node2/node_modules/zongji/lib/binlog_event.js:166:29)
at /data/cclc/cclc-node2/node_modules/zongji/index.js:185:19
at Query._callback (/data/cclc/cclc-node2/node_modules/zongji/index.js:159:5)
at Query.Sequence.end (/data/cclc/cclc-node2/node_modules/zongji/node_modules/mysql/lib/protocol/sequences/Sequence.js:96:24)
at Query._handleFinalResultPacket (/data/cclc/cclc-node2/node_modules/zongji/node_modules/mysql/lib/protocol/sequences/Query.js:143:8)
at Query.EofPacket (/data/cclc/cclc-node2/node_modules/zongji/node_modules/mysql/lib/protocol/sequences/Query.js:127:8)
at Protocol._parsePacket (/data/cclc/cclc-node2/node_modules/zongji/node_modules/mysql/lib/protocol/Protocol.js:271:23)
at Parser.write (/data/cclc/cclc-node2/node_modules/zongji/node_modules/mysql/lib/protocol/Parser.js:77:12)
at Protocol.write (/data/cclc/cclc-node2/node_modules/zongji/node_modules/mysql/lib/protocol/Protocol.js:39:16)
at Socket. (/data/cclc/cclc-node2/node_modules/zongji/node_modules/mysql/lib/Connection.js:82:28)

Running on Amazon RDS for MySQL

Will this work on RDS? I am trying to make it work, but it complains that my user doesn't have replication client privilege, and I don't think you can grant a user that role on RDS.

[question] Why do I get an error after following the installation steps?

Dear @nevill

I am using a Mac with MySQL 5.6. When I finish the second step (modifying my.cnf), I get this error:

Starting MySQL
. ERROR! The server quit without updating PID file (/usr/local/mysql/data/NiTu.pid).

When I delete these new lines, MySQL works again.

How to fix it?

By the way, I am trying to understand meteor-mysql in order to develop meteor-orientdb, and I opened an issue here: orientechnologies/orientdb#3602

I hope you can do me a favor. Would we have to reproduce zongji for OrientDB?

Thank you very much!
Happy Chinese New Year!

Make it possible to restart ZongJi on error

In #41 (comment), I suggested a method of making it possible to recover from errors and disconnections.

  • Make it possible to determine the current binlog filename
    • Currently, findBinlogEnd only determines the filename if the startAtEnd option is true. Otherwise, that value is an empty string. Either way, this value is not externally available the way the code is right now.
    • Binlog position is available as a property of each event. It seems like the optimal patch would simply add a property for the current binlog file name.
  • Make it possible to start reading binlog events from a specific filename and position

I'm not sure how much time I'll get to work on this soon, so if anybody wants to help look into what it will take to do this, that would be awesome.
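For reference, a sketch of the recovery pattern this issue asks for, written against the filename/position start options documented above (the retry policy and the rotate property name are illustrative):

let lastFile = null;
let lastPos = null;

function startReader(startOptions) {
  const zongji = new ZongJi({ /* ... MySQL Connection Settings ... */ });

  zongji.on('binlog', function(evt) {
    if (evt.getEventName() === 'rotate') {
      lastFile = evt.binlogName; // assumed property; verify against evt.dump()
    }
    lastPos = evt.nextPosition;
    // ... handle the event ...
  });

  zongji.on('error', function(err) {
    zongji.stop();
    // Resume from the last known coordinates after a short delay.
    setTimeout(function() {
      startReader(lastFile ? { filename: lastFile, position: lastPos }
                           : { startAtEnd: true });
    }, 1000);
  });

  zongji.start(Object.assign({
    includeEvents: ['rotate', 'tablemap', 'writerows', 'updaterows', 'deleterows']
  }, startOptions));
}

startReader({ startAtEnd: true });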

Will zongji support MariaDB?

Hi Team,

I am using the MariaDB that comes by default with CentOS 7. ZongJi is not reading events, but it works fine on another server that runs MySQL.

Thanks

Working on node 4.3?

When I run npm install, I see:

WARN engine [email protected]: wanted: {"node":"0.10"} (current: {"node":"4.3.1","npm":"2.14.12"})

Will zongji work on Node 4.3? Will it be upgraded soon?

Enhance to support DDL changes?

I don't see that zongji supports DDL events; is that right? How hard would it be to enhance it so it can emit ALTER statements?

Multiple constraints on a single column breaks parsing

If a column has multiple constraints, the parser will skip/offset columns by n-1, where n is the number of constraints on a given column.

Insert a row:

INSERT INTO test_table (first_name, last_name, sex, dob, nullable) VALUES('Frank', 'Lapidus', 'M','1952-11-29', 'not null');

Bad behavior:

{
   id: 'Frank', // This should be the first_name, so the columns are offset by 1
   first_name: 'Lapidus',
   last_name: 'ar(200', // this comes from the table definition varchar(200)
   sex: Sat Nov 29 1952 00:00:00 GMT-0500 (EST),
   dob: 'not null' 
}

Table definition with bad behavior:

CREATE TABLE IF NOT EXISTS `test_table` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `first_name` varchar(100) NOT NULL,
  `last_name` varchar(200) NOT NULL,
  `sex` enum('M','F') DEFAULT NULL,
  `dob` datetime DEFAULT NULL,
  `nullable` varchar(300) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

TableMap (unexpected behavior):

TableMap {
  timestamp: 1455855014000,
  nextPosition: 171939,
  size: 46,
  tableMap: 
   { '111': 
      { columnSchemas: [Object],
        parentSchema: 'sawyer',
        tableName: 'test_table',
        columns: [Object] } },
  tableId: 111,
  flags: 1,
  schemaName: 'sawyer',
  tableName: 'test_table',
  columnCount: 6,
  columnTypes: [ 3, 15, 15, 247, 18, 15 ],
  columnsMetadata: 
   [ undefined,
     { max_length: 300 },
     { max_length: 600 },
     { size: 1 },
     { decimals: 0 },
     { max_length: 900 } ] }
{
        "columnSchemas": [
                {
                        "COLUMN_NAME": "id",
                        "COLLATION_NAME": null,
                        "CHARACTER_SET_NAME": null,
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "int(11)",
                        "IS_NULLABLE": "NO",
                        "CONSTRAINT_TYPE": "PRIMARY KEY",
                        "ORDINAL_POSITION": 1,
                        "CONSTRAINT_NAME": "PRIMARY"
                },
                {
                        "COLUMN_NAME": "id",
                        "COLLATION_NAME": null,
                        "CHARACTER_SET_NAME": null,
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "int(11)",
                        "IS_NULLABLE": "NO",
                        "CONSTRAINT_TYPE": "UNIQUE",
                        "ORDINAL_POSITION": 1,
                        "CONSTRAINT_NAME": "id_UNIQUE"
                },
                {
                        "COLUMN_NAME": "first_name",
                        "COLLATION_NAME": "utf8_general_ci",
                        "CHARACTER_SET_NAME": "utf8",
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "varchar(100)",
                        "IS_NULLABLE": "NO",
                        "CONSTRAINT_TYPE": null,
                        "ORDINAL_POSITION": null,
                        "CONSTRAINT_NAME": null
                },
                {
                        "COLUMN_NAME": "last_name",
                        "COLLATION_NAME": "utf8_general_ci",
                        "CHARACTER_SET_NAME": "utf8",
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "varchar(200)",
                        "IS_NULLABLE": "NO",
                        "CONSTRAINT_TYPE": null,
                        "ORDINAL_POSITION": null,
                        "CONSTRAINT_NAME": null
                },
                {
                        "COLUMN_NAME": "sex",
                        "COLLATION_NAME": "utf8_general_ci",
                        "CHARACTER_SET_NAME": "utf8",
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "enum('M','F')",
                        "IS_NULLABLE": "YES",
                        "CONSTRAINT_TYPE": null,
                        "ORDINAL_POSITION": null,
                        "CONSTRAINT_NAME": null
                },
                {
                        "COLUMN_NAME": "dob",
                        "COLLATION_NAME": null,
                        "CHARACTER_SET_NAME": null,
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "datetime",
                        "IS_NULLABLE": "YES",
                        "CONSTRAINT_TYPE": null,
                        "ORDINAL_POSITION": null,
                        "CONSTRAINT_NAME": null
                },
                {
                        "COLUMN_NAME": "nullable",
                        "COLLATION_NAME": "utf8_general_ci",
                        "CHARACTER_SET_NAME": "utf8",
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "varchar(300)",
                        "IS_NULLABLE": "YES",
                        "CONSTRAINT_TYPE": null,
                        "ORDINAL_POSITION": null,
                        "CONSTRAINT_NAME": null
                }
        ],
        "parentSchema": "sawyer",
        "tableName": "test_table",
        "columns": [
                {
                        "name": "id",
                        "charset": null,
                        "type": 3,
                        "nullable": false,
                        "constraint": {
                                "type": "PRIMARY KEY",
                                "position": 1,
                                "name": "PRIMARY"
                        }
                },
                {
                        "name": "id",
                        "charset": null,
                        "type": 15,
                        "nullable": false,
                        "metadata": {
                                "max_length": 300
                        },
                        "constraint": {
                                "type": "UNIQUE",
                                "position": 1,
                                "name": "id_UNIQUE"
                        }
                },
                {
                        "name": "first_name",
                        "charset": "utf8",
                        "type": 15,
                        "nullable": false,
                        "metadata": {
                                "max_length": 600
                        }
                },
                {
                        "name": "last_name",
                        "charset": "utf8",
                        "type": 247,
                        "nullable": false,
                        "metadata": {
                                "size": 1
                        }
                },
                {
                        "name": "sex",
                        "charset": "utf8",
                        "type": 18,
                        "nullable": true,
                        "metadata": {
                                "decimals": 0
                        }
                },
                {
                        "name": "dob",
                        "charset": null,
                        "type": 15,
                        "nullable": true,
                        "metadata": {
                                "max_length": 900
                        }
                }
        ],
        "pk": "id",
        "constraints": {
                "PRIMARY": [
                        "id"
                ],
                "id_UNIQUE": [
                        "id"
                ]
        }
}
WriteRows {
  timestamp: 1455855014000,
  nextPosition: 172005,
  size: 47,
  _zongji: 
   ZongJi {
     options: { includeEvents: [Object], startAtEnd: true, serverId: 1 },
     domain: null,
     _events: { error: [Object], binlog: [Function: bound _binLogHandler] },
     _eventsCount: 2,
     _maxListeners: undefined,
     ctrlConnection: 
      Connection {
        domain: null,
        _events: [Object],
        _eventsCount: 3,
        _maxListeners: undefined,
        config: [Object],
        _socket: [Object],
        _protocol: [Object],
        _connectCalled: true,
        state: 'authenticated',
        threadId: 348 },
     ctrlCallbacks: [ [Function] ],
     connection: 
      Connection {
        domain: null,
        _events: {},
        _eventsCount: 0,
        _maxListeners: undefined,
        config: [Object],
        _socket: [Object],
        _protocol: [Object],
        _connectCalled: true,
        state: 'authenticated',
        threadId: 349 },
     tableMap: { '111': [Object] },
     ready: true,
     useChecksum: false,
     binlog: { [Function: Binlog] super_: [Object] } },
  tableId: 111,
  flags: 1,
  useChecksum: false,
  numberOfColumns: 6,
  tableMap: 
   { '111': 
      { columnSchemas: [Object],
        parentSchema: 'sawyer',
        tableName: 'test_table',
        columns: [Object],
        pk: 'id',
        constraints: [Object] } }

Now if we simply define the table without an additional unique constraint, the parser behaves as expected.

Table with correct behavior:

CREATE TABLE IF NOT EXISTS `test_table` (
  `id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `first_name` varchar(100) NOT NULL,
  `last_name` varchar(200) NOT NULL,
  `sex` enum('M','F') DEFAULT NULL,
  `dob` datetime DEFAULT NULL,
  `nullable` varchar(300) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

RowEvent (good behavior):

{  id: 1,
   first_name: 'Frank',
   last_name: 'Lapidus',
   sex: 'M',
   dob: Sat Nov 29 1952 00:00:00 GMT-0500 (EST),
   nullable: 'not null'
}

TableMap (good behavior):

TableMap {
  timestamp: 1455855463000,
  nextPosition: 177310,
  size: 46,
  tableMap: 
   { '113': 
      { columnSchemas: [Object],
        parentSchema: 'sawyer',
        tableName: 'test_table',
        columns: [Object] } },
  tableId: 113,
  flags: 1,
  schemaName: 'sawyer',
  tableName: 'test_table',
  columnCount: 6,
  columnTypes: [ 3, 15, 15, 247, 18, 15 ],
  columnsMetadata: 
   [ undefined,
     { max_length: 300 },
     { max_length: 600 },
     { size: 1 },
     { decimals: 0 },
     { max_length: 900 } ] }
{
        "columnSchemas": [
                {
                        "COLUMN_NAME": "id",
                        "COLLATION_NAME": null,
                        "CHARACTER_SET_NAME": null,
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "int(11)",
                        "IS_NULLABLE": "NO",
                        "CONSTRAINT_TYPE": "PRIMARY KEY",
                        "ORDINAL_POSITION": 1,
                        "CONSTRAINT_NAME": "PRIMARY"
                },
                {
                        "COLUMN_NAME": "first_name",
                        "COLLATION_NAME": "utf8_general_ci",
                        "CHARACTER_SET_NAME": "utf8",
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "varchar(100)",
                        "IS_NULLABLE": "NO",
                        "CONSTRAINT_TYPE": null,
                        "ORDINAL_POSITION": null,
                        "CONSTRAINT_NAME": null
                },
                {
                        "COLUMN_NAME": "last_name",
                        "COLLATION_NAME": "utf8_general_ci",
                        "CHARACTER_SET_NAME": "utf8",
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "varchar(200)",
                        "IS_NULLABLE": "NO",
                        "CONSTRAINT_TYPE": null,
                        "ORDINAL_POSITION": null,
                        "CONSTRAINT_NAME": null
                },
                {
                        "COLUMN_NAME": "sex",
                        "COLLATION_NAME": "utf8_general_ci",
                        "CHARACTER_SET_NAME": "utf8",
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "enum('M','F')",
                        "IS_NULLABLE": "YES",
                        "CONSTRAINT_TYPE": null,
                        "ORDINAL_POSITION": null,
                        "CONSTRAINT_NAME": null
                },
                {
                        "COLUMN_NAME": "dob",
                        "COLLATION_NAME": null,
                        "CHARACTER_SET_NAME": null,
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "datetime",
                        "IS_NULLABLE": "YES",
                        "CONSTRAINT_TYPE": null,
                        "ORDINAL_POSITION": null,
                        "CONSTRAINT_NAME": null
                },
                {
                        "COLUMN_NAME": "nullable",
                        "COLLATION_NAME": "utf8_general_ci",
                        "CHARACTER_SET_NAME": "utf8",
                        "COLUMN_COMMENT": "",
                        "COLUMN_TYPE": "varchar(300)",
                        "IS_NULLABLE": "YES",
                        "CONSTRAINT_TYPE": null,
                        "ORDINAL_POSITION": null,
                        "CONSTRAINT_NAME": null
                }
        ],
        "parentSchema": "sawyer",
        "tableName": "test_table",
        "columns": [
                {
                        "name": "id",
                        "charset": null,
                        "type": 3,
                        "nullable": false,
                        "constraint": {
                                "type": "PRIMARY KEY",
                                "position": 1,
                                "name": "PRIMARY"
                        }
                },
                {
                        "name": "first_name",
                        "charset": "utf8",
                        "type": 15,
                        "nullable": false,
                        "metadata": {
                                "max_length": 300
                        }
                },
                {
                        "name": "last_name",
                        "charset": "utf8",
                        "type": 15,
                        "nullable": false,
                        "metadata": {
                                "max_length": 600
                        }
                },
                {
                        "name": "sex",
                        "charset": "utf8",
                        "type": 247,
                        "nullable": true,
                        "metadata": {
                                "size": 1
                        }
                },
                {
                        "name": "dob",
                        "charset": null,
                        "type": 18,
                        "nullable": true,
                        "metadata": {
                                "decimals": 0
                        }
                },
                {
                        "name": "nullable",
                        "charset": "utf8",
                        "type": 15,
                        "nullable": true,
                        "metadata": {
                                "max_length": 900
                        }
                }
        ],
        "pk": "id",
        "constraints": {
                "PRIMARY": [
                        "id"
                ]
        }
}
WriteRows {
  timestamp: 1455855463000,
  nextPosition: 177376,
  size: 47,
  _zongji: 
   ZongJi {
     options: { includeEvents: [Object], startAtEnd: true, serverId: 1 },
     domain: null,
     _events: { error: [Object], binlog: [Function: bound _binLogHandler] },
     _eventsCount: 2,
     _maxListeners: undefined,
     ctrlConnection: 
      Connection {
        domain: null,
        _events: [Object],
        _eventsCount: 3,
        _maxListeners: undefined,
        config: [Object],
        _socket: [Object],
        _protocol: [Object],
        _connectCalled: true,
        state: 'authenticated',
        threadId: 350 },
     ctrlCallbacks: [ [Function] ],
     connection: 
      Connection {
        domain: null,
        _events: {},
        _eventsCount: 0,
        _maxListeners: undefined,
        config: [Object],
        _socket: [Object],
        _protocol: [Object],
        _connectCalled: true,
        state: 'authenticated',
        threadId: 351 },
     tableMap: { '113': [Object] },
     ready: true,
     useChecksum: false,
     binlog: { [Function: Binlog] super_: [Object] } },
  tableId: 113,
  flags: 1,
  useChecksum: false,
  numberOfColumns: 6,
  tableMap: 
   { '113': 
      { columnSchemas: [Object],
        parentSchema: 'sawyer',
        tableName: 'test_table',
        columns: [Object],
        pk: 'id',
        constraints: [Object] } },
  rows: 
   [ { id: 1,
       first_name: 'Frank',
       last_name: 'Lapidus',
       sex: 'M',
       dob: Sat Nov 29 1952 00:00:00 GMT-0500 (EST),
       nullable: 'not null' } ] }
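The root cause above is that the INFORMATION_SCHEMA query joins against the constraint tables, so a column with two constraints appears twice in columnSchemas and shifts every later column. A sketch of one possible fix, deduplicating the result set by COLUMN_NAME before building the column map:

// Keep one schema row per COLUMN_NAME; fold constraint names from duplicate
// rows into a list on the surviving row instead of adding extra columns.
function dedupeColumnSchemas(rows) {
  const byName = new Map();
  for (const row of rows) {
    if (!byName.has(row.COLUMN_NAME)) {
      byName.set(row.COLUMN_NAME, Object.assign({}, row, { constraintNames: [] }));
    }
    if (row.CONSTRAINT_NAME) {
      byName.get(row.COLUMN_NAME).constraintNames.push(row.CONSTRAINT_NAME);
    }
  }
  return Array.from(byName.values());
}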

Once started, setting the 'includeSchema' option does not behave as expected

Once the table is mapped (after listening to a table event), the rows event is always fired...

I traced it to /lib/rows_event.js L:41; adding the OR condition below fixes this issue, but I'm not sure if it's the right place to do it:

if (tableData === undefined ||
    zongji._skipSchema(tableData.parentSchema, tableData.tableName)) {
  // TableMap event was filtered
  parser._offset = parser._packetEnd;
  this._filtered = true;
}

Travis CI Build no longer succeeds

The automated build for @vlasky's PR to add the JSON datatype from MySQL 5.7 has failed, but not due to his changes. Travis CI no longer has the libaio1 or libaio-dev packages available, which are required to run the MySQL server from the tarball as it is currently configured. At this point in time, the Travis CI build will fail even when run against master, for this same reason.

I tried switching the tests to run using the Ubuntu Trusty (14.04) build environment, (it was the default, Precise, 12.04) but the packages are still not available.

After this roadblock, I found that there is an official repository for MySQL server DEB packages. There is an in-progress commit with currently failing test results. It gets to the point of running npm test, but it's not clear why it fails. The next step will be to enable more logging on the test suite to see where it's failing.

The trouble with this approach is that these official repos only contain packages for MySQL 5.6 and 5.7. We will lose the test runner for MySQL 5.1 and 5.5 unless there's some way to get the libaio1 and libaio-dev packages installed.

I've got to sign off right now but I will keep trying to get this fixed as I have the time.

Support for MySQL Compressed Protocol

Zongji should support the MySQL compressed protocol when connecting to a MySQL database.

This greatly reduces bandwidth usage when Zongji is connecting over a network to a remote MySQL server as the binlog is a continuous stream and contains lots of redundant information, so it compresses very well.

This is not supported right now because Zongji relies on mysqljs:mysql which does not support the compressed protocol.

The newer, faster and almost 100% compatible library sidorares:node-mysql2 does support the compressed protocol, as well as SSL and other cool things.

Unfortunately, the current Zongji code is not compatible because of this line in lib/json_decode.js:

var Parser = require('mysql/lib/protocol/Parser');

The file mysql/lib/protocol/Parser.js that contains the definition of that Parser object does not exist in sidorares:node-mysql2.

A simple workaround may be to include both mysqljs:mysql and sidorares:node-mysql2 but this would not be elegant.

TypeError: Cannot read property 'COLUMN_NAME' of undefined

The example works without problems on OSX but on Ubuntu I get the following error:

/root/livesql/node_modules/zongji/node_modules/mysql/lib/protocol/Parser.js:82
        throw err;
              ^
TypeError: Cannot read property 'COLUMN_NAME' of undefined
    at TableMap.updateColumnInfo (/root/livesql/node_modules/zongji/lib/binlog_event.js:164:29)
    at /root/livesql/node_modules/zongji/index.js:185:19
    at Query._callback (/root/livesql/node_modules/zongji/index.js:159:5)
    at Query.Sequence.end (/root/livesql/node_modules/zongji/node_modules/mysql/lib/protocol/sequences/Sequence.js:96:24)
    at Query._handleFinalResultPacket (/root/livesql/node_modules/zongji/node_modules/mysql/lib/protocol/sequences/Query.js:143:8)
    at Query.EofPacket (/root/livesql/node_modules/zongji/node_modules/mysql/lib/protocol/sequences/Query.js:127:8)
    at Protocol._parsePacket (/root/livesql/node_modules/zongji/node_modules/mysql/lib/protocol/Protocol.js:271:23)
    at Parser.write (/root/livesql/node_modules/zongji/node_modules/mysql/lib/protocol/Parser.js:77:12)
    at Protocol.write (/root/livesql/node_modules/zongji/node_modules/mysql/lib/protocol/Protocol.js:39:16)
    at Socket.<anonymous> (/root/livesql/node_modules/zongji/node_modules/mysql/lib/Connection.js:82:28)

Here is the MySQL version on Ubuntu.

> mysql --version
mysql  Ver 14.14 Distrib 5.5.44, for debian-linux-gnu (x86_64) using readline 6.3

The server closed the connection.

I want to use the latest version of zongji, but I can see the last release was already half a year ago.
Are there any plans to release a new version? I keep getting "The server closed the connection." with version 0.4.4.

Assumptions about order of tablemap

So I am getting messages and I see that CRUD operations are always preceded by the tablemap message. Is there any link (key) between the CRUD operation message and the tablemap message, or do I just need to assume that the order is guaranteed, and that the message that precedes a CRUD operation is always the tablemap message it corresponds to? I just want to make sure my assumptions are right.
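
A sketch of making the link explicit instead of relying on ordering alone, assuming a started zongji instance; getEventName() and the tableId/schemaName/tableName properties follow zongji's example.js and the object dumps shown in other issues here:

var tableMaps = {};

zongji.on('binlog', function(evt) {
  var name = evt.getEventName();

  if (name === 'tablemap') {
    // Remember the most recent table definition, keyed by tableId
    tableMaps[evt.tableId] = { schema: evt.schemaName, table: evt.tableName };
  } else if (['writerows', 'updaterows', 'deleterows'].indexOf(name) !== -1) {
    // The rows event carries the same tableId, so it resolves by key
    var map = tableMaps[evt.tableId];
    if (map) console.log(map.schema + '.' + map.table, 'rows:', evt.rows.length);
  }
});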

Thanks.

Cannot read property 'COLUMN_NAME' of undefined

Hey,

I have a repeatable issue where tableMap.columnSchemas is always empty. I have dumped out the TableMap instance just before the error occurs.

The error log:

/home/rpitt/sync/MySQLSubscription/node_modules/zongji/node_modules/mysql/lib/protocol/Parser.js:82
        throw err;
              ^
TypeError: Cannot read property 'COLUMN_NAME' of undefined
    at TableMap.updateColumnInfo (/home/rpitt/sync/MySQLSubscription/node_modules/zongji/lib/binlog_event.js:165:29)
    at /home/rpitt/sync/MySQLSubscription/node_modules/zongji/index.js:185:19
    at Query._callback (/home/rpitt/sync/MySQLSubscription/node_modules/zongji/index.js:159:5)
    at Query.Sequence.end (/home/rpitt/sync/MySQLSubscription/node_modules/zongji/node_modules/mysql/lib/protocol/sequences/Sequence.js:96:24)
    at Query._handleFinalResultPacket (/home/rpitt/sync/MySQLSubscription/node_modules/zongji/node_modules/mysql/lib/protocol/sequences/Query.js:143:8)
    at Query.EofPacket (/home/rpitt/sync/MySQLSubscription/node_modules/zongji/node_modules/mysql/lib/protocol/sequences/Query.js:127:8)
    at Protocol._parsePacket (/home/rpitt/sync/MySQLSubscription/node_modules/zongji/node_modules/mysql/lib/protocol/Protocol.js:271:23)
    at Parser.write (/home/rpitt/sync/MySQLSubscription/node_modules/zongji/node_modules/mysql/lib/protocol/Parser.js:77:12)
    at Protocol.write (/home/rpitt/sync/MySQLSubscription/node_modules/zongji/node_modules/mysql/lib/protocol/Protocol.js:39:16)
    at Socket.<anonymous> (/home/rpitt/sync/MySQLSubscription/node_modules/zongji/node_modules/mysql/lib/Connection.js:82:28)

The object:

{ timestamp: 1432656155000,
  nextPosition: 83341787,
  size: 42,
  tableMap: 
   { '15450': 
      { columnSchemas: [],
        parentSchema: 'platform',
        tableName: 'sessions' } },
  tableId: 15450,
  flags: 1,
  schemaName: 'platform',
  tableName: 'sessions',
  columnCount: 5,
  columnTypes: [ 254, 15, 3, 252, 17 ],
  columnsMetadata: 
   [ { max_length: 96 },
     { max_length: 96 },
     undefined,
     { length_size: 2 },
     { decimals: 0 } ] }

The schema:

CREATE TABLE `sessions` (
  `id` char(32) COLLATE utf8_bin NOT NULL,
  `name` varchar(32) COLLATE utf8_bin NOT NULL DEFAULT 'default',
  `account_id` int(11) DEFAULT NULL,
  `data` blob,
  `updated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `updated` (`updated`),
  KEY `sess_to_acc` (`account_id`),
  CONSTRAINT `session_to_acc` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin

Regards.

After restarting MySQL, RangeError: index out of range is raised

F:\projects\nodejs\mysql_replication\node_modules\mysql\lib\protocol\Parser.js:78
        throw err; // Rethrow non-MySQL errors
        ^

RangeError: index out of range
    at checkOffset (buffer.js:696:11)
    at Buffer.readUInt8 (buffer.js:734:5)
    at readRow (F:\projects\nodejs\mysql_replication\node_modules\zongji\lib\rows_event.js:111:47)
    at WriteRows.RowsEvent._fetchOneRow (F:\projects\nodejs\mysql_replication\node_modules\zongji\lib\rows_event.js:95:10)
    at WriteRows.RowsEvent (F:\projects\nodejs\mysql_replication\node_modules\zongji\lib\rows_event.js:64:27)
    at new WriteRows (F:\projects\nodejs\mysql_replication\node_modules\zongji\lib\rows_event.js:126:13)
    at BinlogHeader.parse (F:\projects\nodejs\mysql_replication\node_modules\zongji\lib\packet\binlog_header.js:47:23)
    at Protocol._parsePacket (F:\projects\nodejs\mysql_replication\node_modules\mysql\lib\protocol\Protocol.js:262:12)
    at Parser.write (F:\projects\nodejs\mysql_replication\node_modules\mysql\lib\protocol\Parser.js:74:12)
    at Protocol.write (F:\projects\nodejs\mysql_replication\node_modules\mysql\lib\protocol\Protocol.js:39:16)

In the function readRow, parser._offset is greater than parser._buffer.length. I don't know how to fix this. One possible defensive guard is sketched after the excerpt below.

var readRow = function(tableMap, parser, emitter) {
  var row = {}, column, columnSchema;
  // The row starts with a null bitmap: one bit per column, rounded up to bytes
  var nullBitmapSize = Math.floor((tableMap.columns.length + 7) / 8);
  var nullBuffer = parser._buffer.slice(parser._offset,
                                        parser._offset + nullBitmapSize);
  var curNullByte, curBit;
  parser._offset += nullBitmapSize;

  for (var i = 0; i < tableMap.columns.length; i++) {
    curBit = i % 8;
    if (curBit === 0) curNullByte = nullBuffer.readUInt8(Math.floor(i / 8));
    column = tableMap.columns[i];
    columnSchema = tableMap.columnSchemas[i];
    if ((curNullByte & (1 << curBit)) === 0) {
      // Null bit clear: a value is present, decode it according to column type
      row[column.name] =
        Common.readMysqlValue(parser, column, columnSchema, tableMap, emitter);
    } else {
      row[column.name] = null;
    }
  }
  return row;
};
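
A minimal sketch of a defensive guard; the helper name is hypothetical, zongji has no such function:

// Check remaining bytes before readRow slices the null bitmap, so a truncated
// packet fails with a descriptive error instead of a bare RangeError from
// Buffer.readUInt8 deep inside the parser.
function assertBytesAvailable(parser, needed) {
  var remaining = parser._buffer.length - parser._offset;
  if (remaining < needed) {
    throw new Error('Truncated row event: need ' + needed +
      ' byte(s), only ' + remaining + ' available');
  }
}

Calling assertBytesAvailable(parser, nullBitmapSize) at the top of readRow would surface the truncation clearly; it does not fix the underlying cause, which looks like a packet cut short by the server restart.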

Null value in JSON column causing buffer RangeError

When we insert a new row into a table with a JSON column that is empty or null, we get an error like this that causes MeteorJS to exit:

W20160505-15:47:28.917(10)? (STDERR) RangeError: Trying to access beyond buffer length
W20160505-15:47:28.918(10)? (STDERR)     at checkOffset (buffer.js:582:11)
W20160505-15:47:28.918(10)? (STDERR)     at Buffer.readUInt8 (buffer.js:588:5)
W20160505-15:47:28.918(10)? (STDERR)     at parseBinaryBuffer (C:\Users\developer\AppData\Local\.meteor\packages\wj32_mysql\1.1.9\npm\node_modules\mysql-live-select\node_modules\zongji\lib\json_decode.js:55:24)
W20160505-15:47:28.918(10)? (STDERR)     at module.exports (C:\Users\developer\AppData\Local\.meteor\packages\wj32_mysql\1.1.9\npm\node_modules\mysql-live-select\node_modules\zongji\lib\json_decode.js:28:25)
W20160505-15:47:28.919(10)? (STDERR)     at Object.exports.readMysqlValue (C:\Users\developer\AppData\Local\.meteor\packages\wj32_mysql\1.1.9\npm\node_modules\mysql-live-select\node_modules\zongji\lib\common.js:446:16)
W20160505-15:47:28.919(10)? (STDERR)     at UpdateRows._fetchOneRow (C:\Users\developer\AppData\Local\.meteor\packages\wj32_mysql\1.1.9\npm\node_modules\mysql-live-select\node_modules\zongji\lib\rows_event.js:144:13)
W20160505-15:47:28.919(10)? (STDERR)     at readRow (C:\Users\developer\AppData\Local\.meteor\packages\wj32_mysql\1.1.9\npm\node_modules\mysql-live-select\node_modules\zongji\lib\rows_event.js:113:16)
W20160505-15:47:28.919(10)? (STDERR)     at UpdateRows.RowsEvent (C:\Users\developer\AppData\Local\.meteor\packages\wj32_mysql\1.1.9\npm\node_modules\mysql-live-select\node_modules\zongji\lib\rows_event.js:64:27)
W20160505-15:47:28.919(10)? (STDERR)     at new UpdateRows (C:\Users\developer\AppData\Local\.meteor\packages\wj32_mysql\1.1.9\npm\node_modules\mysql-live-select\node_modules\zongji\lib\rows_event.js:136:13)
W20160505-15:47:28.919(10)? (STDERR)     at BinlogHeader.parse (C:\Users\developer\AppData\Local\.meteor\packages\wj32_mysql\1.1.9\npm\node_modules\mysql-live-select\node_modules\zongji\lib\packet\binlog_header.js:47:23)
=> Exited with code: 8

I believe the problem lies in line 55 of the function parseBinaryBuffer() in /lib/json_decode.js:

var jsonType = input.readUInt8(offset);

There are no prior checks to ensure that input is not empty before executing this line of code. I would fix this by inserting the following check on the line just above:

if (input.length === 0) return null;

Index out of range error on running tests

Tests are failing with an index out of range error.

Update: the same error occurs when running example.js (node example.js):

=== TableMap ===
Date: Wed Dec 28 2016 12:04:48 GMT+0530 (IST)
Next log position: 6376
Event size: 56
Table id: 113
Schema: test
Table: testTable
Columns: 12
Column types: [ 3, 15, 3, 3, 15, 16, 15, 3, 3, 17, 3, 17 ]
app/node_modules/zongji/node_modules/mysql/lib/protocol/Parser.js:77
        throw err; // Rethrow non-MySQL errors
        ^
RangeError: index out of range
    at checkOffset (buffer.js:642:11)
    at Buffer.readInt32BE (buffer.js:807:5)
    at readIntBE (app/node_modules/zongji/lib/common.js:206:24)
    at Object.exports.readMysqlValue (app/node_modules/zongji/lib/common.js:522:21)
    at readRow (app/node_modules/zongji/lib/rows_event.js:113:16)
    at WriteRows.RowsEvent._fetchOneRow (app/node_modules/zongji/lib/rows_event.js:95:10)
    at WriteRows.RowsEvent (app/node_modules/zongji/lib/rows_event.js:64:27)
    at new WriteRows (app/node_modules/zongji/lib/rows_event.js:123:13)
    at BinlogHeader.parse (app/node_modules/zongji/lib/packet/binlog_header.js:47:23)
    at Protocol._parsePacket (app/node_modules/zongji/node_modules/mysql/lib/protocol/Protocol.js:262:12)

Tests are not passing

When I try to run npm test, it fails:

nevill@mba-nevill zongji(develop *) $ npm test

> [email protected] test /Users/nevill/working/mysql-binlog/zongji
> nodeunit --reporter=minimal test

events:
/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/Parser.js:82
        throw err;
              ^
TypeError: Cannot read property 'rows' of undefined
    at /Users/nevill/working/mysql-binlog/zongji/test/events.js:73:29
    at Query._callback (/Users/nevill/working/mysql-binlog/zongji/test/helpers/querySequence.js:22:11)
    at Query.Sequence.end (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/sequences/Sequence.js:96:24)
    at Query._handleFinalResultPacket (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/sequences/Query.js:143:8)
    at Query.OkPacket (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/sequences/Query.js:77:10)
    at Protocol._parsePacket (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/Protocol.js:271:23)
    at Parser.write (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/Parser.js:77:12)
    at Protocol.write (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/Protocol.js:39:16)
    at Socket.<anonymous> (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/Connection.js:82:28)
    at Socket.emit (events.js:95:17)
npm ERR! Test failed.  See above for more details.
npm ERR! not ok code 0

I also try to run them one by one. For example,

nevill@mba-nevill zongji(develop *) $ node_modules/.bin/nodeunit test/types.js

types.js
✖ set

AssertionError: 2 === 0
    at Object.strictEqual (/Users/nevill/working/mysql-binlog/zongji/node_modules/nodeunit/lib/types.js:83:39)
    at module.exports (/Users/nevill/working/mysql-binlog/zongji/test/helpers/expectEvents.js:10:8)
    at /Users/nevill/working/mysql-binlog/zongji/test/types.js:73:7
    at Query._callback (/Users/nevill/working/mysql-binlog/zongji/test/helpers/querySequence.js:22:11)
    at Query.Sequence.end (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/sequences/Sequence.js:96:24)
    at Query._handleFinalResultPacket (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/sequences/Query.js:143:8)
    at Query.EofPacket (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/sequences/Query.js:127:8)
    at Protocol._parsePacket (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/Protocol.js:271:23)
    at Parser.write (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/Parser.js:77:12)
    at Protocol.write (/Users/nevill/working/mysql-binlog/zongji/node_modules/mysql/lib/protocol/Protocol.js:39:16)

@numtel Is there anything missing? Or does the MySQL version matter? (I'm testing against 5.6.13.)

[Question] Zongji Usage on Slave

I'm trying to find a way to get real-time row events from a slave's relay log. Due to third party access controls, I do not have the ability to connect directly to the master.

Looking at the code for this library, the binlogName is derived from SHOW BINARY LOGS which in my case will not contain the events being replicated by the master, but only events initiated on the slave. All of the master replication row events are stored in the *-relay-log.00000X files on the slave server.

I'm thinking that because the relay log is in the same format as the binary log, it should be parsable by zongji with a bit of tweaking. i.e. specifying a configuration option for SLAVE and then retrieving the binLogName from SHOW SLAVE STATUS.

My question: is there something inherently wrong with this approach? Is my understanding of the MASTER/SLAVE and their associated binlog files correct? I'm relatively new to MySQL and want to make sure I'm not missing something obvious.

I'd be happy to submit a PR if my understanding of the problem is correct.
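
A rough sketch of where the coordinates would come from (a plain mysql client query; the SLAVE configuration option itself does not exist in zongji yet):

var mysql = require('mysql');

var connection = mysql.createConnection({ /* slave connection settings */ });

// SHOW SLAVE STATUS reports both the local relay log coordinates and the
// master binlog coordinates the slave has executed so far.
connection.query('SHOW SLAVE STATUS', function(err, rows) {
  if (err) throw err;
  var status = rows[0];
  console.log('Relay log:', status.Relay_Log_File, status.Relay_Log_Pos);
  console.log('Executed master position:',
    status.Relay_Master_Log_File, status.Exec_Master_Log_Pos);
});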

The version on npm is not usable

Please publish a working version to npm as soon as possible; for now I can only reference the source code directly. Version 0.2.0 on npm fails to start.

RangeError: Trying to access beyond buffer length

Every now and again I get this parsing error. I thought I would send the trace in case it's useful to you.

W20150522-13:48:31.186(1)? (STDERR) 
W20150522-13:48:31.187(1)? (STDERR) /home/rpitt/sync/.meteor/local/isopacks/sync_tickets/npm/node_modules/zongji/node_modules/mysql/lib/protocol/Parser.js:82
W20150522-13:48:31.187(1)? (STDERR)         throw err;
W20150522-13:48:31.187(1)? (STDERR)               ^
W20150522-13:48:31.191(1)? (STDERR) RangeError: Trying to access beyond buffer length
W20150522-13:48:31.191(1)? (STDERR)     at checkOffset (buffer.js:582:11)
W20150522-13:48:31.191(1)? (STDERR)     at Buffer.readInt8 (buffer.js:631:5)
W20150522-13:48:31.191(1)? (STDERR)     at parseNewDecimal (/home/rpitt/sync/.meteor/local/isopacks/sync_tickets/npm/node_modules/zongji/lib/common.js:227:28)
W20150522-13:48:31.191(1)? (STDERR)     at Object.exports.readMysqlValue (/home/rpitt/sync/.meteor/local/isopacks/sync_tickets/npm/node_modules/zongji/lib/common.js:382:16)
W20150522-13:48:31.192(1)? (STDERR)     at readRow (/home/rpitt/sync/.meteor/local/isopacks/sync_tickets/npm/node_modules/zongji/lib/rows_event.js:111:33)
W20150522-13:48:31.192(1)? (STDERR)     at WriteRows.RowsEvent._fetchOneRow (/home/rpitt/sync/.meteor/local/isopacks/sync_tickets/npm/node_modules/zongji/lib/rows_event.js:94:10)
W20150522-13:48:31.192(1)? (STDERR)     at WriteRows.RowsEvent (/home/rpitt/sync/.meteor/local/isopacks/sync_tickets/npm/node_modules/zongji/lib/rows_event.js:63:27)
W20150522-13:48:31.192(1)? (STDERR)     at new WriteRows (/home/rpitt/sync/.meteor/local/isopacks/sync_tickets/npm/node_modules/zongji/lib/rows_event.js:121:13)
W20150522-13:48:31.192(1)? (STDERR)     at BinlogHeader.parse (/home/rpitt/sync/.meteor/local/isopacks/sync_tickets/npm/node_modules/zongji/lib/packet/binlog_header.js:47:23)
W20150522-13:48:31.193(1)? (STDERR)     at Protocol._parsePacket (/home/rpitt/sync/.meteor/local/isopacks/sync_tickets/npm/node_modules/zongji/node_modules/mysql/lib/protocol/Protocol.js:253:12)

leaking errors

Am I doing something wrong?

var Zongji = require("zongji");

try {
    var zongji = new Zongji();
} catch (e) {
    console.log(e);
    console.log("this should be last line");
}
❱ node hmm1.js 
[TypeError: Cannot read property 'host' of undefined]
this should be last line
events.js:154
      throw er; // Unhandled 'error' event
      ^

Error: connect ECONNREFUSED 127.0.0.1:3306
    at Object.exports._errnoException (util.js:856:11)
    at exports._exceptionWithHostPort (util.js:879:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1053:14)
    --------------------
    at Protocol._enqueue (/data/projects/nodejs/hmm/node_modules/mysql/lib/protocol/Protocol.js:135:48)
    at Protocol.handshake (/data/projects/nodejs/hmm/node_modules/mysql/lib/protocol/Protocol.js:52:41)
    at Connection.connect (/data/projects/nodejs/hmm/node_modules/mysql/lib/Connection.js:109:18)
    at new ZongJi (/data/projects/nodejs/hmm/node_modules/zongji/index.js:15:23)
    at Object.<anonymous> (/data/projects/nodejs/hmm/hmm1.js:4:14)
    at Module._compile (module.js:413:34)
    at Object.Module._extensions..js (module.js:422:10)
    at Module.load (module.js:357:32)
    at Function.Module._load (module.js:314:12)
    at Function.Module.runMain (module.js:447:10)
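
For anyone hitting this: a minimal sketch of attaching the documented 'error' listener, so the failure is delivered as an event instead of an unhandled 'error' emission (connection settings here are placeholders):

var ZongJi = require('zongji');

var zongji = new ZongJi({
  host: '127.0.0.1',  // placeholder credentials
  user: 'zongji',
  password: 'secret'
});

zongji.on('error', function(err) {
  // Connection failures such as ECONNREFUSED arrive here instead of crashing
  console.error('ZongJi error:', err.code || err.message);
});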

License file added

Hey @nevill,
I added a license file to cover us on the one function that I copied from node-mysql. Feel free to add your email address to the file.

20101a7

Completing the remaining fields

This library seems like it's going to be very useful for my project to bring "reactive" MySQL select statements to the Meteor framework (see my meteor-mysql repository).

I have begun working on getting the remaining fields handled as well as getting a test suite started. Please view my progress on this branch:

https://github.com/numtel/zongji/tree/more_fields

I've updated the readme with a table showing each field's status as well as instructions for getting the tests running.

  • Do you have any thoughts on progress?
  • Want one big PR or many small ones?
  • After the remaining fields are integrated, is there anything else keeping this rewrite from a 0.3.0 release?

Running multiple ZongJi instances

Hi everybody,
I'm writing several Node.js applications that use the zongji module. I can successfully monitor a MySQL database through the binlog when my applications run separately, but when I run them at the same time, the later one crashes the first. As suggested in the documentation, I set a unique serverId for each of the two applications.
The applications share the same node_modules directory. I'm running Node.js v4.4.3 on Ubuntu 14.04.3 LTS and 10.1.9-MariaDB on xampp-linux-5.6.15-2.
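
A minimal sketch of passing distinct serverIds; note that serverId only takes effect when passed to start(), so setting it in the constructor options alone is not enough:

var ZongJi = require('zongji');

var zongji = new ZongJi({ /* connection settings */ });

// Application A; application B would pass a different value, e.g. 102
zongji.start({
  serverId: 101, // must be unique across every ZongJi instance and MySQL slave
  includeEvents: ['tablemap', 'writerows', 'updaterows', 'deleterows']
});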

utf8mb4 charset throws an error

There is an issue when a MySQL column uses the utf8mb4 charset (which is now becoming standard): iconv-lite doesn't recognize it as a valid charset, so it returns an error:

Error: Encoding not recognized: 'utf8mb4' (searched as: 'utf8mb4')

Note that MySQL's utf8 is only a partial implementation of the UTF-8 encoding, and utf8mb4 (introduced in MySQL 5.5.3) is the complete implementation.

The easiest fix is to map "utf8mb4" to "utf8" before calling iconv.decode in:

result = iconv.decode(result, column.charset);

Something like:

column.charset == "utf8mb4" ? "utf8" : column.charset
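
A sketch of that mapping wrapped in a helper (the helper name is hypothetical, not zongji's code; iconv-lite's utf8 decoder handles 4-byte sequences, so the substitution is lossless for decoding):

var iconv = require('iconv-lite');

// Normalize MySQL's utf8mb4 label to an encoding name iconv-lite understands
function decodeColumnValue(buf, charset) {
  var encoding = charset === 'utf8mb4' ? 'utf8' : charset;
  return iconv.decode(buf, encoding);
}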

start() options ignored if server already connected

I tried something like:

const zongji = new ZongJi({
  host: argv['source-host'],
  user: argv['source-user'],
  password: argv['source-password'],
  charset: 'utf8mb4'
});

someAsyncCode(function () {
    zongji.start({
      startAtEnd: true,
      includeEvents: ['tablemap', 'writerows', 'updaterows', 'deleterows', 'rotate'],
      includeSchema: includeSchema
    });
});

I realized that my options (including startAtEnd) were ignored by ZongJi. I suspect it's because the connection is already established by the time I call .start(). It's not a critical problem, but it would be great if it were mentioned in the docs. A sketch of the workaround is below.
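
A sketch of the workaround this implies: call start() immediately and defer the async work until the 'ready' event fires (someAsyncCode is the hypothetical function from above):

const zongji = new ZongJi({
  host: argv['source-host'],
  user: argv['source-user'],
  password: argv['source-password'],
  charset: 'utf8mb4'
});

// Start right away so the options are applied as the connection comes up
zongji.start({
  startAtEnd: true,
  includeEvents: ['tablemap', 'writerows', 'updaterows', 'deleterows', 'rotate'],
  includeSchema: includeSchema
});

zongji.on('ready', function() {
  someAsyncCode(function() {
    // binlog options are already in effect here
  });
});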

Thanks for the great work!
