facebook / mysql-5.6
Facebook's branch of the Oracle MySQL v5.6 database. This includes MyRocks.
Home Page: http://myrocks.io
License: GNU General Public License v2.0
MySQL Server 5.6

This is a release of MySQL, a dual-license SQL database server. For the avoidance of doubt, this particular copy of the software is released under version 2 of the GNU General Public License. MySQL is brought to you by Oracle.

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

License information can be found in the COPYING file.

MySQL FOSS License Exception
We want free and open source software applications under certain licenses to be able to use specified GPL-licensed MySQL client libraries despite the fact that not all such FOSS licenses are compatible with version 2 of the GNU General Public License. Therefore there are special exceptions to the terms and conditions of the GPLv2 as applied to these client libraries, which are identified and described in more detail in the FOSS License Exception at <http://www.mysql.com/about/legal/licensing/foss-exception.html>.

This distribution may include materials developed by third parties. For license and attribution notices for these materials, please refer to the documentation that accompanies this distribution (see the "Licenses for Third-Party Components" appendix) or view the online documentation at <http://dev.mysql.com/doc/>.

GPLv2 Disclaimer
For the avoidance of doubt, except that if any license choice other than GPL or LGPL is available it will apply instead, Oracle elects to use only the General Public License version 2 (GPLv2) at this time for any software where a choice of GPL license versions is made available with the language indicating that GPLv2 or any later version may be used, or where a choice of which version of the GPL is applied is otherwise unspecified.

For further information about MySQL or additional documentation, see:
- The latest information about MySQL: http://www.mysql.com
- The current MySQL documentation: http://dev.mysql.com/doc

Some Reference Manual sections of special interest:
- If you are migrating from an older version of MySQL, please read the "Upgrading from..." section.
- To see what MySQL can do, take a look at the features section.
- For installation instructions, see the Installing and Upgrading chapter.
- For the new features/bugfix history, see the MySQL Change History appendix.

You can browse the MySQL Reference Manual online or download it in any of several formats at the URL given earlier in this file. Source distributions include a local copy of the manual in the Docs directory.
I compiled "webscalesql-5.6.24.97"
and changed these system variables:
mysql> show variables like '%son%';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| end_markers_in_json | ON |
| use_fbson_input_format | ON |
| use_fbson_output_format | ON |
+-------------------------+-------+
3 rows in set (0.00 sec)
mysql> show variables like '%document%';
+---------------------+-------+
| Variable_name | Value |
+---------------------+-------+
| allow_document_type | ON |
+---------------------+-------+
1 row in set (0.00 sec)
then,
mysql> show create table tt\G
*************************** 1. row ***************************
       Table: tt
Create Table: CREATE TABLE `tt` (
  `id` int(11) NOT NULL,
  `doc` document NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_doc` (`id`, `doc`.`address`.`zipcode` AS INT)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
mysql> insert into tt values (100, '{"name":"Tom","age":30,"married":false,"address":{"houseNumber":1001,"streetName":"main","zipcode":"98761","state":"CA"},"cars":["F150","Honda"],"memo":null}');
Query OK, 1 row affected (0.00 sec)
mysql> select id, doc.name from tt where doc.address.zipcode like '98761';
Empty set (0.00 sec)
mysql> select * from tt;
+-----+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | doc |
+-----+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 100 | {"name":"Tom","age":30,"married":false,"address":{"houseNumber":1001,"streetName":"main","zipcode":"98761","state":"CA"},"cars":["F150","Honda"],"memo":null} |
+-----+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
mysql>
I cannot find data using the document path.
What should I do?
Issue by yoshinorim
Monday May 04, 2015 at 22:17 GMT
Originally opened as MySQLOnRocksDB#62
Currently sst files are stored under $datadir/rocksdb, and the path is hard-coded as 'std::string rocksdb_db_name= "./rocksdb"'. This needs to be configurable.
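A minimal sketch of how this could be exposed as a read-only server variable, assuming a plugin system variable named rocksdb_datadir (the name, flags, and wiring are assumptions, not the actual patch):

#include <mysql/plugin.h>

/* Sketch only: the variable name and flags are assumptions. */
static char *rocksdb_datadir_value;

static MYSQL_SYSVAR_STR(
  datadir,                              /* exposed as --rocksdb-datadir */
  rocksdb_datadir_value,                /* backing storage */
  PLUGIN_VAR_RQCMDARG | PLUGIN_VAR_READONLY | PLUGIN_VAR_MEMALLOC,
  "Directory for RocksDB data files (sst files, WAL, etc.)",
  NULL, NULL,                           /* no check/update hooks: read-only */
  "./rocksdb");                         /* keep the current default */

/* The open code would then use rocksdb_datadir_value instead of the
   hard-coded "./rocksdb", and the variable would be added to the
   plugin's system variables array so mysqld registers it. */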
Issue by maykov
Thursday Apr 23, 2015 at 21:45 GMT
Originally opened as MySQLOnRocksDB#55
This will make it easier to debug issues and write tests
Issue by maykov
Thursday Apr 02, 2015 at 06:35 GMT
Originally opened as MySQLOnRocksDB#50
This function modifies ddl_hash and index_num_to_keydef.
However, these changes need to be reversed if dict_manager.commit fails
See the discussion here: https://reviews.facebook.net/D35925#inline-259167
buf_pool->LRU being NULL results in the core dump.
Issue by jonahcohen
Wednesday Jan 07, 2015 at 22:58 GMT
Originally opened as MySQLOnRocksDB#6
From @mdcallag:
We currently provide read committed. We need to define and implement repeatable read. See https://github.com/MariaDB/webscalesql-5.6/wiki/Cursor-Isolation
This is a placeholder for now.
Hi,
I got an error while trying to compile from source code. After commenting out the conflicting code, everything went well.
Below is the error message:
libmysqld.a(zutil.c.o):(.data.rel.ro.local+0x0): multiple definition of `z_errmsg'
libmysqld.a(zutil.c.o):(.data.rel.ro.local+0x0): first defined here
libmysqld.a(adler32.c.o): In function `adler32_combine_':
/u01/project/offical/mysql-5.6-webscalesql-5.6.16-47/storage/innobase/zlib_embedded/adler32.c:141: multiple definition of `adler32_combine'
libmysqld.a(adler32.c.o):/u01/project/offical/mysql-5.6-webscalesql-5.6.16-47/zlib/adler32.c:138: first defined here
libmysqld.a(crc32.c.o): In function `crc32_combine':
/u01/project/offical/mysql-5.6-webscalesql-5.6.16-47/storage/innobase/zlib_embedded/crc32.c:432: multiple definition of `crc32_combine'
libmysqld.a(crc32.c.o):/u01/project/offical/mysql-5.6-webscalesql-5.6.16-47/zlib/crc32.c:381: first defined here
collect2: error: ld returned 1 exit status
make[2]: *** [libmysqld/examples/mysql_embedded] Error 1
cmake version 2.8.12
I installed devtoolset-1.1
gcc version 4.7.2-5
glibc 2.12-1
OS: Red Hat Enterprise Linux Server release 6.7
cmake command
cmake -DCMAKE_INSTALL_PREFIX=/db/mysql-5.6-webscalesql-5.6.24.97 \
      -DSYSCONFDIR=/etc \
      -DMYSQL_TCP_PORT=3306 \
      -DDEFAULT_CHARSET=utf8 \
      -DENABLED_LOCAL_INFILE=1 \
      -DWITH_EXTRA_CHARSETS=all \
      -DDEFAULT_COLLATION=utf8_general_ci \
      -DMYSQL_UNIX_ADDR=/tmp/mysql.sock \
      -DMYSQL_DATADIR=/data/mysql \
      -DWITH_SSL=system \
      -DENABLE_DOWNLOADS=1
cmake output
..
..
-- Performing Test HAVE_LLVM_LIBCPP - Failed
..
..
..
-- Check size of int8 - failed
-- Check size of int16 - failed
-- Check size of uint8 - failed
-- Check size of uint16 - failed
-- Check size of int32 - failed
-- Check size of uint32 - failed
-- Check size of int64 - failed
-- Check size of uint64 - failed
-- Check size of bool - failed
..
-- Performing Test TIME_T_UNSIGNED - Failed
..
-- Performing Test HAVE_TIMESPEC_TS_SEC - Failed
..
-- Performing Test HAVE_SOLARIS_STYLE_GETHOST - Failed
..
..
-- Configuring done
-- Generating done
-- Build files have been written to: /db/mysql-5.6-webscalesql-5.6.24.97
Issue by maykov
Tuesday Jul 28, 2015 at 22:03 GMT
Originally opened as MySQLOnRocksDB#94
We experienced a 5% slowdown when we started compiling without the -fno-rtti switch.
Oracle enabled rtti in 5.6.4: http://dev.mysql.com/worklog/task/?id=5825
commit: mysql/mysql-server@a5ee727
They didn't start using RTTI until 5.7, though.
Excerpt:
<<<A separate test of removing the -fno-exceptions and the -fno-rtti flags
shows that there is no significant difference in execution time between
having and not having these flags.>>>
Issue by spetrunia
Friday Sep 04, 2015 at 20:43 GMT
Originally opened as MySQLOnRocksDB#107
Finally figured out why some DELETE queries got very slow (about 100x slower) after the fix for #86.
create table t4 (
id int, value int, value2 varchar(200),
primary key (id) comment 'rev:cf_i3',
index(value) comment 'rev:cf_i3'
) engine=rocksdb;
Consider a query:
delete from t4 where id <= 3000;
EXPLAIN is:
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------------+
| 1 | SIMPLE | t4 | range | PRIMARY | PRIMARY | 4 | const | 1 | Using where |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------------+
MySQL will use the following algorithm:
h->index_init(PRIMARY);
for (res= h->index_first(); res != EOF && table.id <= 3000; res= h->index_next())
{
  h->delete_row(); // deletes the row we've just read
}
The table uses reverse column families, so this translates into these RocksDB
calls:
trx= db->BeginTransaction();
iter= trx->NewIterator();
iter->Seek(index_number);
while (true)
{
  if (!iter->Valid() || !key_is_in_range(iter->key()))
{
// No more rows in the range
break;
}
rowkey= iter->key();
trx->Delete(rowkey); // (*)
iter->Prev(); // (**)
}
Note the lines marked (*) and (**).
include/rocksdb/utilities/transaction.h has this comment:
// The returned iterator is only valid until Commit(), Rollback(), or
// RollbackToSavePoint() is called.
// NOTE: Transaction::Put/Merge/Delete will currently invalidate this iterator
// until
// the following issue is fixed:
// https://github.com/facebook/rocksdb/issues/616
virtual Iterator* GetIterator(const ReadOptions& read_options) = 0;
I assume it refers to this comment in issue #616:
it is not safe to mutate the WriteBatchWithIndex while iterating through
the iterator generated by NewIteratorWithBase()
So I implemented 'class Stabilized_iterator', which wraps the iterator returned
by GetIterator(), but keeps itself valid across Put/Merge/Delete calls.
It does so by remembering the last key it returned and, when the underlying iterator has been invalidated by a Put/Merge/Delete, re-creating it and Seek()ing back to that key (a sketch follows).
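A minimal sketch of that wrapper, assuming the re-seek scheme described above (member names and the invalidation flag are illustrative, not the actual patch):

#include <string>
#include "rocksdb/utilities/transaction.h"

// Sketch only: wraps the transaction's iterator and survives mutations.
class Stabilized_iterator {
  rocksdb::Transaction *txn_;
  rocksdb::Iterator *iter_;
  std::string saved_key_;  // last key returned to the caller
  bool invalidated_;       // set after every txn_->Put/Merge/Delete

 public:
  explicit Stabilized_iterator(rocksdb::Transaction *txn)
    : txn_(txn), iter_(txn->GetIterator(rocksdb::ReadOptions())),
      invalidated_(false) {}
  ~Stabilized_iterator() { delete iter_; }

  void on_mutation() { invalidated_ = true; }

  void Prev() {
    if (invalidated_) {
      // Rebuild the underlying iterator and re-position it on the last
      // key we saw; this Seek() is what becomes expensive (see below).
      delete iter_;
      iter_ = txn_->GetIterator(rocksdb::ReadOptions());
      iter_->Seek(saved_key_);
      invalidated_ = false;
    }
    iter_->Prev();
    if (iter_->Valid()) saved_key_ = iter_->key().ToString();
  }

  bool Valid() const { return iter_->Valid(); }
  rocksdb::Slice key() const { return iter_->key(); }
};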
This works, but in the above scenario it is very slow.
Here's why. The table is in reverse-ordered CF, so it stores the data in this physical order:
TABLE
row10K
...
row03
row02
row01
row00
another-table-row
However, DELETE works in the logical order. First it deletes row00, then row01, etc. Eventually, Transaction's WriteBatchWithIndex has:
kDeletedRecord row03
kDeletedRecord row02
kDeletedRecord row01
kDeletedRecord row00
We read row04. We call "trx->Delete(row04)", and the WriteBatchWithIndex now is:
kDeletedRecord row04
kDeletedRecord row03
kDeletedRecord row02
kDeletedRecord row01
kDeletedRecord row00
Then, we call iter->Prev() (line (**)). Stabilized_iterator notes that its underlying iterator has been invalidated. In order to restore it, it calls
backend_iter->Seek(row04).
This operation finds row04 in the table, but it also sees {kDeletedRecord, row04} in the WriteBatchWithIndex. It advances both of its underlying iterators, until it reaches another-table-row.
Then, Stabilized_iterator calls backend_iter->Prev(). In this call, the iterator walks back through the pairs of row00, row01, ... row04, until it finds row05 in the base table.
This works, but deleting N rows this way costs O(N^2) operations.
Issue by yoshinorim
Wednesday Aug 26, 2015 at 23:58 GMT
Originally opened as MySQLOnRocksDB#101
This will be useful to support multiple KV format versions (including supporting compatibility by fixing format bugs). Details are posted at: https://www.facebook.com/groups/mysqlonrocksdb/permalink/977329698997886/
Issue by maykov
Tuesday Feb 03, 2015 at 16:43 GMT
Originally opened as MySQLOnRocksDB#31
Issue by maykov
Monday Jul 20, 2015 at 21:59 GMT
Originally opened as MySQLOnRocksDB#92
To repro this, you need to build and run with perf schema enabled.
truncate table performance_schema.events_waits_summary_by_instance;
truncate table performance_schema.events_waits_summary_global_by_event_name;
select * from performance_schema.events_waits_summary_global_by_event_name where event_name like '%rocksdb%';
select * from performance_schema.events_waits_summary_by_instance where event_name like '%rocksdb%';
You will see all zeroes, as expected.
Repeat the select statements above after a few seconds. Values for wait/synch/cond/rocksdb/cond_stop will be much higher. However, the MIN_TIMER_WAIT field in the _by_event_name table will stay at zero forever, while the _by_instance table will reflect the actual minimum wait time.
This causes the perfschema.aggregate test to break.
A few more questions: why are the values for cond_stop so huge (e.g. 1105186494971650)? Are they in CPU ticks?
Why do all other rocksdb counters stay at zero?
Issue by maykov
Wednesday Feb 25, 2015 at 03:38 GMT
Originally opened as MySQLOnRocksDB#39
Here is the list of tests which fail:
(tests prefixed with # are fixed)
funcs_1.is_columns_mysql : This test fails on RocksDB SE
funcs_1.is_tables_mysql : This test fails on RocksDB SE
innodb.innodb_bug59641 : This test fails on RocksDB SE
innodb.innodb-index-online-fk : This test fails on RocksDB SE
innodb.innodb-system-table-view : This test fails on RocksDB SE
innodb.innodb-tablespace : This test fails on RocksDB SE
innodb_stress.innodb_stress_blob_zipdebug_zlib : This test fails on RocksDB SE
innodb_stress.innodb_stress_mix : This test fails on RocksDB SE
innodb_zip.innodb_16k : This test fails on RocksDB SE
main.bootstrap : This test fails on RocksDB SE
main.connect : This test fails on RocksDB SE
main.hll : This test fails on RocksDB SE
main.innodb_report_age_of_evicted_pages : This test fails on RocksDB SE
main.mysqlbinlog_gtid : This test fails on RocksDB SE
main.mysqld--help-notwin-profiling : This test fails on RocksDB SE
main.mysqld--help-notwin : This test fails on RocksDB SE
main.openssl_1 : This test fails on RocksDB SE
main.plugin_auth_qa_1 : This test fails on RocksDB SE
main.plugin_auth_sha256_server_default_tls : This test fails on RocksDB SE
main.plugin_auth_sha256_tls : This test fails on RocksDB SE
main.rocksdb : This test fails on RocksDB SE
main.ssl_8k_key : This test fails on RocksDB SE
main.ssl_cipher : This test fails on RocksDB SE
main.ssl_compress : This test fails on RocksDB SE
main.ssl_connections_count : This test fails on RocksDB SE
main.ssl_connect : This test fails on RocksDB SE
main.ssl : This test fails on RocksDB SE
main.temp_table_cleanup : This test fails on RocksDB SE
perfschema.aggregate : This test fails on RocksDB SE
perfschema.hostcache_ipv4_auth_plugin : This test fails on RocksDB SE
perfschema.hostcache_ipv6_auth_plugin : This test fails on RocksDB SE
perfschema.no_threads : This test fails on RocksDB SE
perfschema.pfs_upgrade_event : This test fails on RocksDB SE
perfschema.pfs_upgrade_func : This test fails on RocksDB SE
perfschema.pfs_upgrade_proc : This test fails on RocksDB SE
perfschema.pfs_upgrade_table : This test fails on RocksDB SE
perfschema.pfs_upgrade_view : This test fails on RocksDB SE
perfschema.start_server_disable_idle : This test fails on RocksDB SE
perfschema.start_server_disable_stages : This test fails on RocksDB SE
perfschema.start_server_disable_statements : This test fails on RocksDB SE
perfschema.start_server_disable_waits : This test fails on RocksDB SE
perfschema.start_server_innodb : This test fails on RocksDB SE
perfschema.start_server_no_account : This test fails on RocksDB SE
perfschema.start_server_no_cond_class : This test fails on RocksDB SE
perfschema.start_server_no_cond_inst : This test fails on RocksDB SE
perfschema.start_server_no_file_class : This test fails on RocksDB SE
perfschema.start_server_no_file_inst : This test fails on RocksDB SE
perfschema.start_server_no_host : This test fails on RocksDB SE
perfschema.start_server_no_mutex_class : This test fails on RocksDB SE
perfschema.start_server_no_mutex_inst : This test fails on RocksDB SE
perfschema.start_server_no_rwlock_class : This test fails on RocksDB SE
perfschema.start_server_no_rwlock_inst : This test fails on RocksDB SE
perfschema.start_server_no_setup_actors : This test fails on RocksDB SE
perfschema.start_server_no_setup_objects : This test fails on RocksDB SE
perfschema.start_server_no_socket_class : This test fails on RocksDB SE
perfschema.start_server_no_socket_inst : This test fails on RocksDB SE
perfschema.start_server_no_stage_class : This test fails on RocksDB SE
perfschema.start_server_no_stages_history_long : This test fails on RocksDB SE
perfschema.start_server_no_stages_history : This test fails on RocksDB SE
perfschema.start_server_no_statement_class : This test fails on RocksDB SE
perfschema.start_server_no_statements_history_long : This test fails on RocksDB SE
perfschema.start_server_no_statements_history : This test fails on RocksDB SE
perfschema.start_server_no_table_hdl : This test fails on RocksDB SE
perfschema.start_server_no_table_inst : This test fails on RocksDB SE
perfschema.start_server_nothing : This test fails on RocksDB SE
perfschema.start_server_no_thread_class : This test fails on RocksDB SE
perfschema.start_server_no_thread_inst : This test fails on RocksDB SE
perfschema.start_server_no_user : This test fails on RocksDB SE
perfschema.start_server_no_waits_history_long : This test fails on RocksDB SE
perfschema.start_server_no_waits_history : This test fails on RocksDB SE
perfschema.start_server_off : This test fails on RocksDB SE
perfschema.start_server_on : This test fails on RocksDB SE
rpl.rpl_alter_repository : This test fails on RocksDB SE
rpl.rpl_change_master_crash_safe : This test fails on RocksDB SE
rpl.rpl_dynamic_ssl : This test fails on RocksDB SE
rpl.rpl_gtid_crash_safe : This test fails on RocksDB SE
rpl.rpl_heartbeat_ssl : This test fails on RocksDB SE
rpl.rpl_innodb_bug68220 : This test fails on RocksDB SE
rpl.rpl_master_connection : This test fails on RocksDB SE
rpl.rpl_row_crash_safe : This test fails on RocksDB SE
rpl.rpl_ssl1 : This test fails on RocksDB SE
rpl.rpl_ssl : This test fails on RocksDB SE
rpl.rpl_stm_mixed_mts_rec_crash_safe_small : This test fails on RocksDB SE
sys_vars.all_vars : This test fails on RocksDB SE
Issue by jonahcohen
Wednesday Jan 07, 2015 at 23:04 GMT
Originally opened as MySQLOnRocksDB#12
From @mdcallag:
This should avoid doing a table copy when adding an index. See http://dev.mysql.com/doc/refman/5.5/en/innodb-create-index.html
Issue by spetrunia
Friday Apr 24, 2015 at 00:50 GMT
Originally opened as MySQLOnRocksDB#56
(branching this off from issue #26)
Currently, index-only scans are not supported for column type DOUBLE.
Testcase:
create table t31 (pk int auto_increment primary key, key1 double, key(key1)) engine=rocksdb;
insert into t31 values (),(),(),(),(),(),(),();
explain select key1 from t31 where key1=1.234;
It is actually possible to restore double from its mem-comparable form and thus support index-only scans.
See filesort.cc: void change_double_for_sort(double nr,uchar *to) for the code that needs to be inverted.
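A minimal sketch of the inversion idea, assuming the classic big-endian "flip the sign bit of non-negatives, complement negatives" encoding; the real change_double_for_sort() also adjusts the exponent, so the actual inverse must mirror that code exactly:

#include <cstdint>
#include <cstring>

// Sketch only: restore a double from a simplified mem-comparable form.
static double restore_double_from_sort_key(const unsigned char *from) {
  uint64_t bits = 0;
  for (int i = 0; i < 8; i++)      // key bytes are stored big-endian
    bits = (bits << 8) | from[i];
  if (bits & (1ULL << 63))
    bits ^= (1ULL << 63);          // was non-negative: clear the sign flip
  else
    bits = ~bits;                  // was negative: undo the complement
  double d;
  memcpy(&d, &bits, sizeof(d));
  return d;
}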
Hi Facebook.
Really thanks for your awesome features.
I am really interested in the defragmentation of InnoDB tables.
But for a huge table, we can't split the defragmentation job.
Yes, it can be split at the index level, but if the PRIMARY KEY itself is huge, we have to run defragmentation for a whole day.
So I think the defragmentation job could be split at the partition level if the table is partitioned.
But the current Facebook version of MySQL doesn't support this for partitioned tables.
I think it can be implemented easily (just a thought) and I want to try.
You could also have implemented it easily (you are more expert than me, at least ^^), but you didn't.
I thought there must be a good reason you didn't implement it. Could you share the reason with me?
Really Thanks.
Issue by yoshinorim
Thursday Jul 16, 2015 at 01:00 GMT
Originally opened as MySQLOnRocksDB#91
Optionally, check consistency between all frm files and all MyRocks data dictionary entries, and abort if there is any mismatch.
Alexey suggested that I assign this issue to you. We're seeing occasional failures of this test in our CI testing.
rocksdb.rocksdb w9 [ fail ]
Test ended at 2015-09-16 21:23:21
CURRENT_TEST: rocksdb.rocksdb
--- /data/users/jenkins/workspace/github-mysql-nightly/BUILD_TYPE/ASan/CLIENT_MODE/Async/PAGE_SIZE/32/TEST_SET/MixedOtherBig/label/mysql/mysql/mysql-test/suite/rocksdb/r/rocksdb.result 2015-09-17 06:20:07.937508115 +0300
+++ /data/users/jenkins/workspace/github-mysql-nightly/BUILD_TYPE/ASan/CLIENT_MODE/Async/PAGE_SIZE/32/TEST_SET/MixedOtherBig/label/mysql/mysql/_build-5.6-ASan/mysql-test/var/9/log/rocksdb.reject 2015-09-17 07:23:20.339521073 +0300
@@ -1731,7 +1731,7 @@
select if((@var2 - @var1) < 30, 1, @var2-@var1);
if((@var2 - @var1) < 30, 1, @var2-@var1)
-1
+93
drop table t0,t1;
mysqltest: Result content mismatch
Issue by spetrunia
Friday Jan 30, 2015 at 12:54 GMT
Originally opened as MySQLOnRocksDB#25
Currently, index-only scans are supported only for binary collations, where the mem-comparable key is the original value.
For other collations (e.g. case-insensitive _ci collations), index-only scans are not supported. The reason is that it is not possible to restore the original column value from its mem-comparable key. For example, in latin1_general_ci, 'foo', 'Foo', and 'FOO' all have the mem-comparable form 'FOO'.
A possible solution could work like this:
See also:
Diffs:
https://reviews.facebook.net/D58269
https://reviews.facebook.net/D58503
https://reviews.facebook.net/D58875
Issue by hermanlee
Wednesday May 27, 2015 at 00:17 GMT
Originally opened as MySQLOnRocksDB#72
Return key_skipped statistics captured in rocksdb::perf_context in the slow query log.
Issue by jkedgar
Friday Sep 04, 2015 at 17:29 GMT
Originally opened as MySQLOnRocksDB#106
Mark Callaghan requested the following pieces of information to be available through SHOW ENGINE ROCKSDB TRANSACTION STATUS:
These may depend on new features in RocksDB.
Issue by maykov
Wednesday Aug 19, 2015 at 17:01 GMT
Originally opened as MySQLOnRocksDB#97
Right now, there is a delay of 1 hour between stats computations when using default options. This makes MyRocks fail (i.e. run very slowly) on standard benchmarks such as Wisconsin. We need to fix this.
E.g. update the stats on table X (including the memtable) whenever the table is accessed, if all of the table's indexes are smaller than Y bytes.
Issue by yoshinorim
Friday May 29, 2015 at 00:18 GMT
Originally opened as MySQLOnRocksDB#74
Some options like rocksdb_bytes_per_sync should be configurable dynamically.
Issue by spetrunia
Tuesday Sep 01, 2015 at 23:02 GMT
Originally opened as MySQLOnRocksDB#105
MyRocks needs to release row locks in some cases:
All locks are recursive: a lock may be acquired multiple times. When MyRocks wants to release the locks taken by a statement, it only means undoing the locking actions done by that statement; all locking done before the statement must remain.
(Current implementation in MyRocks was done in issue #57).
When a statement inside a transaction fails, MyRocks will make these calls:
txn->SetSavePoint(); // Called when a statement starts
... statement actions like txn->Put(), txn->Delete(), txn->GetForUpdate()
txn->RollbackToSavePoint(); // Called if the statement failed.
As far as I understood Antony @agiardullo 's suggestion, it was:
Make txn->RollbackToSavePoint() also undo all locking actions done since the last txn->SetSavePoint() call.
This will work.
This is used to release the lock that was obtained when reading the last row. From MyRocks' point of view, it would be sufficient if this TransactionDBImpl function were exposed in the TransactionDB class:
void UnLock(TransactionImpl* txn, uint32_t cfh_id, const std::string& key);
MyRocks always knows which Column Family was used, and the key is saved in ha_rocksdb::last_rowkey.
However, the current implementation of TransactionDBImpl::UnLock is not sufficient. As far as I understand, it is not recursive: one can call TryLock() multiple times, and then a single UnLock() call will fully release the lock. MyRocks needs the last UnLock() call to undo only the effect of the last TryLock() call.
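A minimal sketch of the recursive semantics MyRocks needs, assuming a per-transaction reference count per (column family, key); this is illustration only, not the TransactionDB API:

#include <cstdint>
#include <map>
#include <string>
#include <utility>

class RecursiveLockSet {
  std::map<std::pair<uint32_t, std::string>, int> counts_;

 public:
  // Called after every successful TryLock(): one more reference.
  void on_try_lock(uint32_t cf_id, const std::string &key) {
    ++counts_[std::make_pair(cf_id, key)];
  }

  // Returns true only when this UnLock() undid the last remaining
  // TryLock(), i.e. the lock should really be released in RocksDB.
  bool on_unlock(uint32_t cf_id, const std::string &key) {
    auto it = counts_.find(std::make_pair(cf_id, key));
    if (it == counts_.end()) return false;
    if (--it->second > 0) return false;  // earlier TryLock()s still hold it
    counts_.erase(it);
    return true;
  }
};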
Hello,
I want to compile Facebook's MySQL without RocksDB.
Which option should I pass to cmake? For example, -DWITH_ROCKSDB_STORAGE_ENGINE=0?
Also, I want to test the document type for JSON data, so I think compiling the source without RocksDB should not be a problem.
Is that right?
Thx.
Issue by spetrunia
Tuesday Jun 30, 2015 at 17:57 GMT
Originally opened as MySQLOnRocksDB#87
This task is about adding support for online (and/or in-place) DDL operations for MyRocks.
The SQL layer will make the following calls:
h->check_if_supported_inplace_alter() // = HA_ALTER_INPLACE...
h->prepare_inplace_alter_table()
...
h->commit_inplace_alter_table()
The first call inquires whether the storage engine supports an in-place operation for the given ALTER TABLE command. The latter calls actually make the change. A sketch of the first call is below.
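A minimal sketch of the first call for MyRocks; the handler API names are real 5.6 names, but the policy (accepting only ADD INDEX in place) is a hypothetical starting point, and the snippet assumes ha_rocksdb declares this override:

enum_alter_inplace_result ha_rocksdb::check_if_supported_inplace_alter(
    TABLE *altered_table, Alter_inplace_info *ha_alter_info)
{
  // Accept the ALTER in place only if nothing besides adding an index
  // is requested; everything else falls back to the copy algorithm.
  if (!(ha_alter_info->handler_flags & ~Alter_inplace_info::ADD_INDEX))
    return HA_ALTER_INPLACE_SHARED_LOCK_AFTER_PREPARE;
  return HA_ALTER_INPLACE_NOT_SUPPORTED;
}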
Issue by spetrunia
Monday Feb 02, 2015 at 18:39 GMT
Originally opened as MySQLOnRocksDB#26
Currently, index-only scans for DATETIME, TIMESTAMP, and DOUBLE are not supported.
Testcase:
create table t31 (pk int auto_increment primary key, key1 double, key(key1)) engine=rocksdb;
insert into t31 values (),(),(),(),(),(),(),();
explain select key1 from t31 where key1=1.234;
create table t32 (pk int auto_increment primary key, key1 datetime, key(key1))engine=rocksdb;
insert into t32 values (),(),(),(),(),(),(),();
explain select key1 from t32 where key1='2015-01-01 00:11:12';
create table t33 (pk int auto_increment primary key, key1 timestamp, key(key1))engine=rocksdb;
insert into t33 values (),(),(),(),(),(),(),();
explain select key1 from t33 where key1='2015-01-01 00:11:12';
This task is about supporting them.
DATETIME/TIMESTAMP use Field_temporal_with_date_and_timef::make_sort_key(), which just does a memcpy().
DOUBLE uses change_double_for_sort(); we will need to write the reverse function.
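A minimal sketch of the temporal case, assuming the index-only "unpack" is just the inverse memcpy (function and parameter names are illustrative, not the MyRocks API):

#include <cstddef>
#include <cstring>

// Since make_sort_key() is a plain memcpy for these types, restoring the
// column from the key can copy the bytes straight back into the record.
static void unpack_temporal_from_key(unsigned char *field_ptr,
                                     const unsigned char *key_ptr,
                                     size_t pack_length) {
  memcpy(field_ptr, key_ptr, pack_length);  // inverse of make_sort_key()
}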
Hi:
I am trying to run cmake . && make but unfortunately it failed.
error message:
/u01/project/mysql-5.6/sql/item_func.h:31: error: ‘std::isfinite’ has not been declared
Look forward to your reply;
thanks.
Issue by MrDimension
Thursday May 28, 2015 at 03:00 GMT
Originally opened as MySQLOnRocksDB#73
The error occurred during the "make" process, and I got the error message below:
// ==========================================================
/u01/weidu.lww/myrocks/vio/viosslfactories.c: In function ‘new_VioSSLFd’:
/u01/weidu.lww/myrocks/vio/viosslfactories.c:266:3: warning: implicit declaration of function ‘ERR_clear_error’ [-Wimplicit-function-declaration]
ERR_clear_error();
^
/u01/weidu.lww/myrocks/vio/viosslfactories.c:307:5: error: unknown type name ‘EC_KEY’
EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
^
/u01/weidu.lww/myrocks/vio/viosslfactories.c:307:5: warning: implicit declaration of function ‘EC_KEY_new_by_curve_name’ [-Wimplicit-function-declaration]
/u01/weidu.lww/myrocks/vio/viosslfactories.c:307:45: error: ‘NID_X9_62_prime256v1’ undeclared (first use in this function)
EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
^
/u01/weidu.lww/myrocks/vio/viosslfactories.c:307:45: note: each undeclared identifier is reported only once for each function it appears in
/u01/weidu.lww/myrocks/vio/viosslfactories.c:309:7: warning: implicit declaration of function ‘SSL_CTX_set_tmp_ecdh’ [-Wimplicit-function-declaration]
SSL_CTX_set_tmp_ecdh(ssl_fd->ssl_context, ecdh);
^
/u01/weidu.lww/myrocks/vio/viosslfactories.c:310:7: warning: implicit declaration of function ‘EC_KEY_free’ [-Wimplicit-function-declaration]
EC_KEY_free(ecdh);
^
make[2]: *** [vio/CMakeFiles/vio.dir/viosslfactories.c.o] Error 1
make[1]: *** [vio/CMakeFiles/vio.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make: *** [all] Error 2
// ==========================================================
I found the code in the vio directory is a little different from the official version of MySQL 5.6. When I compile the official version of MySQL 5.6, there is no such error. After a lot of trying, I guess the reason may be related to OpenSSL. Did someone meet the same problem? What can I do to solve it?
(ps: my gcc version is gcc-4.9.2)
Issue by jonahcohen
Wednesday Jan 07, 2015 at 23:01 GMT
Originally opened as MySQLOnRocksDB#8
From @mdcallag:
This requires more discussion but many of us are in favor of it.
It might be time to use the binlog as the source of truth to avoid the complexity and inefficiency of keeping RocksDB and the binlog synchronized via internal XA. There are two modes for this. The first mode is durable in which case fsync is done after writing the binlog and RocksDB WAL. The other mode is non-durable in which case fsync might only be done once per second and we rely on lossless semisync to recover. Binlog as source of truth might have been discussed on a MariaDB mail list many years ago - https://lists.launchpad.net/maria-developers/msg01998.html
Some details are at http://yoshinorimatsunobu.blogspot.com/2014/04/semi-synchronous-replication-at-facebook.html
The new protocol will be:
When lossless semisync is used we skip steps 2 and 4. When lossless semisync is not used we do step 2 and skip 3. Step 4 is optional. Recovery in this case is done by:
When running in non durable mode, then on a crash one of the following is true where the relation describes which one has more commits:
I am trying to build https://github.com/facebook/mysql-5.6 on CentOS 6.4 and the build is failing (I am using devtoolset-2).
Scanning dependencies of target merge_large_tests-t
[ 82%] Building CXX object unittest/gunit/CMakeFiles/merge_large_tests-t.dir/merge_large_tests.cc.o
In file included from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:13:0:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc: In member function 'virtual void log_throttle_unittest::LogThrottleTest_SlowLogBasic_Test::TestBody()':
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc:84:78: error: invalid conversion from 'bool (*)(THD*, const char*, uint) {aka bool (*)(THD*, const char*, unsigned int)}' to 'bool (*)(THD*, const char*, uint, system_status_var*) {aka bool (*)(THD*, const char*, unsigned int, system_status_var*)}' [-fpermissive]
In file included from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/fake_table.h:19:0,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/mock_field_timestamp.h:19,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/copy_info-t.cc:22,
from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:2:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/sql/sql_class.h:1469:3: error: initializing argument 4 of 'Slow_log_throttle::Slow_log_throttle(ulong*, mysql_mutex_t*, ulong, bool (*)(THD*, const char*, uint, system_status_var*), const char*)' [-fpermissive]
In file included from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:13:0:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc: In member function 'virtual void log_throttle_unittest::LogThrottleTest_SlowLogThresholdChange_Test::TestBody()':
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc:122:78: error: invalid conversion from 'bool (*)(THD*, const char*, uint) {aka bool (*)(THD*, const char*, unsigned int)}' to 'bool (*)(THD*, const char*, uint, system_status_var*) {aka bool (*)(THD*, const char*, unsigned int, system_status_var*)}' [-fpermissive]
In file included from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/fake_table.h:19:0,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/mock_field_timestamp.h:19,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/copy_info-t.cc:22,
from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:2:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/sql/sql_class.h:1469:3: error: initializing argument 4 of 'Slow_log_throttle::Slow_log_throttle(ulong*, mysql_mutex_t*, ulong, bool (*)(THD*, const char*, uint, system_status_var*), const char*)' [-fpermissive]
In file included from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:13:0:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc: In member function 'virtual void log_throttle_unittest::LogThrottleTest_SlowLogSuppressCount_Test::TestBody()':
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc:154:78: error: invalid conversion from 'bool (*)(THD*, const char*, uint) {aka bool (*)(THD*, const char*, unsigned int)}' to 'bool (*)(THD*, const char*, uint, system_status_var*) {aka bool (*)(THD*, const char*, unsigned int, system_status_var*)}' [-fpermissive]
In file included from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/fake_table.h:19:0,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/mock_field_timestamp.h:19,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/copy_info-t.cc:22,
from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:2:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/sql/sql_class.h:1469:3: error: initializing argument 4 of 'Slow_log_throttle::Slow_log_throttle(ulong*, mysql_mutex_t*, ulong, bool (*)(THD*, const char*, uint, system_status_var*), const char*)' [-fpermissive]
At global scope:
cc1plus: warning: unrecognized command line option "-Wno-null-dereference" [enabled by default]
make[2]: *** [unittest/gunit/CMakeFiles/merge_large_tests-t.dir/merge_large_tests.cc.o] Error 1
make[1]: *** [unittest/gunit/CMakeFiles/merge_large_tests-t.dir/all] Error 2
make: *** [all] Error 2
Am I missing something? Please let me know if there is a mailing list.
I basically need a non-blocking mysql client (C bindings).
Thanks for your patience !
Issue by spetrunia
Wednesday Jun 17, 2015 at 21:02 GMT
Originally opened as MySQLOnRocksDB#86
RocksDB's pessimistic transaction system handles locking and also takes care of storing not-yet-committed changes made by the transaction. That is, it has two counterparts in MyRocks: (1) MyRocks' own row locking code, and (2) Row_table, which stores the transaction's not-yet-committed changes.
If we just replace #1, there will be data duplication (Row_table will hold the same data as the WriteBatchWithIndex).
At start, we call
transaction->SetSnapshot()
this gives us:
+ // If SetSnapshot() is used, then any attempt to read/write a key in this
+ // transaction will fail if either another transaction has this key locked
+ // OR if this key has been written by someone else since the most recent
+ // time SetSnapshot() was called on this transaction.
then, reading, modifying and writing a key can be done with simple
rocksdb_trx->Get()
... modify the record as needed
rocksdb_trx->Put()
SELECT ... LOCK IN SHARE MODE: there seems to be no way to achieve shared read locks in the API.
Issue by maykov
Tuesday Apr 14, 2015 at 22:48 GMT
Originally opened as MySQLOnRocksDB#52
STL relies on exceptions to handle OOM. We should check that operator new is not overloaded to return NULL. We should also add exception handlers to all high-level handler functions.
Issue by yoshinorim
Friday Aug 14, 2015 at 15:03 GMT
Originally opened as MySQLOnRocksDB#96
SAVEPOINT is needed for https://bugs.mysql.com/bug.php?id=71017. For bug 71017, SAVEPOINT for read-only transactions is good enough. We need to define the behavior if there are any updates.
On trunk/5.6.12 I keep hitting this crash when running random queries involving INDEX_STATISTICS.
(gdb) bt
#0 in pthread_kill () from /lib64/libpthread.so.0
#1 in handle_fatal_signal (sig=11) at ./trunk/sql/signal_handler.cc:248
#2 <signal handler called>
#3 in innobase_update_index_stats (table_stats=0x7f53740766a0) at ./trunk/storage/innobase/handler/ha_innodb.cc:4051
#4 in get_index_stats_handlerton at ./trunk/sql/handler.cc:858
#5 in plugin_foreach_with_mask at ./trunk/sql/sql_plugin.cc:2094
#6 in ha_get_index_stats at ./trunk/sql/handler.cc:866
#7 in fill_index_stats at ./trunk/sql/table_stats.cc:681
#8 in do_fill_table at ./trunk/sql/sql_show.cc:7195
#9 in get_schema_tables_result at ./trunk/sql/sql_show.cc:7296
#10 in JOIN::prepare_result at ./trunk/sql/sql_select.cc:844
#11 in JOIN::exec at ./trunk/sql/sql_executor.cc:116
#12 in mysql_execute_select at ./trunk/sql/sql_select.cc:1121
#13 in mysql_select at ./trunk/sql/sql_select.cc:1242
#14 in handle_select at ./trunk/sql/sql_select.cc:125
#15 in execute_sqlcom_select at ./trunk/sql/sql_parse.cc:5534
#16 in mysql_execute_command at ./trunk/sql/sql_parse.cc:2969
#17 in mysql_parse at ./trunk/sql/sql_parse.cc:6694
#18 in dispatch_command at ./trunk/sql/sql_parse.cc:1402
#19 in do_command at ./trunk/sql/sql_parse.cc:1047
#20 in do_handle_one_connection at ./trunk/sql/sql_connect.cc:1001
#21 in handle_one_connection at ./trunk/sql/sql_connect.cc:917
#22 in pfs_spawn_thread at ./trunk/storage/perfschema/pfs.cc:1855
#23 in start_thread from /lib64/libpthread.so.0
#24 in clone from /lib64/libc.so.6
(gdb) p mysql_parse::thd->query_string
$1 = {
string = {
str = 0x7f5374006c40 "select \t ROUTINE_BODY from\n\t`information_schema`.`INNODB_SYS_FOREIGN` as `INNODB_SYS_FOREIGN` \n\t right outer join `information_schema`.`INDEX_STATISTICS` as `INDEX_STATISTICS` \non 2\n\n \t\n\t natural left outer join `information_schema`.`ROUTINES` as `ROUTINES` \n \t\n\t inner join `test`.`t0004` as `t0004` \non ( 1 )\n \n \ngroup by \n\tROUTINES.DEFINER desc",
length = 351
},
cs = 0x1309180 <my_charset_latin1>
}
(gdb)
I have many core files but no exact testcase yet. Rerunning queries didn't crash.
More info later.
Issue by yoshinorim
Friday May 15, 2015 at 21:49 GMT
Originally opened as MySQLOnRocksDB#65
We can get these from table statistics, but it would still be very useful to expose them as status variables. Calculating and retrieving these stats needs to be fast enough.
Issue by yoshinorim
Friday Apr 24, 2015 at 22:43 GMT
Originally opened as MySQLOnRocksDB#58
create table t (id int primary key ) engine=rocksdb;
insert into t values (1), (2), (3);
$ rm /data/test/t.frm
flush tables;
create table t (id int primary key, value int ) engine=rocksdb;
=> Succeeded
select * from t;
=> Empty Set
MyRocks needs to be more robust so that the second CREATE TABLE fails.
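A minimal sketch of the desired behavior; the lookup helper is hypothetical and only illustrates where the check belongs:

#include <string>
#include "my_base.h"  // for HA_ERR_TABLE_EXIST

// Hypothetical lookup into MyRocks' own data dictionary; the real code
// would consult the engine's ddl entries for the table name.
bool rocksdb_table_already_exists(const std::string &fullname);

int create_table_checked(const std::string &fullname) {
  if (rocksdb_table_already_exists(fullname))
    return HA_ERR_TABLE_EXIST;  // make the second CREATE TABLE fail
  /* ... proceed with normal table creation ... */
  return 0;
}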
Issue by maykov
Tuesday Feb 03, 2015 at 16:43 GMT
Originally opened as MySQLOnRocksDB#30
Issue by maykov
Tuesday Feb 03, 2015 at 16:42 GMT
Originally opened as MySQLOnRocksDB#29
Issue by yoshinorim
Thursday Aug 27, 2015 at 02:28 GMT
Originally opened as MySQLOnRocksDB#102
At minimum, ldb and sst_dump are needed for day-to-day production engineering work.
Issue by jonahcohen
Wednesday Jan 07, 2015 at 22:57 GMT
Originally opened as MySQLOnRocksDB#4
From @mdcallag:
For background see http://dev.mysql.com/doc/refman/5.6/en/show-engine.html
We need to change what is in SHOW ENGINE ROCKSDB STATUS. Right now it lists the live sst files, which is a huge list with leveled compaction. For now I prefer to have it list the output from compaction stats. That probably needs to use one of:
db->GetProperty("rocksdb.stats", ...
db->GetProperty("rocksdb.cfstats", ...
*************************** 1. row ***************************
Type: ROCKSDB
Name: live_files
Status: cf=default name=/4908814.sst size=97853952
cf=default name=/4908812.sst size=97879865
cf=default name=/4908807.sst size=97833748
cf=default name=/4905498.sst size=1865749
cf=default name=/4905500.sst size=2670668
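A minimal sketch of the proposed replacement using the properties named above (the SHOW ENGINE plumbing such as stat_print() is omitted; `rdb` is assumed to be the open database handle):

#include <string>
#include "rocksdb/db.h"

std::string get_rocksdb_status_text(rocksdb::DB *rdb) {
  std::string stats;
  // "rocksdb.cfstats" would report per-column-family stats instead.
  if (!rdb->GetProperty("rocksdb.stats", &stats))
    stats = "(stats not available)";
  return stats;
}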
Issue by yoshinorim
Wednesday Jan 14, 2015 at 01:48 GMT
Originally opened as MySQLOnRocksDB#21
This is about implementing a read-free slave, similar to what TokuDB has recently implemented.
The idea is that one can process RBR events without making Get() calls or scans: the events carry sufficient information to issue Put/Delete calls directly, as sketched below.
RocksDB SE will support RBR only, so the RBR restriction is not a problem.
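A minimal sketch of the read-free idea for an RBR update event, assuming the binlog row images provide the old key, the new key, and the new row value (names are illustrative, not the MyRocks handler code):

#include "rocksdb/utilities/transaction.h"

rocksdb::Status apply_update_read_free(rocksdb::Transaction *txn,
                                       rocksdb::ColumnFamilyHandle *cf,
                                       const rocksdb::Slice &old_key,
                                       const rocksdb::Slice &new_key,
                                       const rocksdb::Slice &new_value) {
  // No Get() or scan: the before-image identifies the old row, and the
  // after-image provides the new one.
  if (old_key.compare(new_key) != 0) {
    rocksdb::Status s = txn->Delete(cf, old_key);  // key changed: drop old row
    if (!s.ok()) return s;
  }
  return txn->Put(cf, new_key, new_value);  // blind write of the new row
}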
Issue by BohuTANG
Thursday Jun 04, 2015 at 11:20 GMT
Originally opened as MySQLOnRocksDB#80
From our benchmarks on the same dataset for MyRocks/InnoDB/TokuDB, the data sizes are:
MyRocks: 43GB (the ./rocksdb dir)
InnoDB: 33GB (without compression)
TokuDB: 15GB (zlib compression with a compression ratio of about 2, so the raw data is about 30GB)
MyRocks is configured with all defaults; the 'show engine rocksdb status' output follows:
mysql> show engine rocksdb status\G;
*************************** 1. row ***************************
Type: DBSTATS
Name: rocksdb
Status:
** DB Stats **
Uptime(secs): 79985.6 total, 1704.4 interval
Cumulative writes: 54K writes, 280M keys, 54K batches, 1.0 writes per batch, ingest: 27.78 GB, 0.36 MB/s
Cumulative WAL: 54K writes, 54K syncs, 1.00 writes per sync, written: 27.78 GB, 0.36 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 batches, 0.0 writes per batch, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
*************************** 2. row ***************************
Type: CF_COMPACTION
Name: __system__
Status:
** Compaction Stats [__system__] **
Level Files Size(MB) Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) Stall(cnt) KeyIn KeyDrop
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 0/0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.3 0 222 0.002 0 0 0
L1 1/0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.9 12.8 6.1 0 56 0.004 0 110K 110K
Sum 1/0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.9 4.2 4.2 1 278 0.002 0 110K 110K
Int 0/0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0 0
Flush(GB): cumulative 0.002, interval 0.000
Stalls(count): 0 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 0 leveln_slowdown_soft, 0 leveln_slowdown_hard
*************************** 3. row ***************************
Type: CF_COMPACTION
Name: default
Status:
** Compaction Stats [default] **
Level Files Size(MB) Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) Stall(cnt) KeyIn KeyDrop
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 2/0 1 0.5 0.0 0.0 0.0 27.7 27.7 0.0 0.0 0.0 62.3 456 7841 0.058 0 0 0
L1 8/0 8 0.8 30.8 27.8 3.0 30.7 27.7 0.0 1.1 41.8 41.7 753 3239 0.233 0 280M 0
L2 68/0 99 1.0 0.0 0.0 0.0 0.0 0.0 27.7 0.0 0.0 0.0 0 0 0.000 0 0 0
L3 543/0 998 1.0 0.0 0.0 0.0 0.0 0.0 27.7 0.0 0.0 0.0 0 0 0.000 0 0 0
L4 5078/0 9998 1.0 4.7 3.2 1.5 4.5 3.1 24.5 1.4 30.1 29.3 158 786 0.202 0 57M 669K
L5 15910/0 31843 0.3 45.9 24.3 21.5 45.3 23.7 3.3 1.9 19.4 19.1 2427 4600 0.528 0 252M 1024K
Sum 21609/0 42947 0.0 81.3 55.3 26.0 108.2 82.2 83.1 3.9 21.9 29.2 3794 16466 0.230 0 589M 1693K
Int 0/0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0 0
Flush(GB): cumulative 27.749, interval 0.000
Stalls(count): 0 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 0 leveln_slowdown_soft, 0 leveln_slowdown_hard
3 rows in set (0.00 sec)
and
mysql> select * from ROCKSDB_CF_OPTIONS where value like '%snappy%'\G;
*************************** 1. row ***************************
CF_NAME: __system__
OPTION_TYPE: COMPRESSION_TYPE
VALUE: kSnappyCompression
*************************** 2. row ***************************
CF_NAME: default
OPTION_TYPE: COMPRESSION_TYPE
VALUE: kSnappyCompression
2 rows in set (0.00 sec)
ERROR:
No query specified
Running innochecksum on a key_block_size=16 compressed table results in:
InnoDB offline file checksum utility.
Table is uncompressed
Page size is 16384
Fail; page 0 invalid (fails log sequence number check)
This is a regression from the same patch in the Facebook 5.1 branch. It is caused by 5bd4269 assuming that a table is compressed iff its physical page size differs from its logical page size; with key_block_size=16 the two sizes are equal, so the table is misdetected as uncompressed.
I'll submit a PR shortly
Issue by yoshinorim
Monday Jul 27, 2015 at 14:49 GMT
Originally opened as MySQLOnRocksDB#93
Here are the data sizes after loading LinkBench data with 1.5B max ids.
Without row checksums: 570959236
With row checksums enabled: 705439184
A 23.6% space increase is too much. This is because the current row checksum adds 9 bytes (1 byte plus a CRC32 of the key and a CRC32 of the value) to the value of each index entry.
How about doing some optimizations like below?
FbsonJsonParser does not decode JSON string escape sequences, with the exception of \\ and \". The JSON standard specifies the following sequences that should be unescaped when parsing:
\"
\\
\/
\b
\f
\n
\r
\t
\uXXXX (where XXXX is a hexadecimal Unicode code point)
In addition, FbsonToJson does not escape the characters that must be escaped when serializing:
"
\
Here is an example that does not parse correctly:
{"name": "Hello, \ud83c!"}
Issue by yoshinorim
Monday Jun 15, 2015 at 17:37 GMT
Originally opened as MySQLOnRocksDB#83
This is mainly for debugging purposes, but printing the data dictionary via information_schema is often very useful.
Issue by yoshinorim
Wednesday May 06, 2015 at 00:48 GMT
Originally opened as MySQLOnRocksDB#63
Figure out how realistic it is to create a hidden auto_increment primary key when no primary key is specified in the DDL, instead of returning an error.