cskenji / innodb-tools

Automatically exported from code.google.com/p/innodb-tools

Makefile 0.09% C 70.70% Perl 0.38% C++ 16.74% Objective-C 1.30% Shell 10.33% CMake 0.26% Assembly 0.04% Groff 0.16%

innodb-tools's Issues

New (compact) DECIMAL format support in create_defs.pl and print_data.c

I needed some changes to handle DECIMAL columns with innodb-recovery-0.3.tar.gz.

In the output of create_defs.pl, the (NEW)DECIMAL field does not carry enough information:

 1. type: FT_CHAR should be FT_DECIMAL

 2. decimal_precision: and decimal_digits: need to be added to the definition
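
For illustration, an emitted entry might then look like the following (the column name and the 5-byte storage size for a decimal(10,2) are my own example, not actual tool output):

                        { /* decimal(10,2) */
                                name: "price",
                                type: FT_DECIMAL,
                                fixed_length: 5,
                                decimal_precision: 10,
                                decimal_digits: 2,

                                can_be_null: TRUE
                        },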


In addition, the space-padded output can be verbose in some cases, so I changed print_data.c as follows:
    bin2decimal((char*)value, &dec, field->decimal_precision, field->decimal_digits);
//  decimal2string(&dec, string_buf, &len, field->decimal_precision, field->decimal_digits, ' ');
    decimal2string(&dec, string_buf, &len, 0, 0, 0);
    print_string(string_buf, len, field);

Best regards,
Yasufumi

Original issue reported on code.google.com by [email protected] on 16 Dec 2008 at 4:58

"Unsupported type: BLOB" in create_defs.pl

What steps will reproduce the problem?

Run create_defs.pl on a database with tables that have any BLOB type.

What version of the product are you using? On what operating system?
Version innodb-recovery-0.3

The fix seemed simple enough. I added BLOB support to create_defs.pl. It
runs OK and generates a table_defs.h file with BLOBs defined. This
table_defs.h header compiles OK with tables_dict.c. After doing all this I
was able to run page_parser and constraints_parser on a database with many
BLOB fields... What I don't know is whether it actually works end-to-end. I
had various other problems with my corrupt ibdata1 file and was unable to
recover any useful data. So, while my patch may produce running code, I am
not sure the fix actually works. But it seems straightforward enough for me
to let you decide whether it should work or not.

I have enclosed a patch file and I have tried to include it inline here: 

# diff -c innodb-recovery-0.3/create_defs.pl innodb-recovery-0.3.noah/create_defs.pl
*** innodb-recovery-0.3/create_defs.pl  2008-04-01 18:58:03.000000000 -0700
--- innodb-recovery-0.3.noah/create_defs.pl     2008-08-01 02:37:07.000000000 -0700
***************
*** 363,367 ****
--- 363,383 ----
                return { type => 'FT_CHAR', fixed_len => $len_bytes };
        }
  
+       if ($type =~ /^TINYBLOB$/i) {
+               return { type => 'FT_BLOB', min_len => 0, max_len => 255 };
+       }
+ 
+       if ($type =~ /^BLOB$/i) {
+               return { type => 'FT_BLOB', min_len => 0, max_len => 65535 };
+       }
+ 
+       if ($type =~ /^MEDIUMBLOB$/i) {
+               return { type => 'FT_BLOB', min_len => 0, max_len => 16777215 };
+       }
+ 
+       if ($type =~ /^LONGBLOB$/i) {
+               return { type => 'FT_BLOB', min_len => 0, max_len => 4294967295 };
+       }
+ 
        die "Unsupported type: $type!\n";
  }
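
For reference, the resulting table_defs.h entry for a plain BLOB column would presumably look like this (the column name is illustrative; the format follows the varbinary report further down this page):

                        { /* blob */
                                name: "data",
                                type: FT_BLOB,
                                min_length: 0,
                                max_length: 65535,

                                can_be_null: TRUE
                        },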

Original issue reported on code.google.com by noah%[email protected] on 4 Aug 2008 at 6:54

Attachments:

SQL export

What steps will reproduce the problem?
1. How do I import the recovered pages? The output is not SQL, and
   split_dump does nothing. Is it possible to make an SQL dump from
   these files? That would be perfect ;-)
2.
3.

What is the expected output? What do you see instead?


What version of the product are you using? On what operating system?


Please provide any additional information below.

Original issue reported on code.google.com by [email protected] on 22 Jan 2008 at 9:15

Page is in REDUNDANT format while we're looking for COMPACT

What steps will reproduce the problem?
1. make
2. ./page_parser
3. ./constraints_parser with debug mode on (-V)

What is the expected output? What do you see instead?
Table data is expected as the output. Instead, it prints a message that
says, "Page is in REDUNDANT format while we're looking for COMPACT - skipping".

What version of the product are you using? On what operating system?

innodb-recovery-0.3 on Fedora Core 10 x86 system.


Please provide any additional information below.
I'm trying to recover my database, which was created long ago. The .frm
files are not in good shape, but I have a dump of the table structure and
triggers, so I created an empty database for create_defs.pl to pick up the
table definitions from. After generating the pages and then running
constraints_parser, it printed the above message. At first I had overlooked
the fact that newer MySQL versions create InnoDB tables with
ROW_FORMAT=COMPACT by default, so I dropped and imported the database
structure again, this time with ROW_FORMAT=REDUNDANT. But the same message
still comes up. I have no clue what the parser expects my table structure
to be. Please provide a solution for this.
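
For context, a minimal sketch of how a page's record format can be told apart (the offsets come from the standard InnoDB page layout, which keeps the COMPACT flag in the high bit of the PAGE_N_HEAP header field; the function name is illustrative, not from the tool):

    #include <stdint.h>

    #define FIL_PAGE_DATA 38   /* end of the file-page header */
    #define PAGE_N_HEAP    4   /* offset of n_heap within the index page header */

    /* Returns nonzero if the page stores records in COMPACT format. */
    static int page_is_compact(const unsigned char *page)
    {
        uint16_t n_heap = (uint16_t) ((page[FIL_PAGE_DATA + PAGE_N_HEAP] << 8)
                                     | page[FIL_PAGE_DATA + PAGE_N_HEAP + 1]);
        return (n_heap & 0x8000U) != 0;
    }

If the pages in the dump still carry the COMPACT bit (for example, leftovers written before the table was rebuilt), the parser will keep reporting the mismatch regardless of the current table definition.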

Original issue reported on code.google.com by [email protected] on 11 Jun 2009 at 10:18

Problem with create_defs.pl and table_defs.h files.

What steps will reproduce the problem?
1. Execute the create_defs.pl command
2. The parameters are --db=test --table=movi
3. The problem: the "table_defs.h" file always has the same content:

#ifndef table_defs_h
#define table_defs_h

// Table definitions
table_def_t table_definitions[] = {
};

#endif


I have executed "create_defs.pl" many times and always get the same bad result.

What is the expected output? What do you see instead?

A "table_defs.h" file with the definition of my table

What version of the product are you using? On what operating system?

innodb-recovery-0.3 on Debian Sarge 3.01

Please provide any additional information below.

My table is InnoDB.
I use MySQL Server 4.0.24.

Original issue reported on code.google.com by [email protected] on 18 Feb 2010 at 8:10

Download is not a bzip file, cannot unzip

What steps will reproduce the problem?
1. wget http://innodb-tools.googlecode.com/files/innodb-recovery-0.3.tar.bz2
2. bunzip2 innodb-recovery-0.3.tar.bz2
3. Message: bunzip2: innodb-recovery-0.3.tar.bz2 is not a bzip2 file.

What is the expected output? What do you see instead?
- the file should unzip


What version of the product are you using? On what operating system?
0.3 on Fedora Linux

Please provide any additional information below.

The file is not a bzip file


Original issue reported on code.google.com by [email protected] on 28 Apr 2008 at 1:21

create_defs.pl may incorrectly use 'unsigned' for a column depending on the column's name

What steps will reproduce the problem?
Run create_defs.pl for this table:
create table tz_bug (id bigint, wid int unsigned) engine=innodb;

This does not have the problem:
create table tz_ok (id bigint, wi int unsigned) engine=innodb;

The current regex used will make tz_bug.id use 'unsigned'.

Output with the bug:
                        { /* bigint(20) */
                                name: "id",
                                type: FT_UINT,
                                fixed_length: 8,


Expected output:
                        { /* bigint(20) */
                                name: "id",
                                type: FT_INT,
                                fixed_length: 8,

Please provide any additional information below.
create_defs.pl has a bug that makes it use 'unsigned' for columns when
the column name is a substring of another column in the table and the
other column is unsigned.

This patch for IsFieldUnsigned fixes the problem.
250c250
<         return ($row->[1] =~ /$field[^,]*unsigned/i);
---
>         return ($row->[1] =~ /`$field`[^,]*unsigned/i);



Original issue reported on code.google.com by [email protected] on 8 Oct 2009 at 3:57

REC_OFFS_SQL_NULL and REC_OFFS_EXTERNAL set incorrectly on linux/x86_64

--------
What steps will reproduce the problem?
--------

Configure, compile & run constraints_parser on an x86_64 platform and no 
rows containing a field that is null will be recovered. Errors will be 
similar to:

 Invalid offset for field 12: 2147483701

--------
What is the expected output? What do you see instead?
--------

In constraints_parser.c:ibrec_init_offsets_old(), the following lines are executed:

 offs |= REC_OFFS_SQL_NULL; (lines 261, 279)

and

 offs |= REC_OFFS_EXTERNAL; (line 284)

The problem is that REC_OFFS_SQL_NULL and REC_OFFS_EXTERNAL are set to
(1 << 31) when ulint is in fact 64-bit, and should be (1 << 63) instead.


As a workaround, replacing these 3 lines with

 offs |= REC_OFFS_SQL_NULL << 32;

and

 offs |= REC_OFFS_EXTERNAL << 32;

fixes it.

(As a wild guess this may be a problem with configure, rather than the 
header files themselves)

--------
What version of the product are you using? On what operating system?
--------
uname -a

  Linux <hostname> 2.6.18-8.1.8.el5 #1 SMP Mon Jun 25 17:06:07 EDT 2007 
x86_64 x86_64 x86_64 GNU/Linux

cat /etc/redhat-release

  Red Hat Enterprise Linux Server release 5.1 (Tikanga)

Original issue reported on code.google.com by [email protected] on 22 Feb 2008 at 12:42

SET type unsupported by innodb-recovery v0.3

What steps will reproduce the problem?
1. create table with a SET type
2. run create_defs.pl on that table

What is the expected output?
- a valid tables_defs.h

What do you see instead?
- Unsupported type: set('not_null'...)!

What version of the product are you using? On what operating system?
- innodb-recovery-0.3

Please provide any additional information below.
The patch to support SET is about 119 lines, so I prefer not to paste it
inline. I was able to recover a table with a 13-member SET (about 3M
records), though it hasn't been tested extensively.
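
Not the attached patch, just a sketch of the idea: per the MySQL documentation, a SET column is stored as an integer bitmap of 1, 2, 3, 4, or 8 bytes depending on the member count, so once the value has been read into an integer, printing it is one bit test per member (names below are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Print the comma-separated names of the members whose bits are set. */
    static void print_set(uint64_t bits, const char **members, int n_members)
    {
        int first = 1;
        for (int i = 0; i < n_members; i++) {
            if (bits & ((uint64_t) 1 << i)) {
                printf("%s%s", first ? "" : ",", members[i]);
                first = 0;
            }
        }
    }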

Original issue reported on code.google.com by [email protected] on 16 Feb 2010 at 2:00

Attachments:

constraints_parser.c is not well suited for consuming data from a raw device

constraints_parser is not friendly towards input from a raw device. It
reads from the file 16kb at a time. InnoDB pages are 16kb-aligned when read
from a file, but when read from a raw device they are only guaranteed to be
aligned to the file system block size (4kb on Linux for me).

Many rows will be missed because constraints_parser uses a 16kb buffer to
search for rows; when that buffer straddles two InnoDB pages, the rows in
the split page are missed.

One way to fix this is to use a larger read size, so that rows will still
be lost, but less frequently.

Another way to fix this is to add a flag for constraints_parser to only use
pages with valid checksums, and then advance the file offset by 4kb at a
time until a valid page is found, as sketched below.
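
A sketch of that second approach (the LSN comparison is a cheap stand-in for a full checksum check: InnoDB repeats the low 32 bits of the page LSN from header offset 20 in the last 4 bytes of the page, and the two copies match on a page that was written out whole; names are illustrative):

    #include <stdint.h>

    #define UNIV_PAGE_SIZE 16384
    #define FS_BLOCK_SIZE   4096   /* scan granularity on a raw device */

    static uint32_t read_be32(const unsigned char *p)
    {
        return ((uint32_t) p[0] << 24) | ((uint32_t) p[1] << 16)
             | ((uint32_t) p[2] << 8)  |  (uint32_t) p[3];
    }

    /* Cheap validity test: header LSN low word vs. its trailer copy. */
    static int page_looks_valid(const unsigned char *page)
    {
        uint32_t hdr = read_be32(page + 20);
        uint32_t trl = read_be32(page + UNIV_PAGE_SIZE - 4);
        return hdr != 0 && hdr == trl;
    }

A scan loop would advance the offset by FS_BLOCK_SIZE until page_looks_valid() succeeds, then consume UNIV_PAGE_SIZE from there.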

Original issue reported on code.google.com by [email protected] on 8 Oct 2009 at 4:09

process_ibfile doesn't handle partial reads

process_ibfile assumes that a partial read (one that returns less than the
requested amount of data) indicates an error. That is not an error, and it
is more likely to happen when reading from NFS.

------

The code from process_ibfile is:
        // Read pages to the end of file
        while ((read_bytes = read(fn, page, UNIV_PAGE_SIZE)) == UNIV_PAGE_SIZE) {

------

The reads should be retried until -1 is returned. From the man page:

       On success, the number of bytes read is returned (zero indicates end
       of file), and the file position is advanced by this number. It is
       not an error if this number is smaller than the number of bytes
       requested; this may happen for example because fewer bytes are
       actually available right now (maybe because we were close to
       end-of-file, or because we are reading from a pipe, or from a
       terminal), or because read() was interrupted by a signal. On error,
       -1 is returned, and errno is set appropriately. In this case it is
       left unspecified whether the file position (if any) changes.
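
A minimal sketch of a retrying wrapper along those lines (not code from the tool): keep reading until the buffer is full, end of file, or a real error, retrying on EINTR:

    #include <errno.h>
    #include <unistd.h>

    static ssize_t read_full(int fd, void *buf, size_t count)
    {
        size_t done = 0;
        while (done < count) {
            ssize_t n = read(fd, (char *) buf + done, count - done);
            if (n == 0)
                break;                  /* end of file */
            if (n < 0) {
                if (errno == EINTR)
                    continue;           /* interrupted by a signal: retry */
                return -1;              /* real error, errno is set */
            }
            done += (size_t) n;
        }
        return (ssize_t) done;
    }

process_ibfile could then compare read_full(fn, page, UNIV_PAGE_SIZE) against UNIV_PAGE_SIZE exactly as the current loop does.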


Original issue reported on code.google.com by [email protected] on 8 Oct 2009 at 4:03

mediumint unsigned and negative signed ints are handled incorrectly

print_data has this for unsigned ints, and the case for mediumint is
incorrect. This code clears the most significant bits, which should not be done.

        switch (field->fixed_length) {
                case 1: return mach_read_from_1(value);
                case 2: return mach_read_from_2(value);
                case 3: return mach_read_from_3(value) & 0x3FFFFFUL;

The code for signed ints does not support negative values. It clears the
most significant bit, which is correct for positive values; for negative
values the bit should be flipped from 0 to 1. I think the code has other
problems for negative values from sign extension.

Something like this handles negative ints:

                        /* InnoDB stores signed ints big-endian with the
                           sign bit inverted; flip it back while reading. */
                        ulint ur = mach_read_from_3(value);
                        ulint b0 = ur & 0xff;
                        ulint b1 = (ur >> 8) & 0xff;
                        ulint b2 = ((ur >> 16) & 0xff) ^ 0x80; /* un-flip sign bit */
                        ulint r = b0 + (b1 << 8) + (b2 << 16);
                        if (r > 0x7fffff) {
                                /* negative: undo 24-bit two's complement */
                                return -((0xffffff - r) + 1);
                        } else {
                                return r;
                        }

Original issue reported on code.google.com by [email protected] on 11 Oct 2009 at 10:56

varbinary datatype support

create_defs.pl does not support the varbinary datatype.

What definition do I need to use to recover varbinary(40) data?

I created a definition manually:

                       { /* varbinary(40) */
                                name: "info_hash",
                                type: FT_BLOB,
                                min_length: 0,
                                max_length: 65535,

                                can_be_null: FALSE
                        },

but this crashes constraints_parser:

Initializing table definitions...
Processing table: a
 - total fields: 39
 - nullable fields: 2
 - minimum header size: 46
 - minimum rec size: 106
 - maximum rec size: 525181

Read data from fn=3...
Page id: 0
Starting offset: 6. Checking 1 table definitions.
.....
Checking offset: 51:  (a)
Checking offset: 52:  (a) ORIGIN=OK DELETED=OK OFFSETS=OK Segmentation fault
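
A guess rather than a verified fix: since VARBINARY is essentially VARCHAR without a character set, the VARCHAR mapping in create_defs.pl suggests a variable-length FT_CHAR entry bounded by the declared length, e.g.:

                        { /* varbinary(40) */
                                name: "info_hash",
                                type: FT_CHAR,
                                min_length: 0,
                                max_length: 40,

                                can_be_null: FALSE
                        },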


Original issue reported on code.google.com by [email protected] on 21 Mar 2009 at 10:31

oak-online-alter-table: No need for DELETE in the UPDATE-Trigger

What steps will reproduce the problem?
1. No problem, just a design improvement.

What is the expected output? What do you see instead?
no difference

What version of the product are you using? On what operating system?
openark-kit-170

Please provide any additional information below.

A trigger is used:
        CREATE TRIGGER %s.%s AFTER UPDATE ON %s.%s
        FOR EACH ROW
        BEGIN
            DELETE FROM %s.%s WHERE (%s) = (%s);
            REPLACE INTO %s.%s (%s) VALUES (%s);
        END

IMHO there is no need for the DELETE: REPLACE INTO already deletes any
existing row with the same primary or unique key value before inserting
the new one.

Regards
Erkan


Original issue reported on code.google.com by [email protected] on 7 Mar 2011 at 4:29
