isa-l's Introduction

Intel(R) Intelligent Storage Acceleration Library


ISA-L is a collection of optimized low-level functions targeting storage applications. ISA-L includes:

  • Erasure codes - Fast block Reed-Solomon type erasure codes for any encode/decode matrix in GF(2^8).
  • CRC - Fast implementations of cyclic redundancy check. Multiple polynomials supported:
    • iscsi32, ieee32, t10dif, ecma64, iso64, jones64, rocksoft64.
  • Raid - calculate and operate on XOR and P+Q parity found in common RAID implementations.
  • Compression - Fast deflate-compatible data compression.
  • De-compression - Fast inflate-compatible data decompression.
  • igzip - A command line application like gzip, accelerated with ISA-L.
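As a rough illustration of what the Raid bullet refers to, here is a hedged scalar sketch of P (XOR) parity generation. ISA-L's actual xor_gen is vectorized; the array-of-pointers layout with the last pointer as the parity destination is an assumption for this sketch, not a statement of ISA-L's API contract.

```c
#include <stddef.h>

/* Scalar sketch of RAID P (XOR) parity: the last pointer in 'array' is
 * treated as the parity destination, computed as the XOR of all the
 * source arrays that precede it. */
static void xor_gen_sketch(int vects, size_t len, unsigned char **array)
{
    unsigned char *p = array[vects - 1];    /* parity destination */
    for (size_t i = 0; i < len; i++) {
        unsigned char x = 0;
        for (int j = 0; j < vects - 1; j++)
            x ^= array[j][i];               /* XOR across sources */
        p[i] = x;
    }
}
```

The real library replaces the inner loops with wide SIMD loads and stores, but the result must match this byte-for-byte definition.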


Building ISA-L

Prerequisites

  • Make: GNU 'make' or 'nmake' (Windows).
  • Optional: Building with autotools requires autoconf/automake/libtool packages.
  • Optional: Manual generation requires help2man package.

x86_64:

  • Assembler: nasm. Version 2.15 or later suggested (other versions of nasm and yasm may build but with limited function support).
  • Compiler: gcc, clang, icc or VC compiler.

aarch64:

  • Assembler: gas v2.24 or later.
  • Compiler: gcc v4.7 or later.

other:

  • Compiler: Portable base functions are available that build with most C compilers.

Autotools

To build and install the library with autotools it is usually sufficient to run:

./autogen.sh
./configure
make
sudo make install

Makefile

To use a standard makefile run:

make -f Makefile.unx

Windows

On Windows use nmake to build dll and static lib:

nmake -f Makefile.nmake

or see details on setting up environment here.

Other make targets

Other targets include:

  • make check : create and run tests
  • make tests : create additional unit tests
  • make perfs : create included performance tests
  • make ex : build examples
  • make other : build other utilities such as compression file tests
  • make doc : build API manual

DLL Injection Attack

Problem

The Windows OS has an insecure predefined search order and set of defaults when trying to locate a resource. If the resource location is not specified by the software, an attacker need only place a malicious version in one of the locations Windows will search, and it will be loaded instead. Although this weakness can occur with any resource, it is especially common with DLL files.

Solutions

Applications using the libisal DLL may need to apply one of the following solutions to protect against DLL injection attacks.

Two solutions are available:

  • Using a Fully Qualified Path is the most secure way to load a DLL
  • Signature verification of the DLL
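A minimal sketch of the fully-qualified-path approach, assuming a hypothetical install location (the path and helper name are illustrative only; LoadLibraryExA is the documented Win32 call). On non-Windows builds the sketch simply returns NULL.

```c
#include <stddef.h>
#ifdef _WIN32
#include <windows.h>
#endif

/* Load the ISA-L DLL by fully qualified path so the default Windows
 * DLL search order is never consulted; a malicious isa-l.dll dropped
 * into the working directory is therefore never picked up. */
static void *load_isal_dll(const char *fq_path)
{
#ifdef _WIN32
    /* An absolute path bypasses the search order entirely */
    return (void *)LoadLibraryExA(fq_path, NULL, 0);
#else
    (void)fq_path;      /* sketch only: no DLL loading off Windows */
    return NULL;
#endif
}
```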

Resources and Solution Details

isa-l's People

Contributors

bsbernd, cielavenir, cyb70289, danielverkamp, dong-liuliu, ellert, gbtucker, hjl-tools, iamhumanbeing, junhe77, liuqinfei, mdcornu, onovy, optimistyzy, orbea, pablodelara, rhpvorderman, rjoursler, rosinl, samuel-lee-msft, seth5141, shark64, surendarchandra, tkanteck, wanghailiangx, xjjx, yuhaoth, zbjornson, zhaimo15, zjd87


isa-l's Issues

Don't we need "vzeroupper" after calling AVX code?

This instruction is recommended when transitioning between AVX and legacy SSE code - it will eliminate performance penalties caused by false dependencies.

But I can't find vzeroupper anywhere in the code.

Should I execute vzeroupper myself?
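For reference, the transition penalty can be handled from C via the _mm256_zeroupper() intrinsic; whether ISA-L itself needs it depends on which routines you call, so treat this as a hedged sketch rather than a fix.

```c
#if defined(__AVX__)
#include <immintrin.h>
#endif

/* Call after a block of VEX-encoded (AVX) code and before legacy SSE
 * code runs: zeroes the upper halves of the YMM registers, avoiding
 * false-dependency stalls on CPUs with AVX/SSE transition penalties.
 * Returns 0 so the call is easy to check in a test harness. */
static int avx_done(void)
{
#if defined(__AVX__)
    _mm256_zeroupper();
#endif
    return 0;
}
```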

Weird bug in adler32_sse

I'm trying to port the adler32_sse code to Golang.
But I think I've found a weird bug while running Go test cases whose golden length is above 5521:

{0x211297c8, strings.Repeat("\xff", 5548) + "8"},
	{0xbaa198c8, strings.Repeat("\xff", 5549) + "9"},
	{0x553499be, strings.Repeat("\xff", 5550) + "0"},
	{0xf0c19abe, strings.Repeat("\xff", 5551) + "1"},
	{0x8d5c9bbe, strings.Repeat("\xff", 5552) + "2"},
	{0x2af69cbe, strings.Repeat("\xff", 5553) + "3"},
	{0xc9809dbe, strings.Repeat("\xff", 5554) + "4"},
	{0x69189ebe, strings.Repeat("\xff", 5555) + "5"},

In
https://github.com/01org/isa-l/blob/master/igzip/adler32_sse.asm#L145

.skip_loop_1a:
add	end, 7
test	s, 7
jnz	.do_final

It should go to .do_final when s is below 7; otherwise any string longer than LIMIT (0d5521) won't checksum the trailing bytes whose index is beyond LIMIT.
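When debugging a port like this, a plain scalar Adler-32 is a useful golden reference against the assembly; a hedged sketch (standard Adler-32 with modulus 65521; the exact block limit ISA-L uses internally is not assumed here):

```c
#include <stdint.h>
#include <stddef.h>

#define ADLER_MOD 65521u   /* largest prime below 2^16 */

/* Plain scalar Adler-32; slow but unambiguous.  'init' is the running
 * checksum (1 for a fresh stream), packed as (b << 16) | a. */
static uint32_t adler32_ref(uint32_t init, const uint8_t *buf, size_t len)
{
    uint32_t a = init & 0xffff, b = (init >> 16) & 0xffff;
    for (size_t i = 0; i < len; i++) {
        a = (a + buf[i]) % ADLER_MOD;   /* running byte sum */
        b = (b + a) % ADLER_MOD;        /* running sum of sums */
    }
    return (b << 16) | a;
}
```

Feeding the failing golden inputs to both this reference and the ported code should pinpoint exactly which trailing bytes the assembly skips.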

Erasure code is much slower on a Mac

On my MacBook, erasure_code_perf is not stable: sometimes it's fast, sometimes it's very slow.
For example, with 10+4 (16MB per shard) it ranges from 2753 to 4434 MB/s (I use 10*16MB/loops/time_cost).

On Linux it's stable and fast: 4300 MB/s in a Linux VM on the same MacBook.

None of the gf_* functions perform as well on the Mac as on Linux, while the others, like crc and raid, are only a little slower on the Mac.

I can't figure out why. Could there be an issue in the library?

Build configure error

The generated Makefile contains nasm -f elf64 when configuring with --host=x86_64-w64-mingw32 on a Linux platform.

There are two issues:

  1. The format should be win64.
  2. Only yasm can be cross-compiled to win64.

Build fails with clang on OS X El Capitan

When trying to compile the library, I'm having several problems:

  • in autogen.sh gcc -print-multi-os-directory fails because on my system, gcc is a link to clang. No big deal, autoreconf still runs.

EDIT: The following issues are no longer relevant; check the 3rd post for the correct configure statement to fix them.

  • When linking, a library is built, but the assembler code does not get included. The message is:

ld: warning: ignoring file erasure_code/.libs/gf_vect_mul_avx.o, file was built for unsupported file format ( 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 ) which is not the architecture being linked (x86_64): erasure_code/.libs/gf_vect_mul_avx.o

This happens because configure sets yasm_args = -f elf64 in the Makefile. It should be -f macho64

  • Manually changing the above yasm_args leads to a new error on linking:

ld: illegal text reloc in 'gf_vect_mul_dispatch_init' to 'gf_vect_mul_base' for architecture x86_64

I've tried to solve this with -fpic and -Wl,-read_only_relocs,suppress, but both seem to have no effect on x86_64. I'm using yasm 1.3, isa-l 2.16 and clang apple 8.0.0 . Any comments?

erasure_code/gf_vect_mad_test fails with avx2

If I change gf_vect_mad_test to test the avx2 implementation, like so

#ifndef FUNCTION_UNDER_TEST
# define FUNCTION_UNDER_TEST gf_vect_mad_avx2
# define REF_FUNCTION gf_vect_dot_prod
# define VECT 1
#endif

I get the following error:

Fail rand gf_vect_mad_avx2 test0 size=16
 40 a2 e2 15 38 32 96 af 18 f7 12 de 5c 8c 39  1

dprod_base: 1d 57 49 fd 6e 87 24 9f 67 79 5c 73 e7 a3 cc 98 be
dprod_dut:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 be

The tests pass if I use avx or sse.

This is broken on all the variants of gf_Nvect_mad_avx2 that I have tested.
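When chasing failures like this, a scalar GF(2^8) multiply is handy for checking one multiply-accumulate at a time against the vectorized output. A hedged sketch, assuming polynomial 0x11d (the field ISA-L's erasure code documentation describes):

```c
#include <stdint.h>

/* Scalar multiply in GF(2^8) with reduction polynomial 0x11d,
 * i.e. x^8 + x^4 + x^3 + x^2 + 1. */
static uint8_t gf_mul_ref(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1)
            p ^= a;                  /* conditionally add (XOR) */
        b >>= 1;
        /* multiply a by x, reducing modulo 0x11d on overflow */
        a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1d) : (uint8_t)(a << 1);
    }
    return p;
}
```

Comparing dprod_base against a sum of gf_mul_ref terms for one output byte at a time can show whether the avx2 kernel is dropping entire lanes (as the all-zero dut row above suggests) or miscomputing individual products.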

Export Adler-32

It'd be nice to be able to use the fast Adler-32 implementations in this library. If you'd accept it, I'm happy to work on a PR if you provide some guidance on where you'd want it (a new checksum.h, igzip_lib.h, other). (Or if you'd rather do it yourself, also fine.)

Thank you!

(PS: I could see checksum.h being a useful starting point for adding Intel's fast Fletcher checksums in the future also.)

isal_inflate() errors with large (>4 GB uncompressed) blocks

Hi -

I consistently get ISAL_INVALID_LOOKBACK results from isal_inflate() at a particular location in a large gzip. The file inflates just fine with command line gzip, so I don't believe that the data is actually bad. In debugging, I discovered that the error occurs after state->total_out passes 2^32 and rolls back over to zero.

From what I can tell, the culprit is in decode_huffman_block_stateless, where there's a check to ensure that the lookback distance doesn't exceed the value of total_out:
https://github.com/01org/isa-l/blob/7da82d555f1d25fdd4c1e7afd849ed2d9fb7ea1d/igzip/igzip_inflate.c#L1614-L1615 But since total_out has rolled over, this check is failing on valid data.

Commenting out this check and the corresponding lines in igzip_decode_block_stateless.asm: https://github.com/01org/isa-l/blob/7da82d555f1d25fdd4c1e7afd849ed2d9fb7ea1d/igzip/igzip_decode_block_stateless.asm#L587-L588
fixed the issue for me, but I'm not sure that I understand the code well enough to be confident that there would be no further side effects.
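The wraparound itself is easy to reproduce in isolation. This is a sketch of the failure mode, not ISA-L code; the variable names are illustrative:

```c
#include <stdint.h>

/* state->total_out is a 32-bit counter: after 4 GiB of output it wraps
 * back to 0, so a lookback check of the form 'dist > total_out' fires
 * spuriously on perfectly valid data.  Returns 1 if the spurious
 * failure occurs. */
static int lookback_check_fires(void)
{
    uint32_t total_out = UINT32_MAX;
    total_out += 1;                  /* wraps to 0 past 4 GiB */

    uint32_t look_back_dist = 100;   /* a valid, small distance */

    return look_back_dist > total_out;
}
```

A robust fix presumably needs either a 64-bit output counter or a check that tolerates wraparound, rather than simply deleting the guard.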

Compilation fails with gcc: error: unrecognized command line option '-V'

Compilation is terminated with following error message during configuration phase:
configure:3372: gcc -V >&5
gcc: error: unrecognized command line option '-V'
gcc: fatal error: no input files
compilation terminated.

I am using isa-l version: v2.26
Installed gcc version : gcc-4.8.5-36.el7_6.1.x86_64
RHEL 7.5 x86

lrc support

Do we have a plan to support LRC erasure codes?

bl_count[0] uninitialized

erasure_code_test cannot handle a specific case

If k=9 and m=18, and the columns (0, 1, 2, 5, 7, 9, 10, 14, 16) are corrupt, ISA-L cannot decode.
When executing the example, it reports the following:
erasure_code_test: 127x8192 BAD MATRIX
Fail to gf_gen_decode_matrix

compilation error: ./tools/yasm-filter.sh: Permission denied

CC igzip/igzip_inflate.lo
CC erasure_code/ec_highlevel_func.lo
MKTMP erasure_code/gf_vect_mul_sse.s
CCAS erasure_code/gf_vect_mul_sse.lo
./libtool: line 1111: ./tools/yasm-filter.sh: Permission denied
make[1]: *** [erasure_code/gf_vect_mul_sse.lo] Error 1
rm erasure_code/gf_vect_mul_sse.s
make: *** [all] Error 2

Does anyone know how to solve this problem?

Thanks

Porting build system to CMake

ITNOA

Hi,

I think it would be a very good choice to port the build system to CMake, a popular cross-platform build system, in order to remove the platform-specific build systems and improve maintainability and usability.

Error when running "make" on Ubuntu 16.04

With a prior install of NASM, I'm getting an error when running "make". Any input on this?

devops@controller:~/ISA-L/isa-l$ make
make --no-print-directory all-am
MKTMP erasure_code/gf_vect_mul_sse.s
CCAS erasure_code/gf_vect_mul_sse.lo
erasure_code/gf_vect_mul_sse.s:119: warning: absolute address can not be RIP-relative
erasure_code/gf_vect_mul_sse.s:161: warning: label `endproc_frame' defined on pass two
erasure_code/gf_vect_mul_sse.s:170: warning: label `slversion' defined on pass two
erasure_code/gf_vect_mul_sse.s:170: error: parser: instruction expected
Makefile:2367: recipe for target 'erasure_code/gf_vect_mul_sse.lo' failed
make[1]: *** [erasure_code/gf_vect_mul_sse.lo] Error 1
rm erasure_code/gf_vect_mul_sse.s
Makefile:1496: recipe for target 'all' failed
make: *** [all] Error 2

Clarify gf_gen_rs_matrix documentation

In the PDF documentation for gf_gen_rs_matrix, there is a list of inequalities which will guarantee that every submatrix of the result will be invertible. However, the inequalities are rendered in such a way that they all kind of run together. Maybe there are some missing commas? Or maybe the inequalities come in pairs? It's really unclear. Could you please clarify it?

check_format.sh fail base on indent 2.2.12

./tools/check_format.sh(indent) show below messages.
After downgrade to 2.2.11 . It can pass .
Fedora-29 and Debian-9 has upgrade indent to 2.2.12 .
I am not sure how to fix that .

  File found with formatting issues: crc/crc32_funcs_test.c
  File found with formatting issues: crc/crc64_funcs_perf.c
  File found with formatting issues: crc/crc64_funcs_test.c
  File found with formatting issues: igzip/checksum32_funcs_test.c
  File found with formatting issues: igzip/huff_codes.c
  File found with formatting issues: igzip/hufftables_c.c
  File found with formatting issues: igzip/igzip.c
  File found with formatting issues: igzip/igzip_inflate.c
  File found with formatting issues: programs/igzip_cli.c

Feature request: add ec_check

ISA-L's raid module includes the parity check functions xor_check and pq_check for verifying parity. But there's no equivalent in the erasure code module. Could we please have a general parity check function like int ec_check(int len, int k, int rows, unsigned char *gftbls, unsigned char **data)? Or perhaps a pair of functions, one that precalculates a parity check matrix, and one that uses it to verify parity?
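A hedged sketch of what such a check could do, using a scalar GF(2^8) reference instead of ISA-L's expanded gftbls format; the name ec_check_sketch and the (coefficient matrix, data, parity) signature are illustrative only:

```c
#include <stdint.h>

/* Scalar GF(2^8) multiply, reduction polynomial 0x11d. */
static uint8_t gf_mul_ref(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1)
            p ^= a;
        b >>= 1;
        a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1d) : (uint8_t)(a << 1);
    }
    return p;
}

/* Hypothetical parity check: re-derive each parity row from the data
 * using the coefficient matrix (p rows x k cols) and compare against
 * the stored parity.  Returns 1 if all parity matches, 0 otherwise. */
static int ec_check_sketch(int len, int k, int p, const uint8_t *coef,
                           uint8_t **data, uint8_t **parity)
{
    for (int r = 0; r < p; r++)
        for (int i = 0; i < len; i++) {
            uint8_t s = 0;
            for (int j = 0; j < k; j++)
                s ^= gf_mul_ref(coef[r * k + j], data[j][i]);
            if (s != parity[r][i])
                return 0;
        }
    return 1;
}
```

A library version would presumably use the same expanded-table fast path as ec_encode_data, comparing instead of storing.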

Tests are failing on arm64

Hi,

I'm trying to build for arm64 and tests are failing:

./build-aux/test-driver: line 107: 15636 Illegal instruction     "$@" > $log_file 2>&1
FAIL: crc/crc16_t10dif_copy_test
./build-aux/test-driver: line 107: 15658 Illegal instruction     "$@" > $log_file 2>&1
FAIL: crc/crc64_funcs_test
./build-aux/test-driver: line 107: 15680 Illegal instruction     "$@" > $log_file 2>&1
FAIL: crc/crc32_funcs_test
./build-aux/test-driver: line 107: 15702 Illegal instruction     "$@" > $log_file 2>&1
FAIL: igzip/igzip_rand_test
./build-aux/test-driver: line 107: 15724 Illegal instruction     "$@" > $log_file 2>&1
FAIL: igzip/igzip_wrapper_hdr_test

See: https://buildd.debian.org/status/fetch.php?pkg=libisal&arch=arm64&ver=2.27.0-1&stamp=1562601961&raw=0

Thanks for help.

LARGE_WINDOW 32kb windows

Hello!

I am configuring the project with autogen.sh && ./configure CFLAGS=-DLARGE_WINDOW to obtain 32kB deflate windows instead of default 8k. I am setting .flush = SYNC_FLUSH in the compression context after calling isal_deflate_init().

The compression rate is virtually unchanged between 8k and 32k windows. The speed doesn't change at all. This prompts me to think that LARGE_WINDOW is not implemented properly, or that the flag is not visible all the way through to the assembly code. However, I modified options.asm manually to switch the LARGE_WINDOW option on and recompiled, to no avail. Here's my table for the large text data file I am using:

                  Plain   ZLib (32kB)   ISA-L (8kB)   ISA-L (32kB)
Compression rate  1x      5.68x         2.37x         2.49x

It is important to consider that I am using a .flush = SYNC_FLUSH setting on blocks of about 5kB each at a time. Had it worked like Z_FULL_FLUSH (as in, discard the history window), it could lead to the observed behavior.

Would you please clarify whether this absence of a compression rate change is something I should expect after enabling LARGE_WINDOW?

Feature request: Add gf_gen_decode_matrix to the public API

ISA-L's public API provides no help to the user for generating a decode matrix for reconstructing missing data. Practically every single user of the erasure code module will need to do that. In fact, ISA-L's own test and benchmark programs all do this, copying similar code into 8 separate files. Could we please have a common helper function for it? As an example of such a function, see Jerasure's jerasure_make_decoding_matrix.

Scalability issue for small chunk size

When the data chunk size is small and the number of data chunks is small (for example, 3 data chunks, 1 parity chunk, chunk size 128B), the performance is really bad. What's worse, when using multithreading the overall aggregated performance is worse than a single thread.

Here are the results using aws c5.2xlarge instance (AVX512 support)

chunk size = 128 B, single thread throughput is 1223 MB/s, while multithreading (8 threads) gives around 720 MB/s (aggregated throughput), multiprocessing in this case gives close to 4000 MB/s.

chunk size = 1024 B, single thread throughput is 6715 MB/s, while multithreading gives around 6000 MB/s (aggregated throughput).

Is it because there is some internal state or lock that prevents multithreading from scaling?

corrupted fragment on decode

Hello guys,

I found that isa-l seems to return corrupted data when trying to decode from encoded data. In my experiment, this happens when using a number of parities >= 5; feeding a few particular subsets of the encoded data triggers the behavior.

For example, when trying to encode data set with 10 data and 10 parities, the encoded result seems correct. The data array includes the original data and parities are modified (calculated from the matrix).

And then, feeding the subset of the data and parities with
data[0], data[1], data[2], data[3], data[4], data[6], data[7], parity[0], parity[2], parity[5]
the decoded data is different from original data.

My testing code is:
https://gist.github.com/bloodeagle40234/f5b5c39e93cd79136e33da8a5b50fdc7

I'm not sure if my code is correct, but at least the subset
data[0], data[1], data[2], data[3], data[4], data[6], data[7], parity[0], parity[2], parity[4]
returned data that matches the original.

Could you check whether a bug exists, or correct my testing code?

Thanks.

Deflate level 0 compresses data

In zlib, level 0 "gives no compression at all (the input data is simply copied a block at a time)." In isa-l, level 0 is close to zlib's level 1 in compression ratio. Is this intentional?

It's straightforward to add a level 0 to my own code since it's just the data with the header and checksum, but I wanted to make sure something wasn't amiss since level 0 does appear to be special in some ways in isa-l. For example, its level buf size is 0.
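For context, a zlib-style level 0 emits deflate "stored" blocks: a 5-byte header plus a verbatim copy of the data. A minimal sketch of writing one final stored block, assuming the payload fits in a single block (max 65535 bytes):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Write one final deflate stored block: 1 header byte (BFINAL=1,
 * BTYPE=00), then LEN and NLEN (one's complement of LEN) little-endian,
 * then the raw bytes.  Returns the number of bytes written. */
static size_t stored_block(uint8_t *out, const uint8_t *in, uint16_t len)
{
    out[0] = 0x01;                   /* BFINAL=1, BTYPE=00 (stored) */
    out[1] = (uint8_t)(len & 0xff);  /* LEN, little-endian */
    out[2] = (uint8_t)(len >> 8);
    out[3] = (uint8_t)~out[1];       /* NLEN = ~LEN */
    out[4] = (uint8_t)~out[2];
    memcpy(out + 5, in, len);
    return (size_t)len + 5;
}
```

Wrapping a sequence of such blocks with a gzip header and CRC-32 trailer gives the "no compression at all" behavior zlib's level 0 provides.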

Thanks! :)

igzip: A small question about igzip

@gbtucker
There is a function in igzip/huffman.h, named: static inline uint32_t compute_hash(uint32_t data)
When the macro SSE4_2 is enabled versus disabled, the resulting hash values are different.
So, can I just replace this function with any crc32 algorithm? Thanks.

static inline uint32_t compute_hash(uint32_t data)
{
#ifdef SSE4_2

	return _mm_crc32_u32(0, data);

#else
	uint64_t hash;
	/* Use multiplication to create a hash, 0xBDD06057 is a prime number */
	hash = data;
	hash *= 0xB2D06057;
	hash >>= 16;
	hash *= 0xB2D06057;
	hash >>= 16;

	return hash;

#endif /* SSE4_2 */
}

crc32_iscsi_01 touches memory when the buffer length is zero

crc32_iscsi_01.asm, line 408: mov rbx, qword [bufptmp]

When the buffer length is zero (and the buffer address is 64-bit aligned), the above line reads a qword at the buffer address, which is technically reading past the end of the buffer. In particular, calling the function with (NULL, 0, crc) results in a segfault.
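A hedged scalar CRC32C (the "iscsi" Castagnoli polynomial, reflected form 0x82F63B78) with an explicit zero-length early-out shows the expected behavior. Note this uses the conventional pre/post inversion; ISA-L's crc32_iscsi init/xor convention may differ, so only the zero-length handling is the point here:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78, with
 * the conventional pre/post inversion.  The len == 0 early return
 * guarantees that a (NULL, 0) call never touches memory. */
static uint32_t crc32c_ref(const uint8_t *buf, size_t len, uint32_t crc)
{
    if (len == 0)
        return crc;          /* nothing to read; buf may be NULL */
    crc = ~crc;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return ~crc;
}
```

The assembly's speculative qword load at the buffer start is what this early-out avoids; alternatively the load could be made conditional on len != 0.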

Why did you say the Vandermonde matrix 'does not guarantee invertable for every sub matrix'?

/**
 * @brief Generate a matrix of coefficients to be used for encoding.
 *
 * Vandermonde matrix example of encoding coefficients where high portion of
 * matrix is identity matrix I and lower portion is constructed as 2^{i*(j-k+1)}
 * i:{0,k-1} j:{k,m-1}. Commonly used method for choosing coefficients in
 * erasure encoding but does not guarantee invertable for every sub matrix. For
 * large k it is possible to find cases where the decode matrix chosen from
 * sources and parity not in erasure are not invertable. Users may want to
 * adjust for k > 5.
 *
 * @param a [mxk] array to hold coefficients
 * @param m number of rows in matrix corresponding to srcs + parity.
 * @param k number of columns in matrix corresponding to srcs.
 * @returns none
 */

void gf_gen_rs_matrix(unsigned char *a, int m, int k);
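A hedged reconstruction of the construction the comment describes: identity rows on top, and parity row j holding consecutive powers of gen = 2^{j-k+1} in GF(2^8), so entry (j, i) is 2^{i*(j-k+1)}. This is an illustration built from the doc comment, not ISA-L's actual source:

```c
#include <stdint.h>
#include <string.h>

/* Scalar GF(2^8) multiply, reduction polynomial 0x11d. */
static uint8_t gf_mul_ref(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1)
            p ^= a;
        b >>= 1;
        a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1d) : (uint8_t)(a << 1);
    }
    return p;
}

/* Sketch of the layout: rows 0..k-1 are the identity, and parity row
 * j (j >= k) holds gen^0, gen^1, ..., gen^{k-1} with gen = 2^{j-k+1},
 * matching the comment's 2^{i*(j-k+1)} formula. */
static void gen_rs_matrix_sketch(uint8_t *a, int m, int k)
{
    memset(a, 0, (size_t)(m * k));
    for (int i = 0; i < k; i++)
        a[k * i + i] = 1;            /* identity on top */
    uint8_t gen = 2;
    for (int j = k; j < m; j++) {
        uint8_t p = 1;
        for (int i = 0; i < k; i++) {
            a[k * j + i] = p;        /* gen^i */
            p = gf_mul_ref(p, gen);
        }
        gen = gf_mul_ref(gen, 2);    /* next row uses the next power */
    }
}
```

Because this is a Vandermonde-style construction over the full matrix (identity plus power rows) rather than a Cauchy construction, some k x k submatrices drawn from surviving rows can be singular, which is exactly the non-invertibility the comment warns about for larger k.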

nasm-2.12.03rc1 warnings

While trying to compile with nasm-2.12.03rc1 on CentOS 7:

  CCAS     igzip/crc_utils_04.lo
igzip/crc_utils_04.s:112: warning: absolute address can not be RIP-relative
igzip/crc_utils_04.s:134: warning: absolute address can not be RIP-relative
igzip/crc_utils_04.s:153: warning: absolute address can not be RIP-relative
igzip/crc_utils_04.s:159: warning: absolute address can not be RIP-relative
igzip/crc_utils_04.s:164: warning: absolute address can not be RIP-relative

Is this OK or does this indicate a deeper problem?

NASM Compile Error: invalid combination of opcode and operands

Getting the following errors with the latest builds on nasm v2.11.08:
igzip/igzip_gen_icf_map_lh1_06.s:284: error: invalid combination of opcode and operands
igzip/igzip_gen_icf_map_lh1_06.s:371: error: invalid combination of opcode and operands

Build fails on Mac OS/X Sierra

In file included from igzip/igzip_stateless_base.c:3:
igzip/huffman.h:144:9: error: always_inline function '_mm_crc32_u32' requires target feature 'sse4.2', but would be
inlined into function 'compute_hash' that is compiled without support for 'sse4.2'
return _mm_crc32_u32(0, data);
^
1 error generated.

crc16_t10dif_copy forward and reverse may be necessary if dst and src overlap

Hi,

I'm developing DIF insertion and strip feature by stream fashion by using crc16_t10dif_copy in SPDK project.

The idea is that when we use sockets, incoming data can be split into multiple packets, and we read those packets sequentially into a data buffer. I want to insert DIF into the data buffer without using any bounce buffer. I also want to strip DIF from data buffer without any bounce buffer.

However, when I use crc16_t10dif_copy, DIF insertion destroys the original data because the src data is overwritten before the CRC is calculated. On the other hand, DIF strip works.

So, I think that crc16_t10dif_copy_reverse will be necessary for DIF insertion and crc16_t10dif_copy_forward will be necessary for DIF strip.

Does ISA-L's crc16_t10dif_copy cover such an overlapping case? I think if we emulate hardware behavior, we will have to cover it.

I will work around this for now by using crc_t10dif + memmove instead of crc_t10dif_copy. (We can't use memcpy because some tools complain about overlapping buffers with memcpy.)

Any feedback is very appreciated.

Thanks,
Shuhei

Facing JVM corruption with isa-l enabled

I have built a Hadoop package with isa-l enabled, using this command:

mvn clean package -Pdist -Pnative -DskipTests -Dmaven.javadoc.skip=true -Dtar -Dcontainer-executor.conf.dir=/etc/yarn-executor/ -Drequire.snappy -Dsnappy.prefix=/data0/snappy/ -Drequire.isal=true -Disal.prefix=/usr/include -Disal.lib=/usr/lib64/ -Dbundle.isal=true

And checknative succeeds with isa-l version 2.0.25:

Native library checking:
hadoop:  true /software/servers/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /lib64/libsnappy.so.1
lz4:     true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /lib64/libcrypto.so
ISA-L:   true /software/servers/hadoop/lib/native/libisal.so.2

But when I started to do some EC conversion, something went wrong:

[2019-03-19T17:57:25] (repeated logger prefix trimmed for readability)
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f318c0bdc4c, pid=42505, tid=0x00007f318cc49700
#
# JRE version: OpenJDK Runtime Environment (8.0) (build 1.8.0-internal-root_2019_03_13_15_59-b00)
# Java VM: OpenJDK 64-Bit Server VM (25.71-b00 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  [libc.so.6+0x7fc4c]  cfree+0x1c
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /software/servers/hadoop/hs_err_pid42505.log

Here is the hs log
hs_err_pid42505.log

Any suggestions? Thanks

Release 2.20 breaks ABI

+++ b/symbols
@@ -324,11 +324,7 @@ libisal.so.2:pq_gen_slver_00030128
libisal.so.2:pq_gen_sse
libisal.so.2:pq_gen_sse_slver
libisal.so.2:pq_gen_sse_slver_00090032
-libisal.so.2:read_header
-libisal.so.2:read_header_stateful
libisal.so.2:set_huff_codes
-libisal.so.2:write_header
-libisal.so.2:write_trailer
libisal.so.2:xor_check
libisal.so.2:xor_check_base
libisal.so.2:xor_check_slver

it's dropping symbols... this looks like a bug...

support `make dist` and `make distcheck` - make release tarballs

Please consider making "release tarballs" and publishing them with every release.

The canonical way of making these is using make dist. Once that creates a packaged $project-$version.tar.$ext tarball, you can upload these to github by going to https://github.com/01org/isa-l/releases, clicking on the tag and/or edit release and then drag-and-drop your tarball release file.

Currently, make dist and make distcheck fail, so it's not possible to create autotools release files. The error is:

$ make distcheck
make --no-print-directory dist-xz am__post_remove_distdir='@:'
Building isa-l.h
make[1]: *** No rule to make target 'About_bsd.txt', needed by 'distdir'.  Stop.
Makefile:2024: recipe for target 'dist' failed
make: *** [dist] Error 2

I assume it's because the file mentioned was not added to git, and thus this fails.

Erasure code support for PowerPC

Hi!

I've been working on accelerating erasure codes on 64-bit PowerPC systems.

I've found that ISA-L has significantly lower overhead than jerasure/gf-complete due to ISA-L's performance-orientated design. As such, I've written a bunch of PowerPC assembler erasure-code routines for ISA-L. They are largely self-contained and require little change to the generic C code. There's no impact to performance or functionality on x86.

If I were to submit these as a pull-request, would you be open to accepting them?

Just to be clear, I'm not asking you to accept the code sight unseen, I'm just asking for an "in principle" answer on whether you'd be OK with non-x86 code or if you'd reject it out-of-hand.

Thanks in advance!
