chronicle-bytes's Issues

Optimised the call to Math.round()

It was shown to add latency:

	at java.lang.Double.doubleToRawLongBits(Native Method)
	at java.lang.Math.round(Math.java:717)
	at net.openhft.chronicle.bytes.ByteStringAppender.append(ByteStringAppender.java:176)
	at software.chronicle.fix44min.generators.MessageGenerator.orderQty(MessageGenerator.java:199)
	at software.chronicle.fix44min.messages.NewOrderSingle.copyTo(NewOrderSingle.java:128)
	at software.chronicle.fix44min.messages.NewOrderSingle.copyTo(NewOrderSingle.java:113)
	at software.chronicle.fix.staticcode.VanillaSessionHandler.copyToMG(VanillaSessionHandler.java:598)
	at software.chronicle.fix.staticcode.VanillaSessionHandler.sendMessage(VanillaSessionHandler.java:502)
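
One possible mitigation, as a hedged sketch only (not necessarily the fix that was applied): for non-negative finite values, half-up rounding can be done with a plain cast, avoiding the Double.doubleToRawLongBits call visible in the trace above.

    // Hedged sketch, not the committed fix: half-up rounding for non-negative
    // finite doubles via a cast, skipping Math.round()'s trip through
    // Double.doubleToRawLongBits seen in the stack trace above.
    static long roundNonNegative(double d) {
        return (long) (d + 0.5); // valid only for 0 <= d < Long.MAX_VALUE
    }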

Add Javadoc to net.openhft.chronicle.bytes.Bytes#wrapForRead(byte[]) stating that it creates garbage

Please also add something similar (to the example below) for the following methods:

(screenshot listing the affected methods omitted)

To avoid garbage, suggest in the Javadoc something like the following (you will have to tailor it for each method):


import net.openhft.chronicle.bytes.Bytes;
import org.junit.Assert;
import org.junit.Test;

import java.nio.ByteBuffer;
import java.nio.charset.Charset;

/**
 * Created by Rob Austin
 */
public class ChronicleBytesToByteBufferTest {

    private static final Charset ISO_8859 = Charset.forName("ISO-8859-1");
    private static final String EXPECTED = "hello world";
    private static final Bytes<ByteBuffer> b = Bytes.elasticByteBuffer();

    @Test
    public void test() {

        byte[] byteArray = EXPECTED.getBytes(ISO_8859);

        b.clear();
        b.write(byteArray);
        assert b.underlyingObject() instanceof ByteBuffer;

        ByteBuffer byteBuffer = (ByteBuffer) b.underlyingObject();

        // read the data back via absolute gets, so the buffer's position is
        // irrelevant to the loop
        final StringBuilder sb = new StringBuilder();

        for (int i = 0; i < EXPECTED.length(); i++) {
            sb.append((char) byteBuffer.get(i));
        }

        Assert.assertEquals(EXPECTED, sb.toString());

    }
}

Translate method in NativeBytesStore

Every read/write method in the NativeBytesStore class calls the translate method, but usually nothing happens there; the exception is MappedBytesStore. It might be better either to override this method in the MappedBytesStore class and leave it empty in the NativeBytesStore class, or to go further and add an additional class between MappedBytesStore and NativeBytesStore. This new class (perhaps called SubBytesStore) would implement all the read/write methods using the translate method, and the methods in NativeBytesStore would not call translate at all.
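
A sketch of that split (class and method names follow the suggestion above; this is not the actual chronicle-bytes hierarchy):

    // Sketch of the proposed split: the plain native store never calls
    // translate(); only the intermediate store pays for translation.
    abstract class NativeStoreSketch {
        abstract byte readByteAt(long address);

        byte readByte(long offset) {
            return readByteAt(offset); // hot path: no translate() call at all
        }
    }

    abstract class SubBytesStoreSketch extends NativeStoreSketch {
        abstract long translate(long offset);

        @Override
        byte readByte(long offset) {
            return readByteAt(translate(offset)); // e.g. MappedBytesStore
        }
    }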

ClosedByInterruptException not handled in MappedBytes#realCapacity

Hi,
In net.openhft.chronicle.bytes.MappedBytes#realCapacity (v1.9.18), ClosedByInterruptException is caught and ignored. This causes problems with the table directory listing used in Chronicle Queue when a thread is interrupted: there is a WARN followed by an ERROR, and it looks like the original interruption is lost.

WARN  [Slf4jExceptionHandler$2] Unable to obtain the real size for /tmp/junit6367516383187162035/junit3970794415465406300/killConsumers/offset-default/directory-listing.cq4t
net.openhft.chronicle.core.io.IORuntimeException: java.nio.channels.ClosedByInterruptException
	at net.openhft.chronicle.bytes.MappedFile.actualSize(MappedFile.java:418)
	at net.openhft.chronicle.bytes.MappedBytes.realCapacity(MappedBytes.java:145)
	at net.openhft.chronicle.queue.impl.table.SingleTableStore.acquireValueFor(SingleTableStore.java:203)
	at net.openhft.chronicle.queue.impl.single.TableDirectoryListing.lambda$init$0(TableDirectoryListing.java:53)
	at net.openhft.chronicle.queue.impl.table.SingleTableStore.doWithExclusiveLock(SingleTableStore.java:246)
	at net.openhft.chronicle.queue.impl.single.TableDirectoryListing.init(TableDirectoryListing.java:52)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.<init>(SingleChronicleQueue.java:141)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder.build(SingleChronicleQueueBuilder.java:188)
       ...
Caused by: java.nio.channels.ClosedByInterruptException
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:315)
	at net.openhft.chronicle.bytes.MappedFile.actualSize(MappedFile.java:415)

ERROR [AbstractCallablePool] Exception catch in pool: readCheckOffset0 failed. Offset: 0 + adding: 112 > limit: 0 (given: false)
net.openhft.chronicle.bytes.util.DecoratedBufferUnderflowException: readCheckOffset0 failed. Offset: 0 + adding: 112 > limit: 0 (given: false)
	at net.openhft.chronicle.bytes.AbstractBytes.readCheckOffset0(AbstractBytes.java:676)
	at net.openhft.chronicle.bytes.AbstractBytes.readCheckOffset(AbstractBytes.java:661)
	at net.openhft.chronicle.bytes.MappedBytes.readCheckOffset(MappedBytes.java:186)
	at net.openhft.chronicle.bytes.AbstractBytes.readOffsetPositionMoved(AbstractBytes.java:421)
	at net.openhft.chronicle.bytes.AbstractBytes.readSkip(AbstractBytes.java:249)
	at net.openhft.chronicle.bytes.AbstractBytes.readSkip(AbstractBytes.java:32)
	at net.openhft.chronicle.wire.AbstractWire.readDataHeader(AbstractWire.java:201)
	at net.openhft.chronicle.wire.WireIn.readDataHeader(WireIn.java:183)
	at net.openhft.chronicle.queue.impl.table.SingleTableStore.acquireValueFor(SingleTableStore.java:204)
	at net.openhft.chronicle.queue.impl.single.TableDirectoryListing.lambda$init$0(TableDirectoryListing.java:53)
	at net.openhft.chronicle.queue.impl.table.SingleTableStore.doWithExclusiveLock(SingleTableStore.java:246)
	at net.openhft.chronicle.queue.impl.single.TableDirectoryListing.init(TableDirectoryListing.java:52)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.<init>(SingleChronicleQueue.java:141)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder.build(SingleChronicleQueueBuilder.java:188)

I will try to create a unit test.
Regards
ben
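
A minimal sketch of one possible fix (not the project's actual patch): rather than swallowing ClosedByInterruptException, restore the thread's interrupt status before rethrowing, so the original interruption is not lost.

    import java.io.IOException;
    import java.nio.channels.ClosedByInterruptException;
    import java.nio.channels.FileChannel;

    // Minimal sketch, not the actual patch: preserve the interrupt status so
    // callers such as realCapacity() can still observe the interruption.
    final class InterruptPreservingSize {
        static long actualSize(FileChannel fileChannel) throws IOException {
            try {
                return fileChannel.size();
            } catch (ClosedByInterruptException e) {
                Thread.currentThread().interrupt(); // restore the interrupt status
                throw e;
            }
        }
    }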

HeapBytesStore checking wrong offset

Shouldn't HeapBytesStore lines 336 and 345 check offsetInRDO instead of offset?

In other words, both lines should be changed from:
writeCheckOffset(offset, length);

to:
writeCheckOffset(offsetInRDO, length);

StreamingInputStream zero length read behaviour

The behaviour of StreamingInputStream no longer adheres to the InputStream contract.

From the InputStream Javadoc:

 * <p> If <code>len</code> is zero, then no bytes are read and
 * <code>0</code> is returned; otherwise, there is an attempt to read at
 * least one byte. If no byte is available because the stream is at end of
 * file, the value <code>-1</code> is returned; otherwise, at least one
 * byte is read and stored into <code>b</code>.

The code of StreamingInputStream was changed as a result of #13, so it now returns -1 even when the end of the stream has not been reached, if a len of 0 is specified.

If 0 is passed into the read of StreamingDataInput, it returns 0 and no read is attempted.

The right fix is probably to add a guard such as if (len == 0) return 0; before any interaction is attempted with the underlying StreamingDataInput, as sketched below.

I encountered this through its interaction with https://github.com/EsotericSoftware/kryo, which I was using to serialise/deserialise objects written to a Chronicle Queue. I admit it's an edge case, and it's slightly strange to request 0 bytes, but going by the Javadoc I would argue there is a bug here in StreamingInputStream.
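
A sketch of the suggested guard, expressed as a thin wrapper for illustration (the real fix would live inside StreamingInputStream itself):

    import java.io.IOException;
    import java.io.InputStream;

    // Sketch only: a zero-length request returns 0 and never touches the
    // underlying stream, as the InputStream contract requires.
    final class ZeroLengthGuardStream extends InputStream {
        private final InputStream in;

        ZeroLengthGuardStream(InputStream in) { this.in = in; }

        @Override
        public int read() throws IOException { return in.read(); }

        @Override
        public int read(byte[] b, int off, int len) throws IOException {
            if (len == 0)
                return 0; // not end-of-stream: no bytes were requested
            return in.read(b, off, len);
        }
    }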

BytesMarshallable for array fields

See

    if (type.isArray())
       throw new UnsupportedOperationException("TODO");

in net.openhft.chronicle.bytes.BytesMarshaller.FieldAccess#create

copyTo bug leads to inefficient double loop

In RandomDataInput, the default implementation of copyTo has a bug where it loops twice:

    int len = (int) Math.min(bb.remaining(), readRemaining());
    int i;
    for (i = 0; i < len - 7; i += 8)
        bb.putLong(i, readLong(start() + i));
    for (i = 0; i < len; i++)
        bb.put(i, readByte(start() + i));
    return len;

I think the intention was to copy 8 bytes at a time using putLong, then copy the remaining using put. The current loop copies 8 at a time, then copies them all again 1 at a time.
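
A corrected sketch of the same fragment (hedged, not the actual patch); the only change is that the second loop continues from where the 8-byte loop stopped instead of re-initialising i:

    int len = (int) Math.min(bb.remaining(), readRemaining());
    int i;
    for (i = 0; i < len - 7; i += 8)
        bb.putLong(i, readLong(start() + i));
    for (; i < len; i++) // continue from the 8-byte loop's final i
        bb.put(i, readByte(start() + i));
    return len;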

NativeBytesStore.toTemporaryDirectByteBuffer() doesn't strongly reference the memory it points to

I was having issues where deserialization would fail with an out of range value here https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tools/fqltool/Dump.java#L84

If I change it to use toByteArray() it works consistently. java.nio.DirectByteBuffer has an attachment field that can be set to whatever the buffer needs to strongly reference.

I can't say with 100% certainty this is what is happening, since it is surprising to get so unlucky. It fails the first time almost every time.
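
A defensive sketch of the workaround described above (assuming the toByteArray() call mentioned): copy into a heap array whose lifetime is independent of the native memory, at the cost of one allocation.

    import java.nio.ByteBuffer;
    import net.openhft.chronicle.bytes.Bytes;

    // Sketch of the workaround: the heap copy owns its memory, so nothing is
    // reclaimed out from under the returned buffer.
    final class SafeBufferCopy {
        static ByteBuffer safeCopy(Bytes<?> bytes) {
            byte[] copy = bytes.toByteArray(); // one allocation per call
            return ByteBuffer.wrap(copy);
        }
    }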

read(byte[], int, int) in StreamingInputStream doesn't signal the end of the stream properly

I'm trying to use the StreamingInputStream interface and ran into an issue where reading the stream never exits. Looking at the code I see

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        try {
            return in.read(b, off, len);
        } catch (IORuntimeException e) {
            throw new IOException(e);
        }
    }

The in.read method returns the length of the data read, which is always 0 or more, but the end of the stream is supposed to be signalled by -1. The method below it, read(), seems to handle this properly.
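
A hedged sketch of a fix, as a drop-in replacement for the method quoted above (not the project's actual patch): map a zero-byte result for a non-empty request to -1, as the single-byte read() already does.

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        if (len == 0)
            return 0; // per the InputStream contract (see the previous issue)
        try {
            int n = in.read(b, off, len);
            return n == 0 ? -1 : n; // 0 bytes for a non-empty request means end-of-stream
        } catch (IORuntimeException e) {
            throw new IOException(e);
        }
    }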

BufferUnderflowException reading a UTF-8 message

TcpEventHandler -
java.nio.BufferUnderflowException
at net.openhft.chronicle.bytes.BytesInternal.parseUtf8_SB1(BytesInternal.java:352) ~[chronicle-bytes-1.7.17.jar:?]
at net.openhft.chronicle.bytes.BytesInternal.parseUtf8(BytesInternal.java:114) ~[chronicle-bytes-1.7.17.jar:?]
at net.openhft.chronicle.bytes.StreamingDataInput.parseUtf8(StreamingDataInput.java:276) ~[chronicle-bytes-1.7.17.jar:?]
at net.openhft.chronicle.wire.BinaryWire.getStringBuilder(BinaryWire.java:629) ~[chronicle-wire-1.7.14.jar:?]
at net.openhft.chronicle.wire.BinaryWire.readText(BinaryWire.java:1071) ~[chronicle-wire-1.7.14.jar:?]

Feature request: writeUTFΔTruncated

I'd like to request a feature for Bytes (really, the RandomDataOutput interface).

Would it be possible to create a string write method that truncates the string to fit the available space, rather than throwing an exception when the string does not fit?

Currently, if we need to avoid the exception and we have a string that may be too large, we have to call substring() on the string we write, creating some garbage. With a new writeUTFΔTruncated, we would be able to handle this situation without garbage.

Thank you.
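
Until such a method exists, a hedged sketch of the idea using only existing calls (writePosition(), writeLimit(), writeByte()); the helper name is hypothetical, and it is ASCII-only for brevity, since real UTF-8 truncation must also avoid splitting a multi-byte sequence:

    import net.openhft.chronicle.bytes.Bytes;

    // Hypothetical helper, not an existing API: write as much of the text as
    // fits between writePosition and writeLimit, without allocating.
    final class TruncatingWrite {
        static void writeAsciiTruncated(Bytes<?> bytes, CharSequence cs) {
            long room = bytes.writeLimit() - bytes.writePosition();
            int n = (int) Math.min(cs.length(), room);
            for (int i = 0; i < n; i++)
                bytes.writeByte((byte) cs.charAt(i)); // no substring(), no garbage
        }
    }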

Native Bytes resize bug

The log message always reports the "Resizing buffer was" and "needs" figures as the same value, which must be wrong.

[main] WARN net.openhft.chronicle.bytes.NativeBytes - Resizing buffer was 32364 KB, needs 32364 KB, new-size 48548 KB
[main] WARN net.openhft.chronicle.bytes.NativeBytes - Resizing buffer was 48548 KB, needs 48548 KB, new-size 72824 KB
[main] WARN net.openhft.chronicle.bytes.NativeBytes - Resizing buffer was 72824 KB, needs 72824 KB, new-size 109236 KB
------- 695271
[main] WARN net.openhft.chronicle.bytes.NativeBytes - Resizing buffer was 109236 KB, needs 109236 KB, new-size 163856 KB
[main] WARN net.openhft.chronicle.bytes.NativeBytes - Resizing buffer was 163856 KB, needs 163856 KB, new-size 245784 KB

chronicle-bytes version 1.7.34

ObjectInput/OutputStream and standard Java serialization interop with Bytes/Wire improvement proposal

I have an idea how it could be improved significantly compared to what we currently do (creating a new ObjectInputStream/OutputStream on each operation):

Copy ObjectInputStream from the Android Open Source Project (Apache licensed, unlike the GPL OpenJDK) and modify it to make it reusable. Only very minor modifications are needed: expose the ability to clear the sets of references and classes encountered since the start of the stream. Its internal buffering could also be removed; it exists because the class assumes an arbitrary InputStream/OutputStream backend, but buffering is not needed for a Bytes backend.

Then cache this reusable class in a ThreadLocal inside Bytes or Wire.
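
A sketch of the caching pattern only (the reusable AOSP-derived stream does not exist yet, so this reuses just the backing buffer; the proposal is to cache a reusable stream the same way):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;

    // Sketch of the ThreadLocal caching pattern. Java's own ObjectOutputStream
    // is not reusable, so only the backing buffer is cached here.
    final class ThreadLocalSerializer {
        private static final ThreadLocal<ByteArrayOutputStream> BUFFER =
                ThreadLocal.withInitial(ByteArrayOutputStream::new);

        static byte[] serialize(Object o) throws IOException {
            ByteArrayOutputStream bos = BUFFER.get();
            bos.reset(); // reuse the buffer; a reusable stream would also clear its handle table here
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            return bos.toByteArray();
        }
    }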

Can't pass tests on systems with locales that use a comma (",") as the decimal separator

Tests in error:
net.openhft.chronicle.bytes.BytesTest.testWriter0
Run 1: BytesTest.testWriter:605 » InputMismatch

        @NotNull Scanner scan = new Scanner(bytes.reader());
        scan.useLocale(Locale.ENGLISH);

Adding scan.useLocale(Locale.ENGLISH); makes the test pass.

The green countries on the map at https://upload.wikimedia.org/wikipedia/commons/thumb/a/a8/DecimalSeparator.svg/800px-DecimalSeparator.svg.png use the comma separator by default.

Bytes append double with specified number of digits

Bytes currently provides a method to append a long value with a specified number of digits:

default R append(long offset, long value, int digits)

but does not provide a similar

default B append(double d, int decimalPlaces, int digits)

Why is that? It would be very handy to have overloaded methods dealing with the number of digits.

Keep up the excellent work!

André
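
In the meantime, a hedged sketch of the requested behaviour built from existing calls (append(long) and append(char)); the helper is hypothetical, rounding is half-up, and it assumes a non-negative value with decimalPlaces >= 1:

    import net.openhft.chronicle.bytes.ByteStringAppender;

    // Hypothetical helper, not an existing API: append a double with a fixed
    // number of decimal places. Simplified: non-negative values only.
    final class FixedPointAppend {
        static void appendFixed(ByteStringAppender<?> a, double d, int decimalPlaces) {
            long factor = 1;
            for (int i = 0; i < decimalPlaces; i++)
                factor *= 10;
            long scaled = Math.round(d * factor);
            a.append(scaled / factor);
            a.append('.');
            long frac = scaled % factor;
            for (long f = factor / 10; f > 1 && frac < f; f /= 10)
                a.append('0'); // left-pad the fraction to decimalPlaces digits
            a.append(frac);
        }
    }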

WeakReferenceCleaner now removed from REFERENCE_MAP on scheduleForClean() but breaks a test

The following enhancement, intended to stop a map growing large under load and causing jitter, causes a test that monitors this map to blow up.

net.openhft.chronicle.queue.impl.single.AppenderFileHandleLeakTest#tailerShouldReleaseFileHandlesAsQueueRolls

    public void scheduleForClean() {
        SCHEDULED_CLEAN.add(this);
        REFERENCE_MAP.remove(this); // added to stop this map blowing up.
    }

UncheckedNativeBytes does not extend NativeBytes

The UncheckedNativeBytes class does not extend the NativeBytes class, which makes unchecked bytes non-elastic. I was trying to get elastic unchecked Bytes with Bytes.allocateElasticDirect(capacity).unchecked(true); this way I would have good performance and still the ability to grow my buffer manually.

Is this implementation a design decision or a mistake?

Improve handling of large Strings.

The current implementation in BytesInternal fails to write if the String is longer than 1/4 of the block size of a queue.
parseUtf8_SB1 also fails to read Strings of more than 1 MB correctly.

Option to truncate MappedFile

I'm using Bytes as an amazing replacement for ByteBuffer, in order to produce large output files of a specific binary format.

Only one thing is missing right now: once I've finished writing all the data, the file still contains a bunch of zeros at the end, i.e. unused data from the last page, between writePosition and realCapacity. Those zeros are implicitly present in ElasticBytes, of course, but I need to truncate the actual file to the correct size.
For example:

out.writeLimit(out.writePosition())

This might actually go ahead and call MappedFile.fileChannel.truncate(writeLimit()).

Since the fileChannel is private, I currently have to open a new channel and truncate it myself, after releasing the Bytes.
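
That workaround, as a sketch (the path and byte count are the caller's; this is not a proposed chronicle-bytes API):

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Sketch of the current workaround: after releasing the MappedBytes,
    // reopen the file and truncate it to the bytes actually written.
    final class TruncateAfterWrite {
        static void truncate(Path file, long writtenBytes) throws IOException {
            try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
                ch.truncate(writtenBytes); // drop the zero padding from the last page
            }
        }
    }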
