
rust-zookeeper's Introduction


ZooKeeper client written 100% in Rust

This library is intended to be equivalent to the official (low-level) ZooKeeper client that ships with the official ZK distribution.

I have plans to implement recipes and more complex Curator-like logic as well, but that takes a lot of time, so pull requests are more than welcome! At the moment only PathChildrenCache is implemented.

Usage

Put this in your Cargo.toml:

[dependencies]
zookeeper = "0.8"

And this in your crate root:

extern crate zookeeper;

Examples

Check the examples directory
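
For a quick start, here is a minimal sketch pieced together from the snippets elsewhere on this page (it assumes a local server on 127.0.0.1:2181; error handling is kept to unwrap for brevity):

use std::time::Duration;
use zookeeper::{Acl, CreateMode, WatchedEvent, Watcher, ZooKeeper};

struct LoggingWatcher;
impl Watcher for LoggingWatcher {
    fn handle(&self, e: WatchedEvent) {
        println!("{:?}", e)
    }
}

fn main() {
    // Connect with a 5-second session timeout.
    let zk = ZooKeeper::connect("127.0.0.1:2181", Duration::from_secs(5), LoggingWatcher).unwrap();

    // Create an ephemeral node and read it back.
    let path = zk.create("/demo",
                         vec![1, 2],
                         Acl::open_unsafe().clone(),
                         CreateMode::Ephemeral)
                 .unwrap();
    let (data, _stat) = zk.get_data(&path, false).unwrap();
    println!("{} = {:?}", path, data);
}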

Feature and Bug Handling

If you find a bug or would like to see a feature implemented, please raise an issue or send a pull request.

Documentation

Documentation is available on the gh-pages branch.

Build and develop

cd zk-test-cluster
mvn clean package
cd ..
cargo test

Contributing

All contributions are welcome! If you need some inspiration, please take a look at the currently open issues.


rust-zookeeper's Issues

Implement building blocks of multi-key transactions

rust-zookeeper would need transaction support first, though (#4). To do this correctly we'd need something similar to the Java client's multi method, and possibly something like the Transaction builder, which seems to be a wrapper on top of multi. That would also involve creating something similar to Op, and finally, support for serializing and deserializing the multi requests and responses, similar to MultiTransactionRecord and MultiResponse.
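
For concreteness, a rough sketch of what the Op building block could look like in Rust (all names and shapes below are assumptions modeled on the Java client, not an existing API):

// Illustrative only: one variant per operation that may appear in a multi request.
pub enum Op {
    Create { path: String, data: Vec<u8>, acl: Vec<Acl>, mode: CreateMode },
    Delete { path: String, version: i32 },
    SetData { path: String, data: Vec<u8>, version: i32 },
    Check { path: String, version: i32 },
}

// A hypothetical multi method would then take a batch and return one result per op:
// fn multi(&self, ops: Vec<Op>) -> ZkResult<Vec<OpResult>>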

Curator style reconnecting client

An important feature of the curator client is that it can be relied on to succeed or retry until it does.
This seems to greatly affect how some recipes are implemented.

The existing zk methods are low-level in that they try once and either succeed or fail, passing e.g. a ConnectionLoss back to the caller.

An example Curator invocation that is tricky to map onto such simple semantics is create().withProtection().withMode(EPHEMERAL_SEQUENTIAL).forPath(...).
The idea is that the call will retry across reconnects if necessary.
After a reconnect, it runs get_children on the created node's parent to see whether the node was actually created.
If it wasn't, it can return an error; if it was, it returns the path of the node it found.
This is behaviour that cannot reasonably be left for the user of the library to implement.

I think all the basic commands should be retryable, although probably via a separate, higher-level client datatype, possibly with the builder-pattern requests.
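
For illustration, a minimal sketch of such a retry layer (the helper and its policy are assumptions, not part of the current API):

use std::thread;
use std::time::Duration;
use zookeeper::{ZkError, ZkResult};

// Illustrative only: retry an operation on ConnectionLoss with a simple linear backoff.
fn with_retries<T, F: FnMut() -> ZkResult<T>>(mut op: F, max_retries: u64) -> ZkResult<T> {
    let mut attempt = 0;
    loop {
        match op() {
            Err(ZkError::ConnectionLoss) if attempt < max_retries => {
                attempt += 1;
                thread::sleep(Duration::from_millis(100 * attempt));
            }
            other => return other,
        }
    }
}

Note that blindly retrying create for sequential nodes is exactly the case that needs the protection/get_children check described above, so a real implementation would have to special-case it.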

Infinite Loop

Hello!!
When ZooKeeper is running the lib works fine, but when ZooKeeper is not running the lib responds with an infinite loop: "ERROR:zookeeper::io: Failed to write socket: Error { repr: Os { code: 57, message: "Socket is not connected" } }"
Is there any way to detect and stop when this error occurs?

Thanks in advance!!
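
One way to detect this today, sketched under the assumption that the state listener fires on these transitions, is to watch the ZkState and shut down on terminal states:

use zookeeper::ZkState;

// Hedged sketch: stop the app when the client reports a terminal state,
// instead of letting the io thread retry forever.
zk.add_listener(|state| match state {
    ZkState::Closed | ZkState::NotConnected => {
        eprintln!("zookeeper session lost: {:?}", state);
        std::process::exit(1); // or signal a shutdown channel instead
    }
    _ => (),
});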

Any plans on command-line utility

Hi,

I need some command-line utility to run simple zookeeper queries. I think this library is a good base for it.

The reason I don't want to use the default ZooKeeper CLI or zk-shell is that they depend on Java and Python respectively. In Rust we can make a static (or almost static) binary to run commands.

So do you want me to make a PR with some small command-line client or do you think it's better to keep it as a separate project?

rust-zookeeper can't create data with zetcd cluster

I run an etcd cluster and use zetcd to dispatch the ZooKeeper operations onto it.

zetcd --zkaddr 0.0.0.0:2181 --endpoints localhost:2379

and then I use rust-zookeeper to connect to zetcd and create data:

    // Connect to the test cluster
    let zk = ZooKeeper::connect("127.0.0.1:2181",
                                Duration::from_secs(5),
                                move |event: WatchedEvent| {
                                    info!("{:?}", event);
                                    if event.keeper_state == KeeperState::Disconnected {
                                        disconnects_watcher.fetch_add(1, Ordering::Relaxed);
                                    }
                                })
                 .unwrap();


    // Do the tests
    let create = zk.create("/test",
                           vec![8, 8],
                           Acl::open_unsafe().clone(),
                           CreateMode::Ephemeral);

It can connect to zetcd, but when it sends a create request, rust-zookeeper gets an error while parsing the response:

2021-08-23T03:07:16Z ERROR zookeeper::zookeeper] error closing zookeeper connection in drop: ConnectionLoss
test zk_test ... FAILED                                                                               
                                            
failures:                                                                                             
                                                                                       
---- zk_test stdout ----
thread 'io' panicked at 'Failed to parse ConnectResponse Error { kind: UnexpectedEof, message: "failed to fill whole buffer" }', src/io.rs:225:21
stack backtrace:
   0: rust_begin_unwind
             at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/std/src/panicking.rs:515:5
   1: std::panicking::begin_panic_fmt                                                            
             at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/std/src/panicking.rs:457:5
   2: zookeeper::io::ZkIo::handle_chunk        
             at ./src/io.rs:225:21 
   3: zookeeper::io::ZkIo::handle_response
             at ./src/io.rs:164:17
   4: zookeeper::io::ZkIo::ready_zk
             at ./src/io.rs:417:21
   5: zookeeper::io::ZkIo::ready
             at ./src/io.rs:366:19
   6: zookeeper::io::ZkIo::run
             at ./src/io.rs:569:17
   7: zookeeper::zookeeper::ZooKeeper::connect::{{closure}}
             at ./src/zookeeper.rs:78:44
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

and here is zetcd's log:

I0823 10:46:59.490173  461733 server.go:91] accepted remote connection "127.0.0.1:34456"
I0823 10:46:59.490318  461733 authconn.go:50] error reading connection request (EOF)
I0823 10:46:59.490324  461733 server.go:91] accepted remote connection "127.0.0.1:34458"
I0823 10:46:59.490347  461733 authconn.go:53] auth(&{ProtocolVersion:0 LastZxidSeen:0 TimeOut:5000 SessionID:0 Passwd:[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]})
I0823 10:46:59.491225  461733 pool.go:83] authresp=&{ProtocolVersion:0 TimeOut:5000 SessionID:7587856633219205200 Passwd:[49 166 157 159 194 53 181 70 167 83 236 228 211 170 130 6]}
I0823 10:46:59.491280  461733 server.go:64] serving serial session requests on id=694d7b67dd9f4850
I0823 10:46:59.491290  461733 session.go:59] starting the session... id=7587856633219205200
I0823 10:46:59.491305  461733 server.go:73] zkreq={xid:1 req:*zetcd.CreateRequest:&{Path:/test Data:[8 8] Acl:[{Perms:31 Scheme:world ID:anyone}] Flags:1}}
I0823 10:46:59.491327  461733 zklog.go:28] Create(1,{Path:/test Data:[8 8] Acl:[{Perms:31 Scheme:world ID:anyone}] Flags:1})
I0823 10:46:59.492012  461733 zketcd.go:53] Create(1) = (zxid=41); txnresp: {Header:cluster_id:14841639068965178418 member_id:10276657743932975437 revision:41 raft_term:2  Succeeded:true Responses:[response_put:<header:<revision:41 > >  response_put:<header:<revision:41 > >  response_put:<header:<revision:41 > >  response_put:<header:<revision:41 > >  response_put:<header:<revision:41 > >  response_put:<header:<revision:41 > >  response_put:<header:<revision:41 > >  response_put:<header:<revision:41 > >  response_put:<header:<revision:41 > > ]}
I0823 10:46:59.492477  461733 conn.go:139] conn.Send(xid=1, zxid=40, &{Path:/test})
I0823 10:46:59.501334  461733 server.go:73] zkreq={xid:0 err:"read tcp 127.0.0.1:2181->127.0.0.1:34458: read: connection reset by peer"}
I0823 10:46:59.501367  461733 session.go:61] finishing the session... id=7587856633219205200; expect revoke...

Error on drop after close

There seems to be an error that shows up when dropping ZooKeeper after previously closing the connection with ZooKeeper::close. I believe this is because ZooKeeper::drop will try to close the connection, and that will error out if it is already closed.

At this stage nothing particularly bad happens other than an unnecessary attempt to make a request to ZK (to close the session), and an error that is logged.

[<timestamp> ERROR zookeeper::zookeeper] error closing zookeeper connection in drop: ConnectionLoss
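
A possible fix, sketched under the assumption that the client can remember an explicit close (the closed flag below is hypothetical; the crate may track this differently):

// Hypothetical sketch: skip the close request in Drop if close() already ran.
impl Drop for ZooKeeper {
    fn drop(&mut self) {
        if self.closed {
            return; // session was already closed explicitly via ZooKeeper::close()
        }
        if let Err(e) = self.close() {
            error!("error closing zookeeper connection in drop: {:?}", e);
        }
    }
}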

Release version 0.2

  • Use released/stable dependencies
  • Remove feature flags (and code parts using them)

Lost watch events

Hi,

In my local setup I have seen several times how watch events seem to be lost by the ZooKeeper client. This is the code where I watch for events:

fn main () {
    let zk = ZooKeeper::connect("localhost:2181", Duration::from_secs(15), LoggingWatcher).unwrap();

    zk.add_watch(
        "/nodes",
        AddWatchMode::PersistentRecursive,
        |event: WatchedEvent| {
            println!("Event {:?}, path {:?}", event, event.path);
        },
    )
    .unwrap();


    loop {
        thread::sleep(Duration::from_secs(1800));
    }
}

And I'm running zookeeper via docker compose

  zookeeper:
    image: zookeeper
    restart: always
    ports:
      - 2181:2181
  zookeeper-cli:
    image: zookeeper
    command: zkCli.sh -server zookeeper

If I watch events from the ZooKeeper CLI I see all of them; however, some are not "seen" by the ZooKeeper Rust client. For example, in the screenshot below you can see a NodeCreated and a NodeDeleted for the path /nodes/node0000000079 in the CLI.

[screenshot: ZooKeeper CLI showing NodeCreated and NodeDeleted events for /nodes/node0000000079]

But I only get one watch event in Rust:

[screenshot: Rust client logging a single watch event]

Is this a bug in the Zookeeper rust client or should I be looking elsewhere?

Relicense under dual MIT/Apache-2.0

This issue was automatically generated. Feel free to close without ceremony if
you do not agree with re-licensing or if it is not possible for other reasons.
Respond to @cmr with any questions or concerns, or pop over to
#rust-offtopic on IRC to discuss.

You're receiving this because someone (perhaps the project maintainer)
published a crates.io package with the license as "MIT" xor "Apache-2.0" and
the repository field pointing here.

TL;DR the Rust ecosystem is largely Apache-2.0. Being available under that
license is good for interoperation. The MIT license as an add-on can be nice
for GPLv2 projects to use your code.

Why?

The MIT license requires reproducing countless copies of the same copyright
header with different names in the copyright field, for every MIT library in
use. The Apache license does not have this drawback. However, this is not the
primary motivation for me creating these issues. The Apache license also has
protections from patent trolls and an explicit contribution licensing clause.
However, the Apache license is incompatible with GPLv2. This is why Rust is
dual-licensed as MIT/Apache (the "primary" license being Apache, MIT only for
GPLv2 compat), and doing so would be wise for this project. This also makes
this crate suitable for inclusion and unrestricted sharing in the Rust
standard distribution and other projects using dual MIT/Apache, such as my
personal ulterior motive, the Robigalia project.

Some ask, "Does this really apply to binary redistributions? Does MIT really
require reproducing the whole thing?" I'm not a lawyer, and I can't give legal
advice, but some Google Android apps include open source attributions using
this interpretation. Others also agree with it.
But, again, the copyright notice redistribution is not the primary motivation
for the dual-licensing. It's stronger protections to licensees and better
interoperation with the wider Rust ecosystem.

How?

To do this, get explicit approval from each contributor of copyrightable work
(as not all contributions qualify for copyright, due to not being a "creative
work", e.g. a typo fix) and then add the following to your README:

## License

Licensed under either of

 * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
 * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)

at your option.

### Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
additional terms or conditions.

and in your license headers, if you have them, use the following boilerplate
(based on that used in Rust):

// Copyright 2016 rust-zookeeper Developers
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.

It's commonly asked whether license headers are required. I'm not comfortable
making an official recommendation either way, but the Apache license
recommends it in their appendix on how to use the license.

Be sure to add the relevant LICENSE-{MIT,APACHE} files. You can copy these
from the Rust repo for a plain-text
version.

And don't forget to update the license metadata in your Cargo.toml to:

license = "MIT OR Apache-2.0"

I'll be going through projects which agree to be relicensed and have approval
by the necessary contributors and doing these changes, so feel free to leave
the heavy lifting to me!

Contributor checkoff

To agree to relicensing, comment with:

I license past and future contributions under the dual MIT/Apache-2.0 license, allowing licensees to chose either at their option.

Or, if you're a contributor, you can check the box in this repo next to your
name. My scripts will pick this exact phrase up and check your checkbox, but
I'll come through and manually review this issue later as well.

Re-evaluate reconnection scenarios

  • Disconnected event should be sent out, and other events should be cleared at connection loss
  • The client should try to reconnect only until the sessionTimeout expires
  • Handle zxid at reconnect
  • Handle server initiated close
  • Close socket before reconnection
  • Handle specific socket errors, set timeout

Connecting to ipv6 fails on Windows 10

When trying to connect to a ZK server using an IPv6 address, the connection fails with Failed to read socket: Os { code: 10049, kind: AddrNotAvailable }.

please let zookeeper io socket set tcp keepalive

We have hit this issue when the router is down: the ZK server has sent an RST, but the client never receives it because the router is down. So the TCP connection status is ESTABLISHED, but the ZK session is completely dead and the client doesn't know it.

The Java code looks like:

import java.net.*;
import org.apache.zookeeper.client.ZKClientConfig;
import org.apache.zookeeper.ZooKeeper;

...

ZKClientConfig zkClientConfig = new ZKClientConfig();
zkClientConfig.setProperty(ZKClientConfig.ZOOKEEPER_SOCK_OPTS, "SO_KEEPALIVE=true");
ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, null, zkClientConfig);

...
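
On the Rust side, a hedged sketch of what enabling keepalive could look like with the socket2 crate, assuming the io layer can get at the underlying TcpStream (socket2 is not currently a dependency of this library):

use std::net::TcpStream;
use std::time::Duration;
use socket2::{Socket, TcpKeepalive};

// Sketch: enable TCP keepalive on an already-connected stream via socket2.
fn enable_keepalive(stream: TcpStream) -> std::io::Result<TcpStream> {
    let socket = Socket::from(stream);
    let keepalive = TcpKeepalive::new().with_time(Duration::from_secs(60));
    socket.set_tcp_keepalive(&keepalive)?;
    Ok(socket.into())
}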

Publish new release of zookeeper_derive

Following #49, I think you may also need to cut a new release of zookeeper_derive. The version argument included in /Cargo.toml seems to cause cargo to use zookeeper_derive from crates.io rather than the one included with zookeeper, which uses the old syn and quote dependencies.

Subscription is hidden

The ZooKeeper struct leaks information in the form of Subscription. Since the module that Subscription lives in is not public, the docs don't show anything except an opaque struct (that can't be imported or stored anywhere). I'd be happy to submit a PR fixing this.

Would exposing it in the crate root (pub use listeners::Subscription in lib.rs) be enough, or should listeners be made public, with the rest of the items in it given a pub(crate) modifier to hide them from the outside world?

Incorrect maintenance of zxid

The code currently blindly adopts the zxid of each new received header:

self.zxid = header.zxid;

However, as I recently discovered while implementing tokio-zookeeper, watch events actually yield a zxid response of -1! This means that, if you were to crash right after handling a watch event, and then reconnect, you would reconnect with zxid = -1, which is not the right value.

I think the code needs to be changed so that it adopts the zxid only if the response is not a watcher event.
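
A minimal sketch of that guard, given the line quoted above:

// Sketch: watcher events report zxid -1, so only adopt zxids from regular responses.
if header.zxid != -1 {
    self.zxid = header.zxid;
}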

Increment version and re-publish

I've noticed that the published version 0.5.9 does not include the Latch recipe.

The version needs to be bumped to 0.5.10 and published so this is included.

IO thread panic on Zookeeper client reconnect failure

I have a use case where I am utilizing a zookeeper::ZooKeeper client instance to maintain an ephemeral znode while my application does other work. I've found that the client panics in its reconnection logic on an internal thread when I kill the zookeeper server that I am testing with. This leaves my application running but without the client connection in a functional state.

The backtrace that I see is the following:

thread 'io' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1009:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:71
   2: std::panicking::default_hook::{{closure}}
             at src/libstd/sys_common/backtrace.rs:59
             at src/libstd/panicking.rs:211
   3: std::panicking::default_hook
             at src/libstd/panicking.rs:227
   4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
             at src/libstd/panicking.rs:491
   5: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:398
   6: std::panicking::try::do_call
             at src/libstd/panicking.rs:325
   7: core::char::methods::<impl char>::escape_debug
             at src/libcore/panicking.rs:95
   8: core::alloc::Layout::repeat
             at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libcore/macros.rs:26
   9: <zookeeper::acl::Acl as core::clone::Clone>::clone
             at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libcore/result.rs:808
  10: zookeeper::io::ZkIo::reconnect
             at /Users/dtw/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.5.5/src/io.rs:326
  11: zookeeper::io::ZkIo::ready_zk
             at /Users/dtw/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.5.5/src/io.rs:429
  12: zookeeper::io::ZkIo::ready
             at /Users/dtw/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.5.5/src/io.rs:366
  13: zookeeper::io::ZkIo::ready_timer
             at /Users/dtw/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.5.5/src/io.rs:549
  14: zookeeper::zookeeper::ZooKeeper::connect::{{closure}}
             at /Users/dtw/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.5.5/src/zookeeper.rs:78

I believe this is due to the unwrap() call at this line:

self.poll.deregister(&self.sock).unwrap();

I also have a listener on the connection that right now just logs the state transitions of the client. I see the client go through the Connected -> NotConnected and NotConnected -> Connecting state transitions before the panic happens.

In order to reproduce this behavior I've been using Docker to start and stop a local ZK server using the Docker Hub official Zookeeper Docker image. To run the server and expose a port, you can run docker run --rm -p 2181:2181 --name test-zookeeper -d zookeeper on a machine with docker installed.

I could handle the disconnect from within my application by watching for the NotConnected event and taking action from there (either exiting the rest of the application or trying to rebuild the client), but I think it would be nice to resolve some of this within the client library as well. It doesn't seem like the client's internal thread should panic, leaving the last client state event the caller receives as Connecting.

Two options that come to mind for handling this situation are:

  1. Instead of panicking, publish some sort of client state indicating it is permanently failed/not connected. It looks like ZkState::Closed might already fit the situation and could potentially be published in this case.
  2. Add a bit more logic to the reconnect routine to continually retry or perhaps have a definable policy to try more times before entering into the state I describe in option one.

What do you think about these options? Would you be amenable to a PR to at the least handle the case where the reconnect fails and we publish a ZkState::Closed event to the listeners?
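
For option 1, the unwrap site could plausibly turn into something like the following (notify_state is a hypothetical helper for publishing a ZkState to listeners; the real names may differ):

// Hypothetical sketch: surface a terminal state instead of panicking.
if let Err(e) = self.poll.deregister(&self.sock) {
    warn!("failed to deregister socket during reconnect: {:?}", e);
    self.notify_state(ZkState::Closed); // let callers observe the permanent failure
    return;
}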

Remove or replace deprecated mio components

For the mio-0.6 changes, many components were deprecated and moved to the mio-more package. Unfortunately, this does not yet exist on crates.io and might not ever exist there -- the whole thing is unmaintained at the moment.

This leaves a few options...

  1. Rely on mio-more and trust that it will be published/maintained at some point in the future.
  2. Pull the requisite deprecated mio code into this package (it's ~250 lines...so not a huge burden).
  3. Switch to using Tokio.

Option 1 is rather optimistic -- trusting that an unmaintained open-source project will acquire a maintainer runs counter to every experience I have had.

Option 2 is also on the optimistic side -- while the lift is small now (copy and paste), it might not be so easy in the future.

Option 3 is the biggest lift, but I think it is best for long-term maintenance. The majority of the community seems to have embraced Tokio for the slightly-higher-level things. This library will inevitably embrace the futures portion of Tokio when implementing async methods (#5).

Not reconnecting after session timeout

After session timeouts (for example by moving the process to the background), the connection isn't reestablished. Operations end with "error sending request: Disconnected".

ZooKeeper client goes into a dead loop if the connection port is not listening

I wrote a demo as follows. If nothing is listening on the port, the client tries to write to the socket in a dead loop.
use zookeeper::{ZooKeeper, Watcher, WatchedEvent};
use std::time::Duration;
use std::thread;

struct LoggingWatcher;
impl Watcher for LoggingWatcher {
    fn handle(&self, e: WatchedEvent) {
        println!("{:?}", e)
    }
}

fn main() {
    env_logger::init();
    let url = "127.0.0.1:2081";
    let zk = ZooKeeper::connect(url, Duration::from_secs(1), LoggingWatcher).unwrap();

    loop {
        thread::sleep(Duration::from_secs(5));
    }
}

The screen prints the failure log in a dead loop:
[2022-09-20T08:57:59Z ERROR zookeeper::io] Failed to write socket: Os { code: 32, kind: BrokenPipe, message: "Broken pipe" }

future of rust-zookeeper

Is this project intended to be maintained in the future? Rust stable is now out, and it seems the source for this project has fallen somewhat behind.

In leader_latch, the ZooKeeper handle can't be dropped after a network error

leader.rs, fn start:

let latch = self.clone();
// At this step LeaderLatch holds the zk field, and the listener closure now holds a clone of the latch (a reference cycle)
let subscription = self.zk.add_listener(move |x| handle_state_change(&latch, x));

leader.rs, fn stop:

self.set_path(None)?; // if set_path fails, remove_listener is never called, so the Arc<ZooKeeper> is never dropped
......
self.zk.remove_listener(sub);

Implement classic-ZK/Curator recipes

  • Lock (in progress)
  • LeaderSelector (depends on Lock I guess)
  • PathChildrenCache (migrate to Curator)
  • ServiceDiscovery (depends on PathChildrenCache)

Upgrade to 2018 Edition

With the 2021 edition "coming soon", @bonifaido WDYT of upgrading to the 2018 edition to ease the future switch to the 2021 edition when released?

How to receive an event when node data changes?

struct LoggingWatcher;
impl Watcher for LoggingWatcher {
    fn handle(&self, e: WatchedEvent) {
        info!("{:?}", e)
    }
}

struct RootWatcher;
impl Watcher for RootWatcher {
    fn handle(&self, e: WatchedEvent) {
        info!("Root->> {:?}", e)
    }
}

fn zk_example2() {
    let zk = ZooKeeper::connect("127.0.0.1/test", Duration::from_secs(15), LoggingWatcher).unwrap();

    zk.add_listener(|zk_state| println!("New ZkState is {:?}", zk_state));

    // how to receive an event when the node data changes?
    let data = zk.get_data_w("/", RootWatcher).unwrap();

    println!("press enter to close client");
    let mut tmp = String::new();
    io::stdin().read_line(&mut tmp).unwrap();
}

fn main(){
   zk_example2(); 
}
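
ZooKeeper watches are one-shot, so to keep receiving data-change events the watch has to be re-registered from inside the handler. A hedged sketch, relying on the closure-based watchers used elsewhere on this page (the helper is illustrative):

use std::sync::Arc;
use zookeeper::{WatchedEvent, ZooKeeper};

// Illustrative: re-arm the one-shot data watch every time it fires.
fn watch_data(zk: Arc<ZooKeeper>, path: String) {
    let zk2 = zk.clone();
    let path2 = path.clone();
    let _ = zk.get_data_w(&path, move |event: WatchedEvent| {
        println!("node changed: {:?}", event);
        watch_data(zk2.clone(), path2.clone());
    });
}

Alternatively, servers that support persistent watches can avoid the re-registration entirely via add_watch, as in the "Lost watch events" snippet above.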

Async support

Given that Rust supports async now, it would be great to add async support to this crate, e.g. using tokio. These kinds of crates benefit greatly from an async interface.

ZooKeeper::connect doesn't fail if it cannot connect

I don't have any ZooKeeper server running, but ZooKeeper::connect does not return an error. My expectation is that if it cannot connect, it notifies me. Furthermore, take this code:

use zookeeper::{ZooKeeper, Watcher, WatchedEvent};
use std::time::Duration;

struct LoggingWatcher;
impl Watcher for LoggingWatcher {
    fn handle(&self, e: WatchedEvent) {
        println!("{:?}", e)
    }
}

fn main() {
    // Some url where zookeeper isn't running
    let url = "129.0.0.1:2182";
    let zk = ZooKeeper::connect(url, Duration::from_secs(1), LoggingWatcher).unwrap();

    zk.add_listener(|s| println!("New state is {:?}", s));

    let x = zk.exists("/test", false).unwrap();

    println!("Connected!");
}

zk.exists takes a lot longer than the 1 second specified in the connect method. It takes about half a minute to fail.

Build failed on ubuntu 14.04.1

Building this crate fails on Ubuntu 14.04.1:

Host info:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"

Build command: cargo build --release --bin front_service --manifest-path ./core/Cargo.toml --verbose

Build log:

   Compiling zookeeper v0.3.0
     Running `rustc --crate-name zookeeper /root/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.3.0/src/lib.rs --crate-type lib --emit=dep-info,link -C opt-level=3 -C metadata=18afff2b8efa19af -C extra-filename=-18afff2b8efa19af --out-dir /home/ubuntu/web_service/target/release/deps -L dependency=/home/ubuntu/web_service/target/release/deps --extern log=/home/ubuntu/web_service/target/release/deps/liblog-c73672fbee7ce8e5.rlib --extern mio=/home/ubuntu/web_service/target/release/deps/libmio-1e63d8e5040d2907.rlib --extern snowflake=/home/ubuntu/web_service/target/release/deps/libsnowflake-9e3c0622cb044461.rlib --extern lazy_static=/home/ubuntu/web_service/target/release/deps/liblazy_static-593470b8b1e9df82.rlib --extern zookeeper_derive=/home/ubuntu/web_service/target/release/deps/libzookeeper_derive-c12483d6b5c097d3.so --extern byteorder=/home/ubuntu/web_service/target/release/deps/libbyteorder-b7e41c93a912a264.rlib --extern bytes=/home/ubuntu/web_service/target/release/deps/libbytes-9e91f13708218df5.rlib --cap-lints allow`
rustc: /checkout/src/llvm/lib/Analysis/ValueTracking.cpp:1594: void computeKnownBits(const llvm::Value*, llvm::APInt&, llvm::APInt&, unsigned int, const {anonymous}::Query&): Assertion `(KnownZero & KnownOne) == 0 && "Bits known to be one AND zero?"' failed.
error: Could not compile `zookeeper`.
