amiller / honeybadgerbft
The Honey Badger of BFT Protocols
License: Other
Hi, I'm running the code on Ubuntu 20.04, but it fails.
When I enter the command python -m HoneyBadgerBFT.threshenc.generate_keys 4 2, it returns "initialization of pairing failed" without raising an exception.
My Python is 3.8, and charm is 0.5.
Currently, HoneyBadgerBFT requires unbounded communication buffers, for both outgoing messages and incoming messages. The problem is that in an asynchronous network, honest nodes may lag arbitrarily far behind, so the backlog of messages to deliver may grow without bound. Since the protocol's security relies on eventual delivery of every message, the only way to emulate this in an unreliable network is to store and resend messages indefinitely until a read receipt is obtained.
This problem is also discussed in issue #12. This issue presents a different approach, based on two ideas: 1. guaranteeing that each block requires a bounded number of messages, and 2. buffering messages for at most a constant number of active blocks. After every block is finalized, nodes produce a threshold signature on that block, which serves as a checkpoint. Then we do not need to guarantee eventual delivery for every message. Instead, messages pertaining to old blocks can be canceled, and nodes that have fallen behind can catch up using the checkpoints.
Discussion below:
HoneyBadgerBFT is written and formally analyzed in the Reliable Asynchronous Communications model. This means that every message sent from one honest party to another is guaranteed to be eventually delivered. This is considered the weakest network model used in distributed consensus protocols, as there is no guarantee about how much time it takes for a message to be delivered. However, even this network model is idealized, and it is difficult to fulfill the model in practice.
Reliable asynchronous I/O abstraction:
- send(msg, recipient):
    msg is eventually delivered to recipient
- on receive(msg) from sender do { … }
In reality, networks are unreliable, and messages may be dropped. In practice, a distributed system relies on an I/O abstraction that emulates the asynchronous model by a) robustly re-forming connections after a network interruption, e.g. whenever a TCP socket is broken, b) sending a receipt for every message received, and c) resending messages until the receipt is received. An unfortunate consequence of this approach is that outgoing messages may need to be queued for an arbitrary amount of time in unbounded storage buffers. Furthermore, the protocol is written in a coroutine style, where messages are implicitly filtered and matched: e.g. "Wait to receive ECHO from S. Then, wait to receive READY from S." Messages that are received before any process is waiting for them are implicitly expected to be buffered. If a node falls behind, e.g. because of a network partition, then an arbitrary number of incoming messages from future rounds may need to be buffered.
Unreliable Asynchronous I/O abstraction:
- sendUnreliable(msg, recipient):
    If the same msg is retried indefinitely, then it is eventually delivered.

Reliable Asynchronous I/O from Unreliable I/O:
- send(msg, recipient):
    Store msg in buffer.
    While msg is in buffer:
        sendUnreliable(msg, recipient)
- on receiveUnreliable(msg) from sender do:
    Send (RECEIPT, msg) to sender
    receive(msg)
- on receiveUnreliable(RECEIPT, msg) from recipient do:
    Remove msg from buffer
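The Reliable-from-Unreliable construction above can be sketched as a toy Python simulation. The `UnreliableChannel` class and its deterministic drop pattern are hypothetical stand-ins for the real gevent/TCP transport, and the receipt is collapsed into the send call's return value for brevity:

```python
# Toy simulation of the store-and-resend abstraction described above.
# UnreliableChannel and its drop pattern are hypothetical, for illustration
# only; the actual prototype uses gevent and TCP sockets.

class UnreliableChannel:
    """Delivers a message only on every `period`-th attempt (simulated drops)."""
    def __init__(self, period=3):
        self.period = period
        self.attempts = 0
        self.delivered = []

    def send_unreliable(self, msg):
        self.attempts += 1
        if self.attempts % self.period == 0:
            self.delivered.append(msg)
            return True   # the RECEIPT makes it back to the sender
        return False      # the message (or its receipt) was dropped


def reliable_send(channel, msg, max_tries=1000):
    """Buffer msg and resend until a receipt is observed."""
    buffer = {msg}
    tries = 0
    while msg in buffer and tries < max_tries:
        tries += 1
        if channel.send_unreliable(msg):  # receipt received
            buffer.discard(msg)           # safe to garbage-collect the message
    return msg not in buffer


ch = UnreliableChannel(period=3)
assert reliable_send(ch, "ECHO")      # eventually delivered despite drops
assert ch.delivered == ["ECHO"]
```

Note that the buffer here is bounded only because the simulated channel eventually delivers; with a real adversarial network, this is exactly the unbounded-buffer problem described above.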
This unbounded-buffers approach to emulating reliable communication is implemented in the HoneyBadgerBFT-Python prototype, as well as the initial HoneyBadgerBFT-Go implementation. However, the need to provide unbounded buffers means that the software may end up consuming all the machine's available disk space or RAM. If messages are dropped when the buffers fill up, then the network no longer fulfills the guarantees assumed by the security analysis, and hence the security proof no longer applies. In principle, more resources could be added dynamically (e.g., hot-swapping in additional hard drives or making use of a network file system on a LAN).
This issue describes how the HoneyBadgerBFT protocol can be extended to require only bounded storage in the unreliable network model, without adopting any additional network assumptions.
The proposed approach is based on the following two observations:
Observation 1. Within a single instance of the HBBFT-Block protocol, a bounded number of messages may be sent by any honest party.
This observation holds immediately for the N instances of the RBC protocol, and for the threshold decryption in HBBFT-Block. The VAL messages need to be bounded.

It is less clear that the instances of ABA can be bounded; as written, the ABA protocol proceeds in rounds, each round making use of a common COIN, until a termination condition is reached, which does not occur with any a priori bound. However, the running-time analysis of the protocol suggests that even in the worst case, an instance of ABA requires more than k coins with probability O(2^-k). Thus it suffices to establish a bound, say k = 120.
Observation 2. Although the entire blockchain of committed transactions grows unboundedly (T is the total number of transactions), the size of the current "state" S at any given time is typically much smaller and may be considered bounded.

For example, |S| may be bounded by the number of active accounts, whereas |T| is the number of transactions made by all the accounts.

State S_1, ..., S_B where S_B = apply(S_{B-1}, B.txs).

Hence if a node falls many blocks behind (or even if it crashes and restarts), then it can "catch up" to the current block by downloading just the current state S rather than the entire log of transactions. This is known as SPV-syncing in Bitcoin, and fast-sync in Ethereum.
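The relation S_B = apply(S_{B-1}, B.txs) can be illustrated with a toy account model. The `apply`/`replay` functions and the balance-transfer transaction format are hypothetical simplifications, not HoneyBadgerBFT's actual state format; the point is only that adopting a checkpointed state gives the same result as replaying the full log:

```python
# Toy sketch of Observation 2: the state S_B is a fold over block
# transactions, so a lagging node can adopt a checkpointed state directly
# instead of replaying the whole transaction log.

def apply(state, txs):
    """Apply one block's transactions to balances; tx = (frm, to, amount)."""
    state = dict(state)
    for frm, to, amount in txs:
        state[frm] = state.get(frm, 0) - amount
        state[to] = state.get(to, 0) + amount
    return state

def replay(genesis, blocks):
    """Full sync: fold apply() over the entire log of blocks."""
    state = genesis
    for txs in blocks:
        state = apply(state, txs)
    return state

blocks = [[("a", "b", 5)], [("b", "c", 2)]]
full = replay({"a": 10}, blocks)          # syncing from the full log ...
checkpointed = {"a": 5, "b": 3, "c": 2}   # ... equals adopting the checkpoint state
assert full == checkpointed
```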
The proposed approach combines these two observations to ensure that messages are only buffered for a bounded number of active blocks.
After finalizing each HBBFT-Block, nodes produce a t+1 threshold signature on a merkle tree over all the transactions, as well as a merkle tree over the current state file S. This signature serves as a CHECKPOINT, a succinct piece of evidence that the current block has concluded. CHECKPOINTs are used for three different purposes:
I. Prevent DoS from malicious nodes. Messages for future rounds B' > B are ignored until a CHECKPOINT message for round B is received. This enables a node to discard DoS messages sent by an attacker, which would otherwise appear to be plausible messages from the future.
II. Allow lagging (or restarted) nodes to recover. When a node receives a valid CHECKPOINT for a later block than its current one (for block B' > B), it determines that it has fallen behind. It deallocates any buffered incoming or outgoing messages for block B, and uses the CHECKPOINT to catch up, e.g. by downloading the current state as in Observation 2.
III. Allow nodes to garbage-collect old outgoing messages. Because nodes that have fallen behind can catch up via a CHECKPOINT, it is not necessary to buffer outgoing protocol messages pertaining to earlier blocks. Outgoing messages buffered in the I/O abstraction can be canceled/withdrawn and replaced with CHECKPOINT messages for the current round.
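A minimal sketch of how a CHECKPOINT might be collected: hash the block contents and state, and accept the checkpoint once t+1 distinct signature shares arrive. Everything here is illustrative; a plain SHA-256 digest stands in for the merkle trees, and `add_share` stands in for the real threshold-signature share verification:

```python
# Hypothetical sketch of forming a CHECKPOINT. A single hash stands in for
# the merkle trees over transactions and state; collecting t+1 distinct
# shares stands in for combining a t+1 threshold signature.

import hashlib

def checkpoint_digest(block_txs, state_root):
    """Digest over the block's transactions and the state root."""
    h = hashlib.sha256()
    for tx in block_txs:
        h.update(tx.encode())
    h.update(state_root.encode())
    return h.hexdigest()

class CheckpointCollector:
    def __init__(self, t):
        self.t = t
        self.shares = {}          # digest -> set of signer ids seen so far

    def add_share(self, signer_id, digest):
        """Record one signature share; True once t+1 distinct signers agree."""
        self.shares.setdefault(digest, set()).add(signer_id)
        return len(self.shares[digest]) >= self.t + 1

c = CheckpointCollector(t=1)
d = checkpoint_digest(["tx1", "tx2"], "state-root")
assert not c.add_share(0, d)   # only one share so far
assert c.add_share(1, d)       # t+1 = 2 shares: CHECKPOINT formed
```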
It is very common for open source Python projects to host their documentation on Read the Docs.
It is usually relatively simple to set up.
There are some limitations, however, in terms of what can be installed beyond Python packages specified in a requirements.txt file and running python setup.py install. This means that for libraries that require installing C header files, for instance, it does not seem to be possible to instruct Read the Docs to perform the required installations. In other words, it does not seem to be possible to instruct Read the Docs to execute:
apt-get -y install libgmp-dev libmpc-dev
wget https://crypto.stanford.edu/pbc/files/pbc-0.5.14.tar.gz
tar -xvf pbc-0.5.14.tar.gz
cd pbc-0.5.14 && ./configure && make && make install
git clone https://github.com/JHUISI/charm.git
cd charm && git checkout 2.7-dev && ./configure.sh && python setup.py install
Since the above instructions are required before running python setup.py install to install honeybadgerbft, and installing honeybadgerbft is required in order to build documentation from the docstrings, it is not simple to set up the docs on Read the Docs.
Perhaps there are workarounds, such as indicated here, but this means it will take slightly longer to set up.
Another approach would be to publish the docs on Github Pages. For a concrete example, one may look at the asyncpg project's travis-publish-docs.sh script.
Hi @amiller
Reading the paper, I noticed in Figure 2 (page 9) "wait to receive at least t + 1 messages...". I believe this should be f + 1? Otherwise I'm a bit lost on what t means here.
Very good work. Thanks!
Some outgoing messages may be able to be marked as stale, where it no longer matters if they're re-sent. For example, once we get a final signature for a block, no messages pertaining to the previous round should matter.
How can we annotate the protocol to take advantage of this? What does this tell us about the maximum size of a buffer?
Almost every communication in honey badger is "broadcast" only. The only exception is in Reliable Broadcast where different erasure coded shares are sent.
Can the communication channel abstraction help with this?
For incoming messages, the asynchronous model means that messages pertaining to "stale" subprotocols that have since concluded might safely be ignored.
When can we safely mark a subprotocol as concluded and free up state? Can we express filter rules to efficiently discard old messages, e.g. messages pertaining to subprotocols in the previous round get discarded immediately?
I doubt this would happen, but just to avoid the complication of someone else registering a package with the same name (say, "honeybadger big file transfers" --> honeybadgerbft), it may be a good idea to register the package on PyPI, to reserve the package name honeybadgerbft.
No need to upload code.
I can take care of this, but will nevertheless provide links to relevant documentation in upcoming days.
In the meantime, here's a sample of what this could look like (please note that this is on the test server of PyPI): https://testpypi.python.org/pypi/honeybadgerbft/0.1.0
The package owner can be changed, so once you are set up on PyPI, we can make the change.
@amiller I assume you are well aware of this, as there's already a TODO note in the code about implementing the random selection.
Nevertheless, regardless of the size, only one element (tx_to_send[0]) is currently passed to _run_round():
class HoneyBadgerBFT():
    # ...
    def run(self):
        # ...
        while True:
            # ...
            # Select all the transactions (TODO: actual random selection)
            tx_to_send = self.transaction_buffer[:self.B]
            # Run the round
            send_r = _make_send(r)
            recv_r = self._per_round_recv[r].get
            new_tx = self._run_round(r, tx_to_send[0], send_r, recv_r)
So _run_round() and tpke.encrypt() should be capable of taking a list or a similar data structure. tpke.encrypt() would need to be modified so that a string (e.g. cPickle.dumps(raw)) is passed to the padding operation.
So this issue could be done in two (or three) parts:
- pass a list to _run_round(), e.g. tx_to_send[:1]
- pass floor(B/N) transactions to _run_round(), e.g. tx_to_send[:int(B/N)]
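For reference, here is a sketch of what the "actual random selection" could look like: propose floor(B/N) transactions sampled at random from the first B in the buffer, instead of a single element. `select_batch` is a hypothetical helper for illustration, not code from the repo:

```python
# Hypothetical sketch of the random batch selection the TODO refers to:
# sample floor(B/N) transactions from the first B in the node's buffer.

import random

def select_batch(transaction_buffer, B, N, rng=random):
    candidates = transaction_buffer[:B]
    # propose floor(B/N) transactions, at least 1, at most what's available
    k = min(max(B // N, 1), len(candidates))
    return rng.sample(candidates, k)

buf = ["tx%d" % i for i in range(32)]
batch = select_batch(buf, B=16, N=4, rng=random.Random(0))
assert len(batch) == 4                       # floor(16/4) transactions
assert all(tx in buf[:16] for tx in batch)   # all drawn from the first B
```

The resulting list could then be passed to _run_round() in place of tx_to_send[0], assuming _run_round() and tpke.encrypt() are changed to accept a list as discussed above.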
It's no longer a true bilinear pairing if G1 and G2 are the same values, so I am confused about the implementation. boldyreva.py uses two distinct groups G1 and G2 for the pairing; however, it does not check that the messages are identical, and verify_signature just returns True without proper authentication.
Is this project unfinished, or am I missing something?
Hi,
I have successfully built the docker image but when I try to execute the following docker command:
docker run -e N="8" -e t="2" -e B="16" -it honeybadgerbft
I get the following error
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"main", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/HoneyBadgerBFT/test/honest_party_test.py", line 4, in
monkey.patch_all()
File "/usr/local/lib/python2.7/dist-packages/gevent/monkey.py", line 889, in patch_all
_notify_patch(events.GeventWillPatchAllEvent(modules_to_patch, kwargs), _warnings)
File "/usr/local/lib/python2.7/dist-packages/gevent/monkey.py", line 167, in _notify_patch
notify_and_call_entry_points(event)
File "/usr/local/lib/python2.7/dist-packages/gevent/events.py", line 112, in notify_and_call_entry_points
subscriber = plugin.load()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/init.py", line 2317, in load
self.require(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/init.py", line 2340, in require
items = working_set.resolve(reqs, env, installer, extras=self.extras)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/init.py", line 779, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.VersionConflict: (greenlet 0.4.2 (/usr/lib/python2.7/dist-packages), Requirement.parse('greenlet>=0.4.13'))
I am running Ubuntu 16.04. Any suggestions on how to solve this issue?
Thanks a lot for your help
In a recent (unpublished) benchmark, threshold decryption becomes the bottleneck after applying optimistic coin flips.
What's lost if we remove it? Or apply it only partially (for some nodes in an epoch?) Can we characterize the network assumptions under which this doesn't affect censorship-resistance?
A visualization of the protocol flow could help newcomers understand the protocol. It could also provide a way to help debug or diagnose network problems/faults, and could help build a mental model of the relative costs and bandwidth usage of protocol components.
One of HoneyBadgerBFT's features is its regular message pattern: each possible message fits into a predefined slot. To me this suggests a "grid" layout to make the structure as clear as possible. Here's a mockup/sketch of a visualization panel:
This would be a display of a single node's view of a single block/epoch of the honey badger protocol. The idea is that each possible message that could be received is indicated by an unlit light (grey square); when such a message is received, the light turns yellow. Messages sent
This visualization makes some invariants clear. For example, in Reliable Broadcast, only one ECHO message and one READY message can be received from any node (redundant such messages are discarded). In the case of Binary Agreement, possibly both EST(0) and EST(1) could be received.
According to the paper, the common coin should "use the threshold signature as a source of random bits", but it seems that the implementation (of shared_coin, amiller/HoneyBadgerBFT/blob/master/core/broadcasts.py#L105) just abandons the (combined) signature and uses the message hash h as the source of randomness:
if len(received[r]) == t + 1:
    h = PK.hash_message(str((r, instance)))
    def tmpFunc(r, t):
        combine_and_verify(h, dict(tuple((t, deserialize1(sig)) for t, sig in received[r])[:t+1]))
        outputQueue[r].put(ord(serialize(h)[0]) & 1)  # explicitly convert to int
    Greenlet(tmpFunc, r, t).start()
Is this a typo or am I missing something?
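For comparison, here is a sketch of what the questioner seems to expect: derive the coin bit from the combined threshold signature itself rather than from the hash h of the message. The `combine`, `verify`, and `serialize` parameters are hypothetical stand-ins for the boldyreva.py API, passed in so the sketch stays self-contained:

```python
# Hedged sketch: the coin bit comes from the combined signature, which is
# unpredictable until t+1 shares are combined; hashing h alone would be
# predictable by everyone in advance. combine/verify/serialize are
# stand-ins for the real threshold-signature API.

import hashlib

def coin_from_shares(h, shares, combine, verify, serialize):
    sig = combine(shares)          # combine t+1 shares into one signature
    assert verify(h, sig)          # check the combined signature against h
    digest = hashlib.sha256(serialize(sig)).digest()
    return digest[0] & 1           # take the low bit as the coin value

# Usage with trivial stand-in functions, just to exercise the flow:
coin = coin_from_shares(b"h", ["s1", "s2"],
                        combine=lambda s: b"sig",
                        verify=lambda h, s: True,
                        serialize=lambda s: s)
assert coin in (0, 1)
```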
This is needed to help troubleshoot encoding (unicode/bytes) problems arising when trying to port to Python 3 (#25).
For instance, the small helper method xor() fails under Python 3, and having a passing test in Python 2 can help in troubleshooting the problem.
will add details soon
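As a starting point, here is a bytes-safe sketch of xor() that behaves the same on Python 2 and 3. This is an illustrative replacement, not the repo's helper; the usual Python 3 failure comes from iterating bytes yielding ints rather than 1-character strings:

```python
# A bytes-safe xor() sketch: wrapping both inputs in bytearray() makes
# iteration yield ints on both Python 2 and Python 3, so a ^ b is always
# int ^ int.

def xor(x, y):
    assert len(x) == len(y)
    return bytes(bytearray(a ^ b for a, b in zip(bytearray(x), bytearray(y))))

assert xor(b"\x00\xff", b"\xff\xff") == b"\xff\x00"
assert xor(xor(b"ab", b"cd"), b"cd") == b"ab"   # xor is its own inverse
```

A test asserting these identities would pass under both interpreters, which is exactly the kind of cross-version check this issue asks for.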
Consider the Common Coin protocol. In this protocol, every message is of the form COIN(r, A, B), which indicates a threshold signature share for the r-th sequential coin, sent from A to B.

Right now, a node accepts an incoming COIN message for any value r >= 0, even a round arbitrarily far in the future. Processing this message includes storing and verifying the signature share (an expensive pairing operation). This leads to a potentially hard-to-attribute (in the sense of #11) resource exhaustion attack, where an attacker node sends lots of spurious (but valid) signature shares.
Let's apply some ratcheting to messages sent along the asynchronous channels. Recipients can discard some unsolicited messages received too early, before they're "ready". The main idea is that each sender keeps track of a partial view of the progress of each recipient (like a vector clock). The sender is then expected to queue up messages until the recipient is "ready".
For example, considering the Common Coin example above, we should have node A discard any received message COIN(r+1, B, A) until the message COIN(r, A, B) has been sent.
This approach effectively trades off some queuing at the recipient for queuing at the sender. This ticket is meant to start a discussion exploring when this approach is advantageous.
In the example above, the recipient's queuing could be unbounded (messages received from Byzantine nodes can be at any round), whereas (honest) senders' queues would be bounded in expectation (only constant expected number of common coin iterations in each run of binary agreement).
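The ratcheting rule for the Common Coin example could be sketched as follows. `CoinRatchet` and its bookkeeping are hypothetical; the point is only the acceptance condition, which mirrors the rule above (discard COIN(r+1, B, A) until COIN(r, A, B) has been sent):

```python
# Sketch of recipient-side ratcheting: node A tracks, per peer, the highest
# coin round for which A has already sent its own share, and discards COIN
# messages more than one round ahead of that, without verifying them.

class CoinRatchet:
    def __init__(self):
        self.sent_round = {}   # peer -> highest r for which COIN(r, A, peer) was sent

    def note_sent(self, peer, r):
        self.sent_round[peer] = max(self.sent_round.get(peer, -1), r)

    def accept(self, peer, r):
        # accept COIN(r, peer, A) only once our share for round r-1 was sent
        return r <= self.sent_round.get(peer, -1) + 1

ratchet = CoinRatchet()
assert ratchet.accept("B", 0)       # round 0 is always solicited
assert not ratchet.accept("B", 5)   # far-future share: discard, skip the pairing check
ratchet.note_sent("B", 4)
assert ratchet.accept("B", 5)       # now round 5 is solicited
```

Discarded messages are the sender's responsibility to retransmit, which is exactly the trade-off described above: bounded queuing at the recipient in exchange for (expected-bounded) queuing at honest senders.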
I read your paper. I'm interested in testing out HoneyBadger for a project and was curious -
what are the chances for a dual Apache/CRAPL license?
Thanks to Ethan MacBrough (from Ripple Research) for reporting a protocol design error in HoneyBadgerBFT. The error has to do with the use of the Threshold-Signature-based Common Coin to instantiate the Common Coin dependency of the ABA routine. MacBrough correctly points out a flaw in the proof: the ABA protocol we use relies on a stronger common coin than the one we instantiate (in fact, it relies on a common coin so strong that it cannot be implemented), and he provides a concrete attack. Fortunately, the ABA protocol can be readily modified to accommodate our weaker common coin, and hence fix the protocol. The fix suggested by MacBrough is posted at the end of the message.
To close this issue, we need to deploy the following fixes:
Ethan's note is below:
I thought I should let you know that I found a (probably very minor in practice) issue that can make the binary agreement instantiation used in HoneyBadger potentially never terminate with a single Byzantine node when the network scheduler is actively adversarial. I realized the issue when I was working on Cobalt, wherein I fixed the issue by adding an extra message phase to each round. I just remembered that HoneyBadger uses the same binary consensus mechanism, so I should let you know about the issue.
The basic issue is that Mostéfaoui et al.'s consensus algorithm requires a stronger common coin than the instantiation given by Cachin et al. can provide. Namely, they assume that the values of the common coin are completely independent of the state of EVERY honest node at the point where it queries the coin value. This is not guaranteed by Cachin's common coin, since the adversary can reconstruct the coin value after only some of the honest nodes have queried the coin, and then inject "bad state" into the other nodes.
Specifically, there's an attack that can be launched as follows. For the attack I assume the common coin has a threshold of at most 2f+1, where n = 3f+1. From what I read, HoneyBadger uses a threshold of only f+1, so the attack still holds. I present the more general argument since 2f+1 is the maximum threshold that can be used (otherwise f crashed nodes could block coin construction), so the attack shows that simply modifying the threshold doesn't help.
Let S be the set of all nodes, with |S| = 3f+1. Partition S into 4 subsets, A0, A1, B, and F; where |A0| = |A1| = |B| = f, and |F| = 1. Also let A = A0 \cup A1. Suppose every node in A starts out in round r voting v \in \{0,1\} and every node in B starts out voting \neg v. The node x \in F is Byzantine.

x sends BVAL(\neg v) to the nodes in A0 and BVAL(v) to the nodes in A1. Also, all votes from nodes in B are delivered to all nodes in A. Messages within A0 are delivered. Thus nodes in A0 see |B| + |F| = f+1 votes for \neg v; so all nodes in A0 broadcast BVAL(\neg v) and all nodes in A0 see |A0| + |B| + |F| = 2f+1 votes for \neg v; so all nodes in A0 broadcast AUX(\neg v).

Then all messages within A1 are delivered, as well as the BVAL(v) messages from A0 to A1. Thus the nodes in A1 see |A0| + |A1| + |F| = 2f+1 votes for v and broadcast AUX(v). After this all messages within A are delivered and x sends both BVAL(0) and BVAL(1) to every node in A. Thus every node in A broadcasts both BVAL(0) and BVAL(1) and sets bin_values = \{0,1\}.

Now all nodes in A broadcast their threshold shares over the coin, so since |A| + |F| = 2f+1, the adversary can construct the random coin value s. The nodes in F send BVAL(\neg s) to all the nodes in B, and all the BVAL(\neg s) messages from nodes in A are delivered to all nodes in B. Thus all the nodes in B broadcast AUX(\neg s). Deliver all AUX(\neg s) messages; there are 2f+1 of them, since either every node in A0 broadcast AUX(\neg s) or every node in A1 broadcast AUX(\neg s). Thus all nodes in B see 2f+1 AUX(\neg s) messages and get to the end of the round with bin_values = \{\neg s\}. Thus the nodes in B continue to the next round voting \neg s while the nodes in A continue to the next round voting s. At this point all messages from the round are delivered, and the process repeats.
Ethan MacBrough suggests the following fix, which is also used in Cobalt:

The exact fix I came up with is, after passing the AUX message round, everyone broadcasts CONF(bin_values) and then waits until there are n-t CONF(S) messages with S \subseteq bin_values. Clearly everyone can eventually get past this, just like the AUX message round.

Now suppose some node P_i gets past the CONF round with bin_values = {v}. Then t+1 honest nodes must have broadcast CONF({v}). Thus for any other honest node P_j that gets past the CONF round, P_j must have received CONF({v}) from some honest node before getting past (since it waits for n-t CONF messages). But two honest nodes cannot submit CONF({0}) and CONF({1}) by the security guarantees of the AUX round, so this means the value v was determined before P_j got past the CONF step, and hence before P_j revealed its threshold share. Thus the adversary learns nothing about the coin value until after v is determined, so v = s with probability 1/2.
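The CONF-round waiting condition from the fix can be sketched as follows (a hypothetical helper, not Cobalt's or HoneyBadgerBFT's code): count distinct senders whose CONF set S is a subset of our bin_values, and proceed once there are n-t of them:

```python
# Sketch of the CONF round: wait for n-t CONF(S) messages, from distinct
# senders, whose set S is a subset of our current bin_values.

def conf_round_done(conf_msgs, bin_values, n, t):
    """conf_msgs: iterable of (sender, frozenset S). Counts distinct senders
    whose S is a subset of bin_values; the round completes at n - t of them."""
    ok = {sender for sender, S in conf_msgs if S <= bin_values}
    return len(ok) >= n - t

n, t = 4, 1
msgs = [(0, frozenset({1})), (1, frozenset({1})), (2, frozenset({0, 1}))]
assert not conf_round_done(msgs, frozenset({1}), n, t)  # only 2 valid, need n-t = 3
assert conf_round_done(msgs, frozenset({0, 1}), n, t)   # all 3 now qualify
```

Note that bin_values only grows during a round, so a CONF message that is not yet valid may become valid later; the sketch is re-evaluated as messages arrive.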
After crashing and restarting, can we resume the protocol in progress?
The easiest way would be to get a threshold signature on the most recent round, and just start there.
Restarting from an old signature could lead to equivocation, like sending messages in a round we've already participated in. This would be tolerated by the other nodes and would not cause a failure, as long as not too many occurred at once.
Failed to build charm-crypto
Installing collected packages: greenlet, gevent, gmpy2, pysocks, pycrypto, ecdsa, zbase32, pyutil, zfec, gipc, pyparsing, charm-crypto, decorator, wcwidth, six, prompt-toolkit, backports.shutil-get-terminal-size, ptyprocess, pexpect, scandir, pathlib2, pygments, ipython-genutils, enum34, traitlets, simplegeneric, pickleshare, ipython, ipdb, coverage, configparser, mccabe, pyflakes, pycodestyle, flake8, py, pytest, pytest-cov, termcolor, pytest-sugar, nose2, MarkupSafe, jinja2, snowballstemmer, alabaster, imagesize, pytz, babel, docutils, chardet, certifi, urllib3, idna, requests, typing, sphinxcontrib-websupport, Sphinx, singledispatch, backports-abc, tornado, livereload, pathtools, argh, PyYAML, watchdog, port-for, sphinx-autobuild, sphinx-rtd-theme, honeybadgerbft
Running setup.py install for charm-crypto: started
Running setup.py install for charm-crypto: finished with status 'error'
Complete output from command /usr/local/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-G9czQT/charm-crypto/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-7FJwPM-record/install-record.txt --single-version-externally-managed --compile:
('Platform:', 'Linux')
('Config file:', 'config.mk')
Warning, using default config vaules.
You probably want to run ./configure.sh first.
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-2.7
creating build/lib.linux-x86_64-2.7/charm
copying charm/config.py -> build/lib.linux-x86_64-2.7/charm
copying charm/__init__.py -> build/lib.linux-x86_64-2.7/charm
creating build/lib.linux-x86_64-2.7/charm/core
copying charm/core/__init__.py -> build/lib.linux-x86_64-2.7/charm/core
creating build/lib.linux-x86_64-2.7/charm/core/crypto
copying charm/core/crypto/__init__.py -> build/lib.linux-x86_64-2.7/charm/core/crypto
creating build/lib.linux-x86_64-2.7/charm/core/engine
copying charm/core/engine/__init__.py -> build/lib.linux-x86_64-2.7/charm/core/engine
copying charm/core/engine/util.py -> build/lib.linux-x86_64-2.7/charm/core/engine
copying charm/core/engine/protocol.py -> build/lib.linux-x86_64-2.7/charm/core/engine
creating build/lib.linux-x86_64-2.7/charm/core/math
copying charm/core/math/__init__.py -> build/lib.linux-x86_64-2.7/charm/core/math
creating build/lib.linux-x86_64-2.7/charm/test
copying charm/test/__init__.py -> build/lib.linux-x86_64-2.7/charm/test
creating build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/encap_bchk05_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/dabenc_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/rsa_alg_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/chamhash_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/hibenc_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/commit_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/abenc_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/pkenc_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/__init__.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/ibenc_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/grpsig_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/pk_vrf_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
copying charm/test/schemes/pksig_test.py -> build/lib.linux-x86_64-2.7/charm/test/schemes
creating build/lib.linux-x86_64-2.7/charm/test/toolbox
copying charm/test/toolbox/symcrypto_test.py -> build/lib.linux-x86_64-2.7/charm/test/toolbox
copying charm/test/toolbox/paddingschemes_test.py -> build/lib.linux-x86_64-2.7/charm/test/toolbox
copying charm/test/toolbox/__init__.py -> build/lib.linux-x86_64-2.7/charm/test/toolbox
copying charm/test/toolbox/conversion_test.py -> build/lib.linux-x86_64-2.7/charm/test/toolbox
copying charm/test/toolbox/secretshare_test.py -> build/lib.linux-x86_64-2.7/charm/test/toolbox
creating build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/pairinggroup.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/securerandom.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/integergroup.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/symcrypto.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/PKSig.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/FSA.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/Hash.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/sigmaprotocol.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/matrixops.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/ABEncMultiAuth.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/Commit.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/hash_module.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/conversion.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/eccurve.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/pairingcurves.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/IBSig.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/IBEnc.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/paddingschemes_test.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/secretutil.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/enum.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/iterate.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/redundancyschemes.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/__init__.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/paddingschemes.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/specialprimes.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/ecgroup.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/ABEnc.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/zknode.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/reCompiler.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/xmlserialize.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/policytree.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/bitstring.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/node.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/DFA.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/schemebase.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/PKEnc.py -> build/lib.linux-x86_64-2.7/charm/toolbox
copying charm/toolbox/secretshare.py -> build/lib.linux-x86_64-2.7/charm/toolbox
creating build/lib.linux-x86_64-2.7/charm/zkp_compiler
copying charm/zkp_compiler/zkparser.py -> build/lib.linux-x86_64-2.7/charm/zkp_compiler
copying charm/zkp_compiler/zkp_generator.py -> build/lib.linux-x86_64-2.7/charm/zkp_compiler
copying charm/zkp_compiler/zk_demo.py -> build/lib.linux-x86_64-2.7/charm/zkp_compiler
copying charm/zkp_compiler/__init__.py -> build/lib.linux-x86_64-2.7/charm/zkp_compiler
creating build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/lem_scheme.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/protocol_cns07.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/joye_scheme.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/pre_mg07.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/chamhash_adm05.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/pk_fre_ccv11.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/sigma3.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/encap_bchk05.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/chamhash_rsa_hw09.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/protocol_ao00.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/dabe_aw11.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/pk_vrf.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/__init__.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/sigma1.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/protocol_schnorr91.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/protocol_a01.py -> build/lib.linux-x86_64-2.7/charm/schemes
copying charm/schemes/sigma2.py -> build/lib.linux-x86_64-2.7/charm/schemes
creating build/lib.linux-x86_64-2.7/charm/schemes/ibenc
copying charm/schemes/ibenc/ibenc_waters05.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
copying charm/schemes/ibenc/ibenc_bf01.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
copying charm/schemes/ibenc/ibenc_waters05_z.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
copying charm/schemes/ibenc/ibenc_waters09_z.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
copying charm/schemes/ibenc/__init__.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
copying charm/schemes/ibenc/ibenc_waters09.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
copying charm/schemes/ibenc/ibenc_bb03.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
copying charm/schemes/ibenc/ibenc_lsw08.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
# ...
copying charm/schemes/ibenc/ibenc_CW13_z.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
copying charm/schemes/ibenc/ibenc_cllww12_z.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
copying charm/schemes/ibenc/ibenc_sw05.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
copying charm/schemes/ibenc/ibenc_ckrs09.py -> build/lib.linux-x86_64-2.7/charm/schemes/ibenc
creating build/lib.linux-x86_64-2.7/charm/schemes/abenc
copying charm/schemes/abenc/abenc_waters09.py -> build/lib.linux-x86_64-2.7/charm/schemes/abenc
copying charm/schemes/abenc/tbpre-liu-14.py -> build/lib.linux-x86_64-2.7/charm/schemes/abenc
copying charm/schemes/abenc/abenc_yct14.py -> build/lib.linux-x86_64-2.7/charm/schemes/abenc
copying charm/schemes/abenc/maabe-yang-14.py -> build/lib.linux-x86_64-2.7/charm/schemes/abenc
copying charm/schemes/abenc/abenc_lsw08.py -> build/lib.linux-x86_64-2.7/charm/schemes/abenc
copying charm/schemes/abenc/dac-macs-yang-14.py -> build/lib.linux-x86_64-2.7/charm/schemes/abenc
copying charm/schemes/abenc/dfa_fe12.py -> build/lib.linux-x86_64-2.7/charm/schemes/abenc
copying charm/schemes/abenc/__init__.py -> build/lib.linux-x86_64-2.7/charm/schemes/abenc
copying charm/schemes/abenc/abenc_bsw07.py -> build/lib.linux-x86_64-2.7/charm/schemes/abenc
copying charm/schemes/abenc/pk_hve08.py -> build/lib.linux-x86_64-2.7/charm/schemes/abenc
creating build/lib.linux-x86_64-2.7/charm/schemes/pkenc
copying charm/schemes/pkenc/pkenc_elgamal85.py -> build/lib.linux-x86_64-2.7/charm/schemes/pkenc
copying charm/schemes/pkenc/pkenc_rabin.py -> build/lib.linux-x86_64-2.7/charm/schemes/pkenc
copying charm/schemes/pkenc/__init__.py -> build/lib.linux-x86_64-2.7/charm/schemes/pkenc
copying charm/schemes/pkenc/pkenc_paillier99.py -> build/lib.linux-x86_64-2.7/charm/schemes/pkenc
copying charm/schemes/pkenc/pkenc_gm82.py -> build/lib.linux-x86_64-2.7/charm/schemes/pkenc
copying charm/schemes/pkenc/pkenc_cs98.py -> build/lib.linux-x86_64-2.7/charm/schemes/pkenc
copying charm/schemes/pkenc/pkenc_rsa.py -> build/lib.linux-x86_64-2.7/charm/schemes/pkenc
creating build/lib.linux-x86_64-2.7/charm/schemes/hibenc
copying charm/schemes/hibenc/hibenc_lew11.py -> build/lib.linux-x86_64-2.7/charm/schemes/hibenc
copying charm/schemes/hibenc/__init__.py -> build/lib.linux-x86_64-2.7/charm/schemes/hibenc
copying charm/schemes/hibenc/hibenc_bb04.py -> build/lib.linux-x86_64-2.7/charm/schemes/hibenc
creating build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_rsa_hw09.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_cl03.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_cl04.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_waters.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_cyh.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_schnorr91.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_hw.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_ecdsa.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_boyen.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_waters09.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/__init__.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_waters05.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_cllww12_z.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_chp.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_chch.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_dsa.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_CW13_z.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_hess.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
copying charm/schemes/pksig/pksig_bls04.py -> build/lib.linux-x86_64-2.7/charm/schemes/pksig
creating build/lib.linux-x86_64-2.7/charm/schemes/commit
copying charm/schemes/commit/commit_pedersen92.py -> build/lib.linux-x86_64-2.7/charm/schemes/commit
copying charm/schemes/commit/commit_gs08.py -> build/lib.linux-x86_64-2.7/charm/schemes/commit
copying charm/schemes/commit/__init__.py -> build/lib.linux-x86_64-2.7/charm/schemes/commit
creating build/lib.linux-x86_64-2.7/charm/schemes/grpsig
copying charm/schemes/grpsig/groupsig_bgls04.py -> build/lib.linux-x86_64-2.7/charm/schemes/grpsig
copying charm/schemes/grpsig/__init__.py -> build/lib.linux-x86_64-2.7/charm/schemes/grpsig
copying charm/schemes/grpsig/groupsig_bgls04_var.py -> build/lib.linux-x86_64-2.7/charm/schemes/grpsig
creating build/lib.linux-x86_64-2.7/charm/adapters
copying charm/adapters/pkenc_adapt_chk04.py -> build/lib.linux-x86_64-2.7/charm/adapters
copying charm/adapters/dabenc_adapt_hybrid.py -> build/lib.linux-x86_64-2.7/charm/adapters
copying charm/adapters/__init__.py -> build/lib.linux-x86_64-2.7/charm/adapters
copying charm/adapters/pkenc_adapt_bchk05.py -> build/lib.linux-x86_64-2.7/charm/adapters
copying charm/adapters/abenc_adapt_hybrid.py -> build/lib.linux-x86_64-2.7/charm/adapters
copying charm/adapters/ibenc_adapt_hybrid.py -> build/lib.linux-x86_64-2.7/charm/adapters
copying charm/adapters/kpabenc_adapt_hybrid.py -> build/lib.linux-x86_64-2.7/charm/adapters
copying charm/adapters/pksig_adapt_naor01.py -> build/lib.linux-x86_64-2.7/charm/adapters
copying charm/adapters/ibenc_adapt_identityhash.py -> build/lib.linux-x86_64-2.7/charm/adapters
copying charm/adapters/pkenc_adapt_hybrid.py -> build/lib.linux-x86_64-2.7/charm/adapters
running build_ext
building 'charm.core.math.pairing' extension
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/charm
creating build/temp.linux-x86_64-2.7/charm/core
creating build/temp.linux-x86_64-2.7/charm/core/math
creating build/temp.linux-x86_64-2.7/charm/core/math/pairing
creating build/temp.linux-x86_64-2.7/charm/core/utilities
creating build/temp.linux-x86_64-2.7/charm/core/benchmark
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DBENCHMARK_ENABLED=1 -Icharm/core/utilities/ -Icharm/core/benchmark/ -I/usr/local/include/python2.7 -c charm/core/math/pairing/pairingmodule.c -o build/temp.linux-x86_64-2.7/charm/core/math/pairing/pairingmodule.o
In file included from charm/core/math/pairing/pairingmodule.c:1719:0:
charm/core/benchmark/benchmark_util.c: In function ‘InitBenchmark’:
charm/core/benchmark/benchmark_util.c:100:3: warning: ‘RAND_pseudo_bytes’ is deprecated [-Wdeprecated-declarations]
RAND_pseudo_bytes(group->bench_id, ID_LEN);
^~~~~~~~~~~~~~~~~
In file included from /usr/include/openssl/bn.h:31:0,
from /usr/include/openssl/asn1.h:24,
from /usr/include/openssl/objects.h:916,
from charm/core/math/pairing/pairingmodule.h:44,
from charm/core/math/pairing/pairingmodule.c:30:
/usr/include/openssl/rand.h:47:1: note: declared here
DEPRECATEDIN_1_1_0(int RAND_pseudo_bytes(unsigned char *buf, int num))
^
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DBENCHMARK_ENABLED=1 -Icharm/core/utilities/ -Icharm/core/benchmark/ -I/usr/local/include/python2.7 -c charm/core/utilities/base64.c -o build/temp.linux-x86_64-2.7/charm/core/utilities/base64.o
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DBENCHMARK_ENABLED=1 -Icharm/core/utilities/ -Icharm/core/benchmark/ -I/usr/local/include/python2.7 -c charm/core/benchmark/benchmarkmodule.c -o build/temp.linux-x86_64-2.7/charm/core/benchmark/benchmarkmodule.o
charm/core/benchmark/benchmarkmodule.c: In function ‘Benchmark_print’:
charm/core/benchmark/benchmarkmodule.c:234:77: warning: format ‘%S’ expects argument of type ‘wchar_t *’, but argument 2 has type ‘PyObject * {aka struct _object *}’ [-Wformat=]
PyObject *results = _PyUnicode_FromFormat("<--- Results --->\nCPU Time: %Sms\nReal Time: %Ss\nAdd:\t%i\nSub:\t%i\nMul:\t%i\nDiv:\t%i\nExp:\t%i\nPair:\t%i\n",
^
charm/core/benchmark/benchmarkmodule.c:234:94: warning: format ‘%S’ expects argument of type ‘wchar_t *’, but argument 3 has type ‘PyObject * {aka struct _object *}’ [-Wformat=]
PyObject *results = _PyUnicode_FromFormat("<--- Results --->\nCPU Time: %Sms\nReal Time: %Ss\nAdd:\t%i\nSub:\t%i\nMul:\t%i\nDiv:\t%i\nExp:\t%i\nPair:\t%i\n",
^
gcc -pthread -shared build/temp.linux-x86_64-2.7/charm/core/math/pairing/pairingmodule.o build/temp.linux-x86_64-2.7/charm/core/utilities/base64.o build/temp.linux-x86_64-2.7/charm/core/benchmark/benchmarkmodule.o -L/usr/local/lib -lpbc -lgmp -lcrypto -lpython2.7 -o build/lib.linux-x86_64-2.7/charm/core/math/pairing.so
building 'charm.core.math.integer' extension
creating build/temp.linux-x86_64-2.7/charm/core/math/integer
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DBENCHMARK_ENABLED=1 -Icharm/core/utilities/ -Icharm/core/benchmark/ -I/usr/local/include/python2.7 -c charm/core/math/integer/integermodule.c -o build/temp.linux-x86_64-2.7/charm/core/math/integer/integermodule.o
charm/core/math/integer/integermodule.c: In function ‘bnToLongObj’:
charm/core/math/integer/integermodule.c:129:19: error: dereferencing pointer to incomplete type ‘BIGNUM {aka struct bignum_st}’
for(i = 0; i < m->dmax; i++) {
^~
charm/core/math/integer/integermodule.c: In function ‘mpzToBN’:
charm/core/math/integer/integermodule.c:166:6: warning: implicit declaration of function ‘bn_expand2’ [-Wimplicit-function-declaration]
if(bn_expand2(b, size) == NULL)
^~~~~~~~~~
charm/core/math/integer/integermodule.c:166:26: warning: comparison between pointer and integer
if(bn_expand2(b, size) == NULL)
^~
charm/core/math/integer/integermodule.c:170:3: warning: implicit declaration of function ‘bn_correct_top’ [-Wimplicit-function-declaration]
bn_correct_top(b);
^~~~~~~~~~~~~~
charm/core/math/integer/integermodule.c: In function ‘genRandomPrime’:
charm/core/math/integer/integermodule.c:1482:5: warning: ‘BN_generate_prime’ is deprecated [-Wdeprecated-declarations]
BN_generate_prime(bn, bits, safe, NULL, NULL, NULL, NULL);
^~~~~~~~~~~~~~~~~
In file included from /usr/include/openssl/bn.h:31:0,
from /usr/include/openssl/asn1.h:24,
from /usr/include/openssl/objects.h:916,
from charm/core/math/integer/integermodule.h:44,
from charm/core/math/integer/integermodule.c:30:
/usr/include/openssl/bn.h:285:1: note: declared here
DEPRECATEDIN_0_9_8(BIGNUM *BN_generate_prime(BIGNUM *ret, int bits, int safe,
^
charm/core/math/integer/integermodule.c:1485:5: warning: ‘BN_generate_prime’ is deprecated [-Wdeprecated-declarations]
BN_generate_prime(bn, bits, FALSE, NULL, NULL, NULL, NULL);
^~~~~~~~~~~~~~~~~
In file included from /usr/include/openssl/bn.h:31:0,
from /usr/include/openssl/asn1.h:24,
from /usr/include/openssl/objects.h:916,
from charm/core/math/integer/integermodule.h:44,
from charm/core/math/integer/integermodule.c:30:
/usr/include/openssl/bn.h:285:1: note: declared here
DEPRECATEDIN_0_9_8(BIGNUM *BN_generate_prime(BIGNUM *ret, int bits, int safe,
^
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/usr/local/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-G9czQT/charm-crypto/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-7FJwPM-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-G9czQT/charm-crypto/
The command '/bin/sh -c pip install --process-dependency-links -e .[dev]' returned a non-zero code: 1
For example, the function `hash()` in `reliablebroadcast.py` could be confused with the Python built-in function `hash()`. It could perhaps be renamed to something like `sha256_digest()` just to avoid confusion and potential problems in the future.
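A rename along those lines is a one-liner; here is a minimal sketch (the wrapper itself is hypothetical, only the suggested name comes from above):

```python
import hashlib

def sha256_digest(data):
    # A descriptively named helper that cannot shadow the built-in hash().
    return hashlib.sha256(data).digest()

# Unlike the built-in hash(), which is salted per-process for str/bytes,
# the result is a stable 32-byte SHA-256 digest.
digest = sha256_digest(b'hello')
```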
Another example is the usage of the callable named `input()` (e.g., in `reliablebroadcast()`):
def reliablebroadcast(sid, pid, N, f, leader, input, receive, send):
    """Reliable broadcast ..."""
    # ...
    if pid == leader:
        # The leader erasure encodes the input, sending one strip to each participant
        m = input()  # block until an input is received
    # ...
`input()` is also a Python built-in function. If somehow `input` is the best name, one trick that is often recommended is to suffix the name with an underscore (`_`). In this case the callable would be renamed `input_()` and the above code snippet would become:
def reliablebroadcast(sid, pid, N, f, leader, input_, receive, send):
    """Reliable broadcast ..."""
    # ...
    if pid == leader:
        # The leader erasure encodes the input, sending one strip to each participant
        m = input_()  # block until an input is received
    # ...
Notice how the highlighting of the `input()` function differs in each code snippet. I assume that is because GitHub uses MagicPython to highlight the code, and MagicPython highlights Python built-in functions.
See related thread https://www.reddit.com/r/pythontips/comments/4m9wiw/avoid_overwriting_python_functions/?st=j6h17gu8&sh=f483ed6e
Also somewhat related Function and method arguments in PEP 8:
If a function argument's name clashes with a reserved keyword, it is generally better to append a single trailing underscore rather than use an abbreviation or spelling corruption. Thus class_ is better than clss. (Perhaps better is to avoid such clashes by using a synonym.)
When running the standard test:
$ docker run -e N="8" -e t="2" -e B="16" -it honeybadgerbft
I got the error:
Concensus Finished
Entering atexit()
msgCounter 9088
msgTypeCounter [[0, 0], [64, 24573], [512, 197096], [2560, 25600], [2496, 24960], [2432, 94848], [512, 20480], [512, 37376]]
Init Echo Val Aux Coin Ready Share
64 512 2560 2496 2432 512 512
24573 197096 25600 24960 94848 20480 37376
Total Message size 400296
timestampE (3, 1538373134.062513)
[3] 9 distinct tx synced and 7 tx left in the pool.
Exception KeyError: KeyError(140707557272464,) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
Process _GProcess-1:
Details:
Investigate linking with this library:
https://crysp.uwaterloo.ca/software/DKG/intro.html
Does it support the 2 kinds of threshold keys we need?
This is actually a question about the protocol, not an issue.
I think I do understand why threshold signatures are needed for the common coin when messages are exchanged over a public network, where some malicious party can intercept, read, and delay them in some smart way. But is cooperative calculation of the common coin value really needed in a network where all nodes are connected only by p2p encrypted channels? Can some kind of hash-based pseudorandom sequence (or just round robin) be used if there is no way for a third party to decrypt messages between any two given nodes unless one of them voluntarily discloses them?
I'm asking because lifting the requirement for threshold crypto could simplify HoneyBadger a lot and make it even more attractive as a PBFT replacement in private or permissioned networks.
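For what it's worth, the kind of deterministic coin the question describes could be as simple as hashing public round data. This is a sketch of the idea only; as the question itself acknowledges, such a coin is predictable, which is exactly the property the threshold coin is designed to avoid:

```python
import hashlib

def predictable_coin(sid, round_no):
    """Derive a 'common coin' bit from public data alone.

    Every node computes the same bit with no interaction, but an
    adversary can predict it too -- the trade-off discussed above.
    """
    digest = hashlib.sha256(('%s:%d' % (sid, round_no)).encode()).digest()
    return digest[0] & 1

# All nodes agree on the bit for a given (sid, round) without messages.
bit = predictable_coin('sidA', 3)
```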
Need tests for individual subprotocols, like reliable broadcast, binary agreement, etc.
Ethereum blocks include a lot of additional information besides just the transactions. Namely, timestamps, votes to raise or lower the per-transaction limit, etc. In consortium applications we may wish to have new node introductions/dismissals in as well.
One way to do this would be to include estimates of metadata in the ACS agreed-upon values, concatenated with the threshold-encrypted transactions. Then the protocol would guarantee that we collect estimates from at least f+1 honest parties, in which case we could do something like take the "median" value.
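As a sketch of the "median" idea (the function and its arguments are hypothetical), taking the median makes the chosen value insensitive to a minority of bogus estimates:

```python
def median_estimate(estimates, f):
    """Pick a metadata value (e.g. a timestamp) from ACS-agreed estimates.

    With N = 3f+1 the ACS output includes >= 2f+1 proposals, of which at
    least f+1 are honest, so the median is bracketed by honest values.
    """
    assert len(estimates) >= 2 * f + 1
    return sorted(estimates)[len(estimates) // 2]

# One outlier (9999) cannot drag the agreed value away from honest input.
ts = median_estimate([100, 101, 102, 9999, 103], f=2)
```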
In addition to the usual statement coverage, coverage.py also supports branch coverage measurement. Where a line in your program could jump to more than one next line, coverage.py tracks which of those destinations are actually visited, and flags lines that haven't visited all of their possible destinations.
-- coverage.py docs: http://coverage.readthedocs.io/en/coverage-4.5.1/branch.html
The experiments need to be updated to match the changes made in the `dev` branch.
NOTE: There's a work-in-progress branch addressing this issue: https://github.com/sbellem/HoneyBadgerBFT/tree/experiments
Not sure if this is the right place to post this, but I'm poking through the latest version of the paper at https://eprint.iacr.org/2016/199.pdf and I am confused by the instruction in step 3 of Figure 1 (HoneyBadgerBFT) to
wait to receive at least f+1 decryption shares for each ciphertext.
Since `TPKE.Dec` holds the responsibility of identifying invalid decryption shares, I assume that up to f of the received f+1 shares can be invalid. f+1 valid shares are required for decryption, so in this case decryption would fail and the honest node running the protocol won't be in agreement with others who decrypted successfully.
It seems that a fix would be a clarification of the language to either one of the following:
1. wait to receive at least 2f potentially invalid decryption shares, or
2. wait to receive f valid shares (which necessitates a `TPKE.ShareValid(C, i, S_i) -> bool`).
In the first case, there are a maximum of f invalid shares, leaving you with f valid shares to be combined with a locally generated share known to be valid, for a total of f+1.
In the second case the f valid shares are combined with the local share.
The second option to me seems more practical because it exits as soon as enough valid shares are collected instead of waiting unnecessarily.
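The second option could be sketched as follows, assuming a hypothetical per-share validity check `share_valid(C, i, S_i)` in the spirit of the `TPKE.ShareValid` suggested above:

```python
def collect_shares(my_share, incoming, f, share_valid, C):
    """Gather f valid remote shares, then combine with our own.

    incoming yields (i, S_i) pairs; invalid shares are simply dropped,
    so we stop as soon as f valid ones arrive (option 2 above).
    """
    valid = {}
    for i, S_i in incoming:
        if share_valid(C, i, S_i):
            valid[i] = S_i
        if len(valid) >= f:
            break
    valid['me'] = my_share   # our local share is known to be valid
    return valid             # f + 1 valid shares: enough to decrypt

# Toy run: one bogus share is skipped, two valid ones are kept (f = 2).
shares = collect_shares('S0', [(1, 'bad'), (2, 'ok'), (3, 'ok')],
                        f=2, share_valid=lambda C, i, s: s == 'ok', C=None)
```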
tl;dr: we could cut down on the threshold signature cryptography used for Common Coins in the ABA protocol by using "forced" common coin values in the optimistic case.
The following suggestion comes from Christian Cachin:
one should take the approach somewhere
described by Klaus Kursawe, trying some "optimistic" randomness first:
every node has a seed programmed in and the first 10 random bits are taken
from a PRG on this -- unless the network knows how to interpret the PRG,
this will do the job! It's an assumption a bit like the random oracle
model for cryptography.
In other words, the common coin cryptography is intended to handle a very extreme attacker scenario, where the attacker learns the common coin value, equivocates at least once, and manipulates the network delivery to selectively delay convergence. The suggested coin schedule is basically [prg(), prg(), ..., prg(), coin(), coin(), ...], where prg() means a coin that is random but predictable, like the hash of the current round number, and coin() is a real threshold common coin.
Note that although this is optimism, it is optimism about adversarial network manipulation rather than optimism about timing or about a leader node like in PBFT, hence still within HoneyBadger ethos :)
Another related suggestion comes from Micali's "Byzantine Agreement, Made Trivial" paper. Although this protocol is intended for the synchronous setting, it contains a nice insight about how to handle termination. Recall that in ABA, it can be the case that some node observes agreement on v in round r, but then we need to wait until the next round r' > r where coin(r') = v for other honest nodes to reach agreement. This is a bit inelegant, since ABA instances may remain lingering around even after terminating. We could "fire and forget" our shares of a few of the remaining common coins, although we wouldn't know exactly how many it will take! Micali's suggestion is to draw coins in the order:
[0, 1, coin(), 0, 1, coin(), 0, 1, coin(), ...]
Another suggestion is to have a special message, v*, which is basically an abbreviation for BVAL(v) and AUX({v}) for all subsequent rounds. This guarantees that at most one more common coin needs to be sent along with v*.
Another observation on the choice of coin schedule: most of the ABA instances need to settle on 1; in fact, N-f of them must do so before any 0s can be input. So we can optimize for the typical case by starting with [1,1,...]. Furthermore, in the case of a benign crash of some nodes, the corresponding ABA instances will receive input 0 from everyone. So [1,1,0,0,...] should resolve most cases in 4 iterations.
So, I'd propose a coin schedule that looks like one of the following:
Option 1: [1,1,0,0,prg(),prg(),prg(),prg(),1,0,coin(),1,0,coin(),...]
Option 2: [1,1,0,0,coin(),1,0,coin(),...]
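Option 2 could be expressed as a schedule function like the following sketch, where `coin` stands for the real threshold common coin (everything else is deterministic and needs no cryptography):

```python
def coin_schedule(r, coin):
    """Coin value for ABA round r under Option 2:
    [1, 1, 0, 0, coin(), 1, 0, coin(), 1, 0, coin(), ...]
    """
    forced = [1, 1, 0, 0]
    if r < len(forced):
        return forced[r]            # forced values: no crypto needed
    k = (r - len(forced)) % 3       # repeating tail: coin(), 1, 0
    if k == 0:
        return coin(r)              # genuine threshold common coin
    return 1 if k == 1 else 0

# The first rounds never touch the threshold coin at all.
sched = [coin_schedule(r, lambda r: 'coin') for r in range(8)]
```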
Both of these have the qualities:
- they start with the predictable forced values 0 and 1
- they allow a node to terminate via v* without even computing a common coin share

Also note: the fix of adding an additional CONF message per #59 is only necessary for those rounds involving coin(), since the purpose of CONF is to prevent a network-manipulating adversary from predicting the coin value before influencing the estimates of the nodes, so it's redundant when the value is predictable anyway.
Right now if the TCP socket is disconnected, it's just broken. We should at least buffer and retry.
Or support multiple fallback strings.
How much buffering should we do? See #1
Right now the main guarantee we provide is to tolerate faults. However, if a fault occurs, we'd like not just to tolerate it, but also to identify the node responsible for the fault. Let's call "fault accountability" the property that failures produce verifiable evidence incriminating the node(s) responsible for the fault.
In the asynchronous communication model, it's impossible to tell the difference between a "crash" and a long network delay. So we can't hope to provide "complete" fault accountability. However, clearly some kinds of failures, for example equivocation, have more promise. (This is related to "slashing" in the world of blockchain Proof-of-Stake protocols).
Right now, we provide only non-transferable authentication along channels (e.g., each message comes with a MAC applied in a TLS session), so very little accountability. On the other extreme, we could in principle require signatures transmitted along with each message, and could store the entire transcript of messages received for later review. This would add a lot of expensive extra overhead.
This ticket is intended to start a design discussion to explore a range of proposals in between these two extremes.
One benefit to gain from fault accountability is optimism. Consider the Common Coin protocol. Each instance of the protocol requires collecting threshold signature shares from each node. Any f+1 correct signature shares can be combined to form the complete signature. However, if any one among the f+1 purported signature shares is incorrect, then combining will fail. Fortunately each signature share can be verified independently, so we can toss out the incorrect ones; however, share verification is expensive (a pairing operation). We could optimistically cut costs by only verifying the individual shares if combining fails; however, this introduces a DoS vulnerability, where a Byzantine node inserts bogus signature shares to trigger this fallback computation. A solution based on fault accountability would be to have nodes provide ordinary (inexpensive, non-threshold) signatures on each signature share. If optimistic combining fails, then we know that at least one share is incorrect, and therefore we are guaranteed to obtain incriminating evidence by performing the expensive per-share check.
This example illustrates the main idea of fault accountability: instead of performing an expensive validation operation right away, first perform a quick check that guarantees the expensive operation will produce verifiable evidence of a faulty node.
To make the example above more efficient in the optimistic case, we could amortize the ordinary signatures, e.g. by signing a Merkle tree of commitments over a batch of signature share values. This discussion should go in a separate issue...
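The optimistic path described above might be structured like this sketch (all function names are hypothetical stand-ins for the real combine/verify primitives):

```python
def optimistic_combine(shares, combine, combine_ok, verify_share):
    """Try combining shares without per-share verification first.

    Only if the cheap combine fails do we pay for the expensive
    per-share checks -- and because each share was ordinarily signed,
    any share failing verify_share() is incriminating evidence.
    """
    sig = combine(shares)
    if combine_ok(sig):
        return sig, []                      # optimistic fast path
    bad = [i for i, s in shares.items() if not verify_share(i, s)]
    good = {i: s for i, s in shares.items() if i not in bad}
    return combine(good), bad               # bad nodes can be accused

# Toy run: 'shares' are ints, a combination is valid iff divisible by 4.
sig, bad = optimistic_combine(
    {1: 4, 2: 4, 3: 7},
    combine=lambda d: sum(d.values()),
    combine_ok=lambda s: s % 4 == 0,
    verify_share=lambda i, s: s % 4 == 0)
```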
I have been puzzling over the Merkle tree re-construction done in RBC. Once a node gets N-f Echo or Val messages with the same root hash H (and the associated Merkle proofs), assuming all those proofs are validated, what does re-creating the Merkle tree and comparing the root hashes get you?
If you have N-f Merkle proofs that you've validated, all agreeing on the same root hash, isn't the chance that the re-assembled shards fail to match the original message slim to none? Re-hashing the Merkle tree (on every peer) for each of N RBC instances gets pretty painful when N is large.
cc @madninja
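To make the recomputation concrete, here is a toy Merkle root (not the project's actual helper). The reason the tree is rebuilt is that the proofs only show each shard is *in* the tree with root H; re-encoding the decoded message and recomputing the root additionally catches a malicious leader who built a valid tree over shards that do not all come from erasure-coding a single message:

```python
import hashlib

def merkle_root(leaves):
    # Hash the leaves, then repeatedly hash adjacent pairs up to the root.
    layer = [hashlib.sha256(x).digest() for x in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])          # duplicate the odd leaf out
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

# RBC-style check: decode from N-f shards, re-encode ALL N shards, and
# confirm the recomputed root equals the root H from the Echo messages.
shards = [b'shard0', b'shard1', b'shard2', b'shard3']
root = merkle_root(shards)
```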
From what I've understood, before broadcast, erasure coding is applied to split the message into blocks. The different blocks are then transmitted to different nodes, where the node receiving a block is responsible for its transmission to the other nodes.
What will happen if a node fails before/after receiving a block, say "ta"? Who will then be responsible for transmitting that block to the other nodes?
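The key property of the erasure code is that no single node is solely responsible for any block: any sufficiently large subset of the blocks suffices to reconstruct the message, so a crashed node's block can be recovered by the others. A toy 2-of-3 XOR code illustrates this (the real implementation uses Reed-Solomon coding via zfec, but the recovery principle is the same):

```python
def encode(a, b):
    """Split a message into blocks a, b plus a parity block a XOR b."""
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def decode(blocks):
    """Recover (a, b) from any 2 of the 3 blocks (None = missing)."""
    a, b, p = blocks
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, p))   # a = b XOR parity
    if b is None:
        b = bytes(x ^ y for x, y in zip(a, p))   # b = a XOR parity
    return a, b

blks = encode(b'ta', b'xi')
blks[0] = None                 # the node holding block "ta" fails
recovered = decode(blks)       # the remaining blocks still recover it
```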
All benchmarking and efficiency analysis so far involves assuming transactions are constant size. In some applications transactions submitted by users may need to vary in size. For example, Ethereum transactions can provide data, Bitcoin transactions spend varying numbers of inputs.
The efficiency proof depends on assuming transactions can be selected uniformly at random. Does it hold if some transactions are large?
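The selection step in question is essentially uniform sampling from the local buffer; a sketch (function name hypothetical):

```python
import random

def propose_batch(buffer, B, N):
    """Each node proposes a uniformly random B/N-size sample of its buffer.

    With fixed-size transactions the analysis bounds the proposal size;
    if transaction sizes vary, a uniformly sampled batch can still be
    arbitrarily large in bytes -- the concern raised above.
    """
    k = min(len(buffer), B // N)
    return random.sample(buffer, k)

txs = ['tx%d' % i for i in range(100)]
batch = propose_batch(txs, B=16, N=8)   # each node proposes B/N = 2 txs
```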
I would like to help with the documentation of the code, and I noticed different styles being used.
In some places it's the normal standard docstrings, as in:
def binaryagreement(sid, ...):
    '''Binary consensus ...

    :param sid: session identifier
    '''
while in other places it seems to be the NumPy style such as in:
def reliablebroadcast(sid, pid, ...):
    """Reliable broadcast

    Args
    ----
    pid : 0 <= pid < N
    """
More generally, as far as I know there are three possible approaches:
The Google and NumPy styles are available via the `sphinx.ext.napoleon` module.
Assuming that help is welcomed, then once a style is chosen I could help with setting up the documentation boilerplate, etc., such that the documentation could be hosted on Read the Docs, for instance, or wherever you wish.
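For reference, if the Google or NumPy style were chosen, enabling it in Sphinx is a one-line change (sketch of a hypothetical docs/conf.py):

```python
# docs/conf.py (sketch): enable Google/NumPy docstring support in Sphinx.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.napoleon',   # parses Google- and NumPy-style docstrings
]
```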
When we were implementing the threshold decryption routines for erlang_tpke https://github.com/helium/erlang-tpke by following what the Python code did, we noticed that threshold decryption seemed to succeed regardless of the inputs. We eventually re-implemented all the threshold decryption routines according to the Baek and Zhang paper, and finally our property-based tests started passing (we do negative testing with duplicate shares, shares generated with the wrong key, and shares for the wrong message).
I don't have specific changes to suggest here, nor the time to assemble them, but I'm pretty convinced your threshold decrypt, as implemented, ends up being a no-op.
The commit where I reworked our implementation to follow the paper rather than the Python implementation is here:
Later commits annotate all those functions with the specific math from the paper(s).
I realize this is not intended to be a production quality implementation, but people should be aware that the threshold decryption doesn't work as advertised, and they should not rely on the Python implementation of it.
Thanks again for all your work and let me know if there's any more information I can provide.
Given the various specialized dependencies, especially the cryptographic ones, this issue is simply concerned with evaluating how feasible it would be to port HoneyBadgerBFT to Python 3 in a reasonable time frame.
The result of the evaluation will be documented here in this issue.
UPDATE: No more blockers. `zfec` now supports Python 3 -- PR tahoe-lafs/zfec#4 was merged.
`zfec` does not support Python 3 at the moment (related issue: tahoe-lafs/zfec#1). There's a PR 😄 --> tahoe-lafs/zfec#4 !
lib | Latest Python 3 support | PyPI: release | PyPI: Py version | PyPI: Uploaded on
---|---|---|---|---
gevent | 3.7 | 1.3a1 | 3.7 | recently
gmpy2 | 3.6 | 2.1.0a1 | 3.6 | recently
pysocks | 3.6 | 1.6.8 | 3.6 | recently
zfec | 3.7 | 1.5.0 | 3.7 | recently
Charm-Crypto | 3.6 (from github) | 0.43 | 3.4 | 2015-08-14
ecdsa | 3.6 (from github) | 0.13 | 3.4 | 2015-02-07
gipc | 3.6 (from github) | 0.6.0 | 3.4 | 2015-07-22
pycrypto | 3.4 | 2.6.1 | 3.4 | 2014-06-20
Some details to watch out for:
The latest `charm-crypto` cannot be installed simply with `pip`, as there is an error when setting config values. This can be worked around by cloning the repo, running the configuration step to create a config file, running make, and then pip-installing. So not a big deal, but inconvenient.
The goal of this issue is first to list the various coding style elements for which there are multiple alternatives but for which some convention is sought, in order to ease contributions, to make the code base uniform in style (and thus more readable), and to reduce potential sources of disagreement over what could very often be considered trivial details.
Once a coding style element is identified, a convention can be agreed on.
Since this is a Python project, PEP 8 -- Style Guide for Python Code is perhaps the first thing that should be considered. Automatic checkers such as `flake8` (as mentioned in #33) can help in identifying coding style "errors". Yet certain things need to be configured; we'll try to highlight some of the most common elements.
It's best to read the section in PEP 8 on this matter. But here's the essence, more or less (quoting):
Limit all lines to a maximum of 79 characters.
For flowing long blocks of text with fewer structural restrictions (docstrings or comments), the line length should be limited to 72 characters.
[...]
Some teams strongly prefer a longer line length. For code maintained exclusively or primarily by a team that can reach agreement on this issue, it is okay to increase the nominal line length from 80 to 100 characters (effectively increasing the maximum length to 99 characters), provided that comments and docstrings are still wrapped at 72 characters.
The Python standard library is conservative and requires limiting lines to 79 characters (and docstrings/comments to 72).
ref: https://www.python.org/dev/peps/pep-0008/#imports
The entire section should more or less be observed.
TODO: List important elements
TODO This needs to be decided
This is not mentioned in PEP 8 and there can be a different style of using imports specifically for tests. One that is worth considering is: https://pylonsproject.org/community-unit-testing-guidelines.html
TODO: explain a bit https://pylonsproject.org/community-unit-testing-guidelines.html
ref: https://www.python.org/dev/peps/pep-0008/#string-quotes
In Python, single-quoted strings and double-quoted strings are the same. This PEP does not make a recommendation for this. Pick a rule and stick to it. When a string contains single or double quote characters, however, use the other one to avoid backslashes in the string. It improves readability.
For triple-quoted strings, always use double quote characters to be consistent with the docstring convention in PEP 257.
tool: https://github.com/zheller/flake8-quotes
ref: https://www.python.org/dev/peps/pep-0257/
This can perhaps be moved to its own issue, but it will be put here for now.
Different test frameworks have their own sets of features and plugins, and are likely to be incompatible with each other.
TODO list some examples (e.g.: nose2 vs pytest)
TODO pros and cons of different frameworks
reddit thread: Nose alternatives - nose2, pytest or something else?
This issue proposes to set up continuous integration with a free service such as Travis CI, so that opening a pull request triggers the execution of a suite of tests, making sure that existing tested functionality is not broken.
Other things such as documentation building and code linting can be tested as well.
Tests that require different data centres spread in different geographical regions will not be considered for this issue for now.
However, performance tests that can be executed fairly easily on a service such as Travis CI will be considered when addressing this issue.
So to summarize, as a minimum the following should be tested as part of the CI phase:
flake8
sphinx-build
The author of this issue is most familiar with Travis CI, and hence it was mentioned as a possible service. There are most probably multiple alternatives, one of which is CircleCI.
A small list of possible services can be maintained here for the time being until a decision is made, assuming that the proposed idea of setting up CI is well received.
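As a rough sketch only (the service, Python version, and directory layout are assumptions, not decisions), a minimal Travis CI configuration covering the two checks listed above might look like:

```yaml
language: python
python:
  - "2.7"
install:
  - pip install flake8 sphinx
script:
  - flake8 honeybadgerbft        # package directory name assumed
  - sphinx-build -W docs docs/_build   # docs directory name assumed
```

The `-W` flag makes Sphinx treat warnings as errors, so documentation rot fails the build early.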
Not enough thought has yet gone into choosing appropriate key sizes and e.g. the elliptic curves / pairings used for threshold crypto.
In the case of threshold encryption, at least, the individual instances are short-lived. However, unless keys are regenerated / redistributed (#5) occasionally, a long running instance of HoneyBadger may have long term keys.
Just poking around with Charm, I've found it difficult to get other curve choices to work. This can probably be addressed by using a different version of Charm or of its PBC dependency.
So far the demos have only used plain unauthenticated sockets. Clearly we need TLS, with client/server authentication and self-signed certificates. This is most likely a prerequisite for #7.
Also needs a script to generate certificates along with the other keys.
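Such a script could be as simple as the following sketch (file names, key size, and the CN scheme are placeholders, not the project's actual layout):

```shell
#!/bin/sh
# Generate one self-signed certificate and private key per node.
# Placeholder naming: node$i.key / node$i.crt with CN=node$i.
for i in 0 1 2 3; do
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=node$i" \
        -keyout "node$i.key" -out "node$i.crt" 2>/dev/null
done
```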
The purpose of this issue is to communicate small improvements (such as typos) to the research paper by Miller et al. The Honey Badger of BFT Protocols.
This helps to catch "style" deviations, such as improper indentation, unused imports, etc.
flake8 is commonly used for this.
The goal of this issue is to set up the check in Travis with "relaxed" conventions.
Conventions such as line length, single quotes vs double quotes, etc, can be established afterwards.
To diminish the risk of introducing errors at the expense of "style", code coverage (e.g. #10) should be given priority.
For more on code quality tools see https://github.com/PyCQA.
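As one possible starting point (the specific codes to relax are a guess based on the report below, not a final decision), a "relaxed" flake8 configuration in `setup.cfg` could look like:

```ini
[flake8]
# Allow longer lines for now; tighten later if the team agrees.
max-line-length = 99
# Temporarily ignore pure-whitespace and comment-style codes;
# this list is illustrative and should be revisited.
ignore = W291, W293, E265
```

Whitespace-only codes are the safest to defer, since fixing them later cannot change program behavior.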
Things to fix (full report):
qty | err code | description |
---|---|---|
8 | E111 | indentation is not a multiple of four |
1 | E114 | indentation is not a multiple of four (comment) |
1 | E116 | unexpected indentation (comment) |
18 | E201 | whitespace after '(' |
20 | E202 | whitespace before ')' |
10 | E203 | whitespace before ':' |
12 | E211 | whitespace before '[' |
15 | E221 | multiple spaces before operator |
3 | E225 | missing whitespace around operator |
1 | E227 | missing whitespace around bitwise or shift operator |
117 | E231 | missing whitespace after ':' |
11 | E261 | at least two spaces before inline comment |
3 | E262 | inline comment should start with '# ' |
93 | E265 | block comment should start with '# ' |
14 | E266 | too many leading '#' for block comment |
3 | E271 | multiple spaces after keyword |
5 | E272 | multiple spaces before keyword |
72 | E302 | expected 2 blank lines, found 1 |
6 | E303 | too many blank lines (2) |
14 | E305 | expected 2 blank lines after class or function definition, found 1 |
3 | E306 | expected 1 blank line before a nested definition, found 0 |
11 | E402 | module level import not at top of file |
38 | E501 | line too long (118 > 79 characters) |
41 | E701 | multiple statements on one line (colon) |
1 | E703 | statement ends with a semicolon |
4 | E712 | comparison to True should be 'if cond is True:' or 'if cond:' |
7 | E731 | do not assign a lambda expression, use a def |
27 | F401 | 'honeybadgerbft.crypto.threshsig.boldyreva.deserialize1' imported but unused |
1 | F403 | 'from ecdsa_ssl import *' used; unable to detect undefined names |
1 | F405 | 'KEY' may be undefined, or defined from star imports: ecdsa_ssl |
1 | F812 | list comprehension redefines 't' from line 80 |
3 | F821 | undefined name '_combine_and_verify' |
12 | F841 | local variable 'aba' is assigned to but never used |
29 | W291 | trailing whitespace |
49 | W293 | blank line contains whitespace |
7 | W391 | blank line at end of file |
1 | W604 | backticks are deprecated, use 'repr()' |
If a client creates an N-way doublespend, but each node only sees one of them, then it is possible that this will not be noticed until all of them are committed.
The simplest thing to do is just reject all but the first transaction. That is, the transaction will be "committed" in the blockchain, but discarded after processing with no effect. However, this introduces a DoS vector because double-spends use up space, but are cheap. This could be related to #8 on variable size transactions.
One approach may be to require parties to reserve some "double spend insurance", which is collateral that can be taken in compensation in the case of a double-spend.
Unfortunately requiring this much collateral could require a lot of money. Is there a tradeoff here?
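The "reject all but the first" policy can be sketched as follows. This is a toy model, not the codebase's API: `spent` stands for the set of transaction inputs already consumed, and each transaction is a hypothetical `(inputs, payload)` pair.

```python
def process_block(block, spent):
    """Apply committed transactions in order.

    A later double-spend is still committed in the block, but is
    discarded during processing and has no effect on state.
    """
    applied = []
    for inputs, payload in block:
        if inputs & spent:       # some input already consumed: double-spend
            continue             # discard with no effect
        spent |= inputs          # first spend wins
        applied.append(payload)
    return applied

# Two transactions spend the same input "a"; only the first takes effect.
state = set()
result = process_block([({"a"}, "tx1"), ({"a"}, "tx2"), ({"b"}, "tx3")], state)
```

Note that the discarded transaction still occupies block space, which is exactly the DoS vector described above.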