
tf-encrypted / tf-encrypted


A Framework for Encrypted Machine Learning in TensorFlow

Home Page: https://tf-encrypted.io/

License: Apache License 2.0

Python 88.36% Shell 0.77% Dockerfile 0.03% Makefile 1.17% C++ 9.41% Starlark 0.25%
secure-computation machine-learning tensorflow privacy cryptography deep-learning confidential-computing

tf-encrypted's Issues

PyPI Registration

Placeholder issue to register tf-encrypted with PyPI in the future for managing releases and pip installation. We've decided to hold off on this while we work on new features and figure out other UX/deployment issues first.

Explicit caching operation

The cache operation allows for reuse of values between session runs by basically storing tensors in variables behind the scenes. This code was present in tensorspdz but has not been ported yet.

Note that the cache update strategy used in tensorspdz might not be fine-grained enough for future applications and should probably be reconsidered.
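The intended semantics can be sketched in plain Python (a stand-in; in the actual TF implementation the computed tensor would be stored in a variable behind the scenes, and the names here are illustrative):

```python
# Hedged sketch of the caching semantics (plain Python stand-in; in TF
# the computed tensor would live in a variable between session runs).
class Cache:
    def __init__(self):
        self._store = {}

    def get(self, key, compute):
        # First access computes and stores; later accesses reuse the value.
        if key not in self._store:
            self._store[key] = compute()
        return self._store[key]

cache = Cache()
calls = []
first = cache.get("masked_w", lambda: calls.append(1) or 42)
second = cache.get("masked_w", lambda: calls.append(1) or 42)
# compute ran once; the second access reused the cached value
```

A more fine-grained strategy would attach an invalidation rule per key rather than caching unconditionally, which is the reconsideration mentioned above.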

Allow options to be passed to GCP scripts

  • create should put "$@" at the end to allow, e.g., passing in different machine types
  • change the README to mention this in case the CPU quota is reached
  • make 1 CPU the default and suggest 4 CPUs if allowed by the account?
  • also change the order in delete

Setup Continuous Integration

Tests, linters, etc., should run on every pull request before they get merged.

We could look at Circle CI or Travis CI to achieve this :)

Curated onboarding experience for contributors

We want to make it easy for anyone to get up and running with tf-encrypted and help them contribute to the project.

It'd be nice for us to set up the following:

  • Write up contributor guidelines including how to get started with development, expectations around issue management, pull request templates, etc.
  • A code of conduct which states our expectations for how everyone who works on the project interacts with one another.
  • A slew of issues labeled as help wanted / good first issue that are clear, concise, and actionable to make it easy for someone to jump in and start moving the ball forward.

That's a few of the things we can do to improve/create an onboarding experience for contributors to the project.

I think these things would make it easier for us to do our work as well!

to_native not working in some cases

Seems like to_native on an int100 tensor isn't working in some cases.

@jvmancuso ran into this issue and I did as well; not 100% sure what causes it, because it generally works.

This reproduces it, though:

input = np.array((1,0,1,0, 0,1,0,1, 1,0,1,0, 0,1,0,1)).reshape(1, 4, 4, 1)
pool_input = prot.define_public_variable(input)
x = pool_input.reshape(1, 1, 4, 4)
x = x.im2col(whatever)
print(f'wow! {x}') # this will call `__repr__` which will call `to_native`

conv2d backward

Implement and test the backward function in Pond's conv2d layer. This function computes the backpropagated error d_x and the weight update d_w.
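In the im2col formulation the backward pass reduces to two matrix products; a hedged numpy sketch (shapes and names here are illustrative, not Pond's actual internals):

```python
import numpy as np

# Hedged numpy sketch of conv2d backward in the im2col formulation.
# Assumed forward pass: out_flat = w_flat @ x_col, where
#   x_col:  (C*kh*kw, N*oh*ow) columns produced by im2col
#   w_flat: (F, C*kh*kw) flattened filters
def conv2d_backward(d_out_flat, x_col, w_flat):
    # d_out_flat: (F, N*oh*ow) backpropagated error at the output
    d_w = d_out_flat @ x_col.T       # weight update d_w (F, C*kh*kw)
    d_x_col = w_flat.T @ d_out_flat  # error w.r.t. the im2col'd input;
                                     # a col2im step maps it back to d_x
    return d_w, d_x_col
```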

Different tensorflow protobuf formats

There are a few different tensorflow protobuf formats.

For importing we currently only support GraphDef, which we assume has the weights frozen directly in it. There are also cases where the GraphDef has the weights stored in separate checkpoint files.

We need to investigate whether it's better to support all three versions when converting a pre-trained model to an MPC graph, or if supporting only GraphDef is sufficient.

My gut tells me we can get away with supporting only GraphDef for a while, and once an MPC graph is built we can export it to a SavedModel for use with tensorflow serving (if we figure out that we can make use of tensorflow serving).

averagePooling2d SAME padding issue

Currently, AveragePooling2D works when the input is "tiled" by the kernel/stride settings (i.e. a 2x2 kernel over a 4x4 input with stride 2), and this is true whether padding is SAME or VALID. It also works for more general cases when padding is VALID. However, when padding is SAME and the pool_size requires zero padding in order to complete the last pooling patch, the behavior in TFE diverges from TF. This bug relates to how the count_include_pad argument works in pytorch.

In TFE, it's as if we've chosen count_include_pad=True, so that the zero padding gets included in our calculation of the average.
In TF, pooling with SAME padding works as if count_include_pad=False, so that the zero padding does not get included in the calculation of the average.

Ideally, we'd just switch, but our current AveragePooling2D implements its average by summing over the correct pooling sections, constructing a public/private/masked tensor from the result of that sum, and then multiplying with the inverse of pool_height*pool_width. However, this means that the zero padding (if it exists) must necessarily be included in the average.

The only way to fix it within the current paradigm (still doing a private-public multiply after the sum operation) would be to separate the summed tensor into patches where the zero padding is included, multiply those with pool_height*pool_width - number_of_padded_zeros instead, and then recombine them all into the correct output tensor with padding not included. The major problem here is that we'd likely lose any speed we gain from using im2col in the first place; a cleaner solution would be to do everything we need straight from the output of tf.extract_image_patches.

Definitely open to suggestions here! We should consider the trade-offs of overengineering this piece for complete compatibility versus documenting the discrepancy and advising users to stick with certain padding settings when doing pooling.
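The divergence is easy to see on a toy example in plain numpy (not TFE code): a 3x3 input pooled with a 2x2 kernel and stride 2 under SAME padding needs one row/column of zeros to complete the last patches.

```python
import numpy as np

# Toy illustration of count_include_pad under SAME padding (plain numpy).
x = np.arange(1.0, 10.0).reshape(3, 3)             # 3x3 input, values 1..9
padded = np.pad(x, ((0, 1), (0, 1)))               # zero-pad to 4x4 for SAME
valid = np.pad(np.ones_like(x), ((0, 1), (0, 1)))  # 1 = real element, 0 = pad

corners = [(i, j) for i in (0, 2) for j in (0, 2)]  # 2x2 patches, stride 2

# Current TFE behavior (count_include_pad=True): divide by kernel size.
include_pad = [padded[i:i+2, j:j+2].sum() / 4 for i, j in corners]

# TF's SAME behavior (count_include_pad=False): divide by #real elements.
exclude_pad = [padded[i:i+2, j:j+2].sum() / valid[i:i+2, j:j+2].sum()
               for i, j in corners]
```

Only the top-left patch (no padding) agrees; every patch touching the padding differs between the two conventions.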

Python package

Make the library available as an easy-to-use package.

rename 'master' node

We should consider renaming the 'master' node to 'api' or similar to avoid the master/slave dichotomy that is now considered politically incorrect.

64bit tensors with natural mod reductions

For some applications 64 bits might be enough, in which case we can not only avoid the use of the CRT but also potentially switch to "native modulo reductions" in the form of natural wrap-arounds, as opposed to explicit % operations. This combined setting is for instance used in SecureML.
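The wrap-around we'd rely on can be demonstrated with numpy's unsigned 64-bit arrays, where ring arithmetic mod 2**64 comes for free:

```python
import numpy as np

# Sketch: with 64-bit words, arithmetic mod 2**64 happens as natural
# wrap-around, with no explicit % reductions over CRT moduli needed.
a = np.array([2**64 - 1], dtype=np.uint64)
b = np.array([2], dtype=np.uint64)
wrapped = a + b  # (2**64 + 1) mod 2**64 == 1
```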

Flatten operation import conversion

To be able to import a flatten operation we need to be able to run the following operations on the underlying tensors:

  • Pack
  • Reshape
  • StridedSlice
  • Shape
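The role each op plays can be mimicked in numpy (illustrative only; the real graph uses the tensorflow ops listed above):

```python
import numpy as np

# Numpy mimicry of how a flatten lowers to these four ops.
x = np.zeros((2, 3, 4, 5))                 # batch of activations

shape = np.array(x.shape)                  # Shape
batch = shape[0:1]                         # StridedSlice: keep batch dim
new_shape = np.concatenate([batch, [-1]])  # Pack the target shape [N, -1]
flat = x.reshape(new_shape)                # Reshape -> (2, 60)
```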

First steps for neural networks

Implement Keras-like layers and models for easily expressing (sequential) neural networks, along the lines of what's done in pond.

Add an additional matching example.
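The kind of structure in mind can be sketched in plain Python; the class names and interfaces here are hypothetical, not tfe's actual API:

```python
import numpy as np

# Illustrative sketch of a Keras-like sequential API (hypothetical names).
class Dense:
    def __init__(self, in_dim, out_dim):
        self.w = np.zeros((in_dim, out_dim))
        self.b = np.zeros(out_dim)

    def forward(self, x):
        return x @ self.w + self.b

class Sequential:
    def __init__(self, layers):
        self.layers = layers

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

model = Sequential([Dense(4, 8), Dense(8, 2)])
out = model.forward(np.ones((1, 4)))  # shape (1, 2)
```

In the tfe setting, each layer's forward would operate on protocol tensors (public/private/masked) rather than raw numpy arrays.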

Implement SecureNN protocol

The 3/4-party SecureNN protocol claims good performance as well as offering additional activation functions such as ReLU without switching to garbled circuits (using existing protocols that seem adaptable to the 2-party setting as well).

improve linting

We have currently satisfied the pycodestyle linter, but we would like to check for more kinds of errors, so we are adding pyflakes.

Many other tools could also be used like flake8 or pylint.

It would be interesting to know if those tools are complementary or redundant.
My current understanding is:

  • pyflakes and flake8 are largely redundant (flake8 bundles pyflakes together with pycodestyle), while pylint is complementary to both.
  • pylint seems very pedantic and might be too much for a linter.

Split Pond conv2d into im2col and dot product

In the current implementation of conv2d in pond, the im2col operation is not a node in our graph. If we split conv2d into two separate nodes, we can reuse the output of im2col in the backward phase, which is more computationally efficient.
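A hedged numpy sketch of im2col as a standalone op (a naive loop version for clarity; the real node would be built from tensorflow ops):

```python
import numpy as np

# im2col as its own reusable node: compute x_col once in the forward
# pass, keep it around, and reuse it when computing d_w in backward.
def im2col(x, kh, kw, stride=1):
    n, c, h, w = x.shape
    oh = (h - kh) // stride + 1
    ow = (w - kw) // stride + 1
    cols = np.empty((c * kh * kw, n * oh * ow))
    col = 0
    for b in range(n):
        for i in range(oh):
            for j in range(ow):
                patch = x[b, :, i*stride:i*stride+kh, j*stride:j*stride+kw]
                cols[:, col] = patch.ravel()
                col += 1
    return cols
```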

More robust conv2d

The current convolution implementation isn't as robust as the built-in tensorflow one. If we're going to be able to import any tensorflow graph, we'll need to match the robustness of that implementation.

Two main issues I've come across:

  • tensorflow accepts 2 or 4 strides for a 2d convolution; I think right now we only support a single stride value
  • tensorflow can support both NCHW and NHWC formats, whereas we only support NCHW

There are probably other edge cases, and at some point we should test them all.
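The two issues above amount to an argument-normalization step; a hedged sketch (function name and conventions are illustrative, not our actual implementation):

```python
import numpy as np

# Hypothetical normalization: accept 2- or 4-element stride specs and
# both data formats, canonicalizing everything to NCHW + (sh, sw).
def normalize_conv2d_args(x, strides, data_format="NCHW"):
    if len(strides) == 4:
        # tensorflow-style [1, sh, sw, 1] (NHWC) or [1, 1, sh, sw] (NCHW)
        if data_format == "NHWC":
            sh, sw = strides[1], strides[2]
        else:
            sh, sw = strides[2], strides[3]
    else:
        sh, sw = strides
    if data_format == "NHWC":
        x = np.transpose(x, (0, 3, 1, 2))  # NHWC -> NCHW
    return x, (sh, sw)
```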

Setup linting

We should add a linter to the code base so our code is conformant!

One session run, multiple predictions

We are currently calling TF's session.run() for each and every prediction, which is known to be inefficient.

In our case it's a killer, because TF spends time optimizing the graph for each run, and this time can be significant.

In production, served models should be loaded once and kept hot, listening for the next input to come. The goal here is to find an "in-between" solution: avoid tf.serving for now and yet be efficient with a Python flask server.
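The pattern itself is simple; a minimal sketch with a trivial stand-in model (names are illustrative):

```python
# Load-once, keep-hot pattern: pay the expensive graph build/optimize
# cost once at startup, then serve many predictions from the same
# long-lived session. The model below is a trivial stand-in.
class PredictionServer:
    def __init__(self, build_model):
        self.model = build_model()  # expensive: done once at startup

    def predict(self, x):
        return self.model(x)        # cheap: no per-request graph work

server = PredictionServer(lambda: (lambda x: x * 2))
```

Wrapped in a flask route, predict would be called per request against the already-initialized session.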

Connection attempt never times out

If for some reason we can't connect to one or more nodes, the program just sits stuck and never times out. We should add a reasonable default (say 30s)?
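At the socket level this is a one-line fix; a hedged sketch (the 30s default is a placeholder, and the helper name is illustrative):

```python
import socket

# Bound each connection attempt instead of blocking forever; raises
# socket.timeout if the node doesn't answer within `timeout` seconds.
def connect_with_timeout(host, port, timeout=30.0):
    return socket.create_connection((host, port), timeout=timeout)
```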

How does tf-encrypted integrate with TF Serving?

Tensorflow Serving is the de facto serving environment for predicting with a model trained in tensorflow.

We should figure out how this framework could/would integrate with that ecosystem component.

New Benchmark using tfe

Goals:

  • Benchmarking tfe computation time locally with 3 players using a 5-layer conv2d + sigmoid model
  • Benchmarking tfe computation time remotely with 3 players on AWS around North America with the same 5-layer conv2d + sigmoid model
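A simple harness for these measurements might look like the following (a sketch; the warm-up run is there to absorb one-time graph optimization costs):

```python
import time

# Average wall-clock time over several runs, after a warm-up run that
# absorbs one-time costs such as graph optimization and caching.
def benchmark(fn, warmup=1, runs=10):
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs
```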

Document project status

Right now the project is pre-alpha; it's pretty much a proof of concept! We make no guarantees around stability, and it's far from a polished or working product.

We should advertise this fact in the README.
