
lattice's Introduction

TensorFlow Lattice

TensorFlow Lattice is a library that implements constrained and interpretable lattice-based models. It is an implementation of Monotonic Calibrated Interpolated Look-Up Tables in TensorFlow.

The library enables you to inject domain knowledge into the learning process through common-sense or policy-driven shape constraints. This is done using a collection of Keras layers that can satisfy constraints such as monotonicity, convexity, and pairwise trust (a short composition sketch follows the list):

  • PWLCalibration: piecewise linear calibration of signals.
  • CategoricalCalibration: mapping of categorical inputs into real values.
  • Lattice: interpolated look-up table implementation.
  • Linear: linear function with monotonicity and norm constraints.
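As a rough illustration of how these layers compose (a minimal sketch with made-up keypoints and two features, not an official example):

    import numpy as np
    import tensorflow as tf
    import tensorflow_lattice as tfl

    # Two scalar inputs, each calibrated by a monotone piecewise-linear
    # function, then fused by a 2x2 lattice that is monotone in both inputs.
    inputs = [tf.keras.layers.Input(shape=(1,)) for _ in range(2)]
    calibrated = [
        tfl.layers.PWLCalibration(
            input_keypoints=np.linspace(0.0, 1.0, num=10, dtype=np.float32),
            output_min=0.0, output_max=1.0,
            monotonicity='increasing')(x)
        for x in inputs
    ]
    output = tfl.layers.Lattice(
        lattice_sizes=[2, 2],
        monotonicities=['increasing', 'increasing'],
        output_min=0.0, output_max=1.0)(calibrated)
    model = tf.keras.Model(inputs=inputs, outputs=output)
    model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(0.01))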

The library also provides easy-to-set-up canned estimators for common use cases (a configuration sketch follows the list):

  • Calibrated Linear
  • Calibrated Lattice
  • Random Tiny Lattices (RTL)
  • Crystals
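A minimal sketch of configuring one of these (a calibrated lattice). The feature names here are hypothetical, and training additionally needs feature_columns and input_fns for your dataset (omitted):

    import tensorflow_lattice as tfl

    model_config = tfl.configs.CalibratedLatticeConfig(
        feature_configs=[
            tfl.configs.FeatureConfig(name='age', lattice_size=2,
                                      monotonicity='increasing'),
            tfl.configs.FeatureConfig(name='hours_per_week', lattice_size=2),
        ])
    # estimator = tfl.estimators.CannedClassifier(
    #     feature_columns=..., model_config=model_config,
    #     feature_analysis_input_fn=...)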

With TF Lattice you can use domain knowledge to better extrapolate to the parts of the input space not covered by the training dataset. This helps avoid unexpected model behaviour when the serving distribution is different from the training distribution.

You can install our prebuilt pip package using

pip install tensorflow-lattice

lattice's People

Contributors

khgkim, mmilanifard, si-you, wbakst, zhangxiangnick


lattice's Issues

Academic Citations

Would it be possible to include a blurb about citing TensorFlow Lattice in academic publications?

Calibrations are not Monotonic

I'm trying to train a calibration to incorporate into TF Ranking, similar to #35.

My code for the calibration looks like this:

    import numpy as np
    import tensorflow as tf
    import tensorflow_lattice as tfl

    kp_inits = {
        'num_keypoints': 101,
        'input_min':     0.0,
        'input_max':     25.0,
        'output_min':    0.0,
        'output_max':    1.0,
    }
    # Flatten and concatenate the six input signals into one (batch, 6) tensor.
    sample_input = [
        tf.compat.v1.layers.flatten(group_features[name])
        for name in ['1', '2', '3', '4', '5', '6']
    ]
    sample_layer = tf.concat(sample_input, 1)
    # One multi-unit calibrator: each of the 6 units calibrates one signal.
    calib_layer = tfl.layers.PWLCalibration(
        input_keypoints=np.linspace(
            kp_inits['input_min'],
            kp_inits['input_max'],
            num=kp_inits['num_keypoints'],
            dtype=np.float32),
        units=len(sample_input),
        output_min=kp_inits['output_min'],
        output_max=kp_inits['output_max'],
        monotonicity='increasing',
        name="sample_calib")(sample_layer)

The calibrations learned by this model are shown below. The bounds are being respected, but the calibrations do not appear monotonic. Any insights on what could be missing?

tensor_name:  groupwise_dnn_v2/group_score/final_calib/pwl_calibration_kernel
array([[0.00226, 0.00571, 0.00000, 0.00020, 0.00000, 0.00122],
       [0.00093, 0.00571, 0.00000, 0.00030, 0.00151, 0.00084],
       [0.00060, 0.00571, 0.00479, 0.00057, 0.00214, 0.00158],
       [0.00058, 0.00571, 0.06334, 0.00070, 0.00306, 0.00708],
       [0.00049, 0.00571, 0.06504, 0.00077, 0.00456, 0.00930],
       [0.00034, 0.00571, 0.04417, 0.00113, 0.00586, 0.00959],
       [0.00027, 0.00571, 0.02631, 0.00113, 0.01959, 0.00967],
       [0.00022, 0.00571, 0.02858, 0.00125, 0.02807, 0.00976],
       [0.00015, 0.00571, 0.01595, 0.00147, 0.02679, 0.00968],
       [0.00006, 0.00571, 0.02027, 0.00158, 0.02347, 0.00969],
       [0.00006, 0.00571, 0.02155, 0.00156, 0.01759, 0.00968],
       [0.00007, 0.00571, 0.02454, 0.00170, 0.01365, 0.00969],
       [0.00027, 0.00571, 0.02354, 0.00218, 0.01264, 0.00967],
       [0.00072, 0.00571, 0.01969, 0.00236, 0.00865, 0.00966],
       [0.00072, 0.00571, 0.01615, 0.00239, 0.00707, 0.00962],
       [0.00073, 0.00571, 0.01533, 0.00250, 0.00666, 0.00966],
       [0.00074, 0.00569, 0.01530, 0.00243, 0.01226, 0.00971],
       [0.00075, 0.00562, 0.01565, 0.00255, 0.01212, 0.00972],
       [0.00087, 0.00562, 0.01165, 0.00242, 0.01296, 0.00973],
       [0.00077, 0.00562, 0.00930, 0.00264, 0.01398, 0.00981],
       [0.00065, 0.00562, 0.00851, 0.00263, 0.01098, 0.00998],
       [0.00055, 0.00562, 0.00652, 0.01464, 0.00634, 0.01010],
       [0.00418, 0.00562, 0.00747, 0.03795, 0.00606, 0.01015],
       [0.09591, 0.00562, 0.00930, 0.04783, 0.00609, 0.01021],
       [0.08237, 0.00562, 0.01206, 0.05157, 0.00618, 0.01027],
       [0.06946, 0.00562, 0.01499, 0.05142, 0.00673, 0.01040],
       [0.04949, 0.00563, 0.01555, 0.04344, 0.00697, 0.01057],
       [0.03420, 0.00563, 0.01226, 0.03337, 0.00713, 0.01063],
       [0.01487, 0.00563, 0.01086, 0.02088, 0.00765, 0.01063],
       [0.01598, 0.00563, 0.00994, 0.01747, 0.00851, 0.01065],
       [0.01073, 0.00560, 0.00936, 0.01679, 0.00775, 0.01066],
       [0.03031, 0.00569, 0.00805, 0.01175, 0.00889, 0.01059],
       [0.03343, 0.00575, 0.00763, 0.01121, 0.01002, 0.01051],
       [0.02606, 0.00581, 0.00820, 0.02078, 0.00908, 0.01043],
       [0.02077, 0.00583, 0.00856, 0.02880, 0.00955, 0.01044],
       [0.01382, 0.00591, 0.01037, 0.03544, 0.00865, 0.01046],
       [0.03073, 0.00594, 0.00828, 0.03041, 0.00840, 0.01050],
       [0.03391, 0.00594, 0.00746, 0.02282, 0.00790, 0.01044],
       [0.01764, 0.00594, 0.00724, 0.01575, 0.00704, 0.01045],
       [0.00958, 0.00594, 0.00875, 0.01957, 0.00712, 0.01046],
       [0.01312, 0.00594, 0.00840, 0.02152, 0.00719, 0.01043],
       [0.00948, 0.00594, 0.00937, 0.02334, 0.00870, 0.01038],
       [0.00919, 0.00594, 0.00861, 0.02835, 0.01373, 0.01038],
       [0.00858, 0.00594, 0.00837, 0.03308, 0.01630, 0.01039],
       [0.00821, 0.00594, 0.00813, 0.03074, 0.01830, 0.01035],
       [0.00769, 0.00594, 0.00778, 0.02386, 0.01803, 0.01038],
       [0.00692, 0.00594, 0.00723, 0.02606, 0.01497, 0.01042],
       [0.00617, 0.00594, 0.00752, 0.02047, 0.01534, 0.01046],
       [0.00491, 0.00594, 0.00549, 0.02386, 0.01614, 0.01054],
       [0.00356, 0.00594, 0.00430, 0.02269, 0.01515, 0.01060],
       [0.00272, 0.00594, 0.00469, 0.02590, 0.01450, 0.01066],
       [0.00229, 0.00594, 0.00458, 0.02557, 0.01211, 0.01068],
       [0.01078, 0.00594, 0.00434, 0.01946, 0.01062, 0.01071],
       [0.01128, 0.00594, 0.00421, 0.02055, 0.00987, 0.01077],
       [0.01048, 0.00594, 0.00449, 0.01359, 0.01097, 0.01075],
       [0.00929, 0.00594, 0.00493, 0.00661, 0.01122, 0.01074],
       [0.00908, 0.00594, 0.00520, 0.00450, 0.01201, 0.01077],
       [0.00915, 0.00594, 0.00539, 0.00457, 0.01152, 0.01070],
       [0.00865, 0.00594, 0.00550, 0.00395, 0.01111, 0.01106],
       [0.00780, 0.00594, 0.00559, 0.00321, 0.01133, 0.01122],
       [0.00660, 0.00594, 0.00597, 0.00328, 0.01170, 0.01175],
       [0.00500, 0.00594, 0.00572, 0.00270, 0.01158, 0.01207],
       [0.00403, 0.00594, 0.00572, 0.00256, 0.01027, 0.01225],
       [0.00307, 0.00594, 0.00582, 0.00232, 0.00874, 0.01247],
       [0.00296, 0.00594, 0.00540, 0.00255, 0.00742, 0.01260],
       [0.00251, 0.00594, 0.00485, 0.00243, 0.00769, 0.01298],
       [0.00195, 0.00594, 0.00418, 0.00194, 0.00740, 0.01293],
       [0.00176, 0.00594, 0.00366, 0.00158, 0.00730, 0.01336],
       [0.00180, 0.00594, 0.00359, 0.00158, 0.00700, 0.01317],
       [0.00117, 0.00594, 0.00410, 0.00157, 0.00688, 0.01324],
       [0.00110, 0.00594, 0.00426, 0.00138, 0.00663, 0.01354],
       [0.00103, 0.00594, 0.00443, 0.00102, 0.00638, 0.01395],
       [0.00091, 0.00594, 0.00450, 0.00076, 0.00621, 0.01255],
       [0.00082, 0.00594, 0.00471, 0.00051, 0.00602, 0.01161],
       [0.00078, 0.00594, 0.00476, 0.00048, 0.00611, 0.01280],
       [0.00064, 0.00594, 0.00502, 0.00042, 0.00627, 0.01283],
       [0.00061, 0.00594, 0.00540, 0.00068, 0.00639, 0.01050],
       [0.00055, 0.00594, 0.00563, 0.00036, 0.00642, 0.00967],
       [0.00049, 0.00594, 0.00592, 0.00014, 0.00462, 0.00966],
       [0.00041, 0.00594, 0.00623, 0.00008, 0.00465, 0.00946],
       [0.00027, 0.00594, 0.00633, 0.00000, 0.00444, 0.00900],
       [0.00017, 0.00594, 0.00646, 0.00000, 0.00494, 0.00867],
       [0.00012, 0.00594, 0.00646, 0.00000, 0.00551, 0.00840],
       [0.00006, 0.00594, 0.00628, 0.00000, 0.00601, 0.00850],
       [0.00000, 0.00594, 0.00638, 0.00000, 0.00579, 0.00836],
       [0.00000, 0.00594, 0.00674, 0.00000, 0.00466, 0.00855],
       [0.00000, 0.00594, 0.00692, 0.00000, 0.00417, 0.00834],
       [0.00000, 0.00594, 0.00722, 0.00000, 0.00405, 0.00664],
       [0.00000, 0.00594, 0.00784, 0.00000, 0.00371, 0.00354],
       [0.00000, 0.00594, 0.00866, 0.00000, 0.00367, 0.00315],
       [0.00000, 0.00594, 0.00929, 0.00000, 0.00349, 0.00210],
       [0.00000, 0.00594, 0.01070, 0.00000, 0.00290, 0.00230],
       [0.00000, 0.00594, 0.01128, 0.00000, 0.00200, 0.00137],
       [0.00000, 0.00594, 0.01080, 0.00000, 0.00205, 0.00143],
       [0.00000, 0.00594, 0.01057, 0.00000, 0.00189, 0.00127],
       [0.00000, 0.00594, 0.00886, 0.00000, 0.00189, 0.00000],
       [0.00000, 0.00594, 0.00513, 0.00000, 0.00190, 0.00000],
       [0.00000, 0.00594, 0.00323, 0.00000, 0.00092, 0.00000],
       [0.00000, 0.00594, 0.00260, 0.00000, 0.00074, 0.00038],
       [0.00000, 0.00594, 0.00145, 0.00000, 0.00047, 0.00065],
       [0.00000, 0.00594, 0.00030, 0.00000, 0.00000, 0.00032]],
      dtype=float32)
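For what it's worth: if this kernel uses the delta parameterization of tfl.layers.PWLCalibration in TFL 2.x (an assumption here), the rows after the first are per-segment height deltas rather than the calibrator's outputs, so the calibration curve itself is the cumulative sum down each column. A toy sketch of that reading (made-up values, not the dump above):

    import numpy as np

    # Toy 2-unit kernel in the assumed layout: row 0 is the output at the
    # first keypoint, rows 1..n are per-segment height deltas.
    kernel = np.array([[0.10, 0.00],
                       [0.05, 0.20],
                       [0.00, 0.15],
                       [0.30, 0.05]], dtype=np.float32)
    outputs = np.cumsum(kernel, axis=0)  # calibrator output at each keypoint
    print(outputs)
    # With monotonicity='increasing' the deltas are constrained non-negative,
    # so each column of `outputs` is non-decreasing even when the raw kernel
    # rows are not sorted.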

Unable to import in Jupyter Notebook

I used pip3 to install on an Ubuntu machine. It seemed to install fine (I tried both globally and in a specific env), but I am unable to import tensorflow_lattice as tfl in a Jupyter notebook. I get the error:

No module named 'tensorflow_lattice'

Any ideas where I went wrong? Thanks!
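One generic thing worth checking (plain Python, nothing TFL-specific): whether the notebook kernel runs the same interpreter that pip3 installed into.

    import sys
    print(sys.executable)  # the Python binary this kernel is using
    # If it differs from the interpreter pip3 targets, installing with
    #   python -m pip install tensorflow-lattice
    # using that exact interpreter puts the package where the kernel looks.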

When to increase the lattice_sizes?

I have been using TF Lattice for three weeks, but I am not quite sure when to increase the lattice_sizes in a Lattice layer.

I looked through the official TF Lattice tutorials, but I am still somewhat lost about how to decide the size of the lattice.

May I ask you for a suggestion on how to decide the size of the lattice?

I do understand that a 'd'-dimensional lattice is required for 'd' features.

Again, thank you for this amazing library :)
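For context (a property of the model, not an official sizing rule): each entry of lattice_sizes is the number of grid vertices along that feature's axis, so a larger size gives that feature's effect more flexibility, and the parameter count is the product of all sizes, which grows quickly:

    import numpy as np

    # Parameter count of a lattice is the product of lattice_sizes.
    for sizes in ([2, 2, 2], [3, 3, 3], [2, 2, 2, 2, 2], [5, 5]):
        print(sizes, '->', np.prod(sizes), 'parameters')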

Error after changing lattice size

Hi, I'm using CannedRegressor with CalibratedLatticeConfig. Everything is fine when the lattice_size in FeatureConfig is set to 2, but after I change lattice_size to 4 I get the following error:
Traceback (most recent call last):
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1812, in _create_c_op
c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid value in tensor used for shape: -2147483648

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/data1/luming.slm/cardinality/lattice_regression/lattice_regression.py", line 457, in
estimator.train(input_fn=train_input_fn)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 349, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1175, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1204, in _train_model_default
self.config)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1163, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_lattice/python/estimators.py", line 729, in calibrated_lattice_model_fn
dtype=dtype)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_lattice/python/estimators.py", line 652, in _calibrated_lattice_model_fn
model = premade.CalibratedLattice(model_config=model_config, dtype=dtype)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_lattice/python/premade.py", line 235, in init
dtype=dtype)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_lattice/python/premade_lib.py", line 579, in build_lattice_layer
lattice_input)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 757, in call
self._maybe_build(inputs)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 2098, in _maybe_build
self.build(input_shapes)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_lattice/python/lattice_layer.py", line 399, in build
dtype=self.dtype)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 448, in add_weight
caching_device=caching_device)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/training/tracking/base.py", line 750, in _add_variable_with_custom_getter
**kwargs_for_getter)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 145, in make_variable
shape=variable_shape if variable_shape else None)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 260, in call
return cls._variable_v1_call(*args, **kwargs)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 221, in _variable_v1_call
shape=shape)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 199, in
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 2597, in default_variable_creator
shape=shape)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 264, in call
return super(VariableMetaclass, cls).call(*args, **kwargs)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 1518, in init
distribute_strategy=distribute_strategy)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 1651, in _init_from_args
initial_value() if init_from_fn else initial_value,
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_lattice/python/lattice_layer.py", line 711, in call
dtype=dtype)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_lattice/python/lattice_lib.py", line 492, in linear_initializer
weights = batch_outer_operation(one_d_weights, operation=tf.add)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow_lattice/python/lattice_lib.py", line 333, in batch_outer_operation
result = tf.reshape(result, shape=new_shape)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 195, in reshape
result = gen_array_ops.reshape(tensor, shape, name)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 8234, in reshape
"Reshape", tensor=tensor, shape=shape, name=name)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 744, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3485, in _create_op_internal
op_def=op_def)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1975, in init
control_input_ops, op_def)
File "/data1/luming.slm/anaconda3/envs/tfl/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1815, in _create_c_op
raise ValueError(str(e))
ValueError: Invalid value in tensor used for shape: -2147483648

It's really strange. I also get the same error message when I set the lattice size to 2 but add several more common features (ones that have no monotonic constraints). Any idea about this? Thanks.
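One plausible reading of the error value, offered as a guess: -2147483648 is INT32_MIN, which is exactly what a parameter count of 2**31 wraps to in 32-bit shape arithmetic, and the lattice parameter count is the product of all lattice sizes. A quick arithmetic sketch with hypothetical feature counts:

    import numpy as np

    # e.g. 13 features of size 4 plus 5 of size 2 gives 4**13 * 2**5 = 2**31
    # parameters, which wraps to INT32_MIN as a 32-bit shape value.
    n_params = np.prod([4] * 13 + [2] * 5, dtype=np.int64)
    print(n_params)                    # 2147483648
    print(n_params.astype(np.int32))   # -2147483648, the value in the error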

Problem with tensorflow-lattice execution

After running

import tensorflow as tf
import tensorflow_lattice as tfl

in a Jupyter notebook I get the following error:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-4-4c496d927cec> in <module>()
     19 from sklearn.metrics import mean_absolute_error as mae
     20 import tensorflow as tf
---> 21 import tensorflow_lattice as tfl
     22 
     23 

/usr/local/lib/python3.5/dist-packages/tensorflow_lattice/__init__.py in <module>()
     31 # Import all modules here, but only import functions and classes that are
     32 # more likely to be used directly by users.
---> 33 from tensorflow_lattice.python.estimators.calibrated import input_calibration_layer_from_hparams
     34 from tensorflow_lattice.python.estimators.calibrated_etl import calibrated_etl_classifier
     35 from tensorflow_lattice.python.estimators.calibrated_etl import calibrated_etl_regressor

/usr/local/lib/python3.5/dist-packages/tensorflow_lattice/python/estimators/calibrated.py in <module>()
     21 from tensorflow_lattice.python.estimators import hparams as tf_lattice_hparams
     22 from tensorflow_lattice.python.lib import keypoints_initialization
---> 23 from tensorflow_lattice.python.lib import pwl_calibration_layers
     24 from tensorflow_lattice.python.lib import tools
     25 

/usr/local/lib/python3.5/dist-packages/tensorflow_lattice/python/lib/pwl_calibration_layers.py in <module>()
     29 from tensorflow_lattice.python.lib import regularizers
     30 from tensorflow_lattice.python.lib import tools
---> 31 from tensorflow_lattice.python.ops import pwl_calibration_ops
     32 
     33 from tensorflow.python.framework import constant_op

/usr/local/lib/python3.5/dist-packages/tensorflow_lattice/python/ops/pwl_calibration_ops.py in <module>()
     27 """
     28 # pylint: disable=unused-import
---> 29 from tensorflow_lattice.python.ops.gen_monotonic_projection import monotonic_projection
     30 from tensorflow_lattice.python.ops.gen_pwl_indexing_calibrator import pwl_indexing_calibrator
     31 from tensorflow_lattice.python.ops.gen_pwl_indexing_calibrator import pwl_indexing_calibrator_gradient

/usr/local/lib/python3.5/dist-packages/tensorflow_lattice/python/ops/gen_monotonic_projection.py in <module>()
      7 import collections as _collections
      8 
----> 9 from tensorflow.python.eager import execute as _execute
     10 from tensorflow.python.eager import context as _context
     11 from tensorflow.python.eager import core as _core

ImportError: No module named 'tensorflow.python.eager'

Installing tf-nightly using pip install tf-nightly does not help.

Use Go binding to serve lattice

I was trying to use TF's Go binding to serve a Lattice model. My model-loading Go code looks like:

model, err = tf.LoadSavedModel(localModelDir, []string{"serve"}, nil)

and I got the following error:

Op type not registered 'PwlIndexingCalibrator' in binary running on myapp-5b4666d5f6-nmq7x. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

I realize there is a solution for TF Serving in #19.

I'm wondering if there is a similar solution for the Go binding?

ability to extract the fitted polynomial/spline

I was reading the paper which the implementation of this library is based on. Quoting from it:

In this paper we extend lattice regression, which is a spline method with fixed knots on a regular grid and a linear kernel (Garcia et al., 2012), to be monotonic

My question is whether it is possible to obtain the polynomial explicitly in order to use in a gradient-based optimization procedure. I can't seem to find any such interface that allows this to happen.

Background: I have a number of calibrated lattice regressors and I need to jointly optimize them for the variable which the models are monotonic for.
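Not an answer to the closed-form question, but possibly useful context: the fitted function is multilinear interpolation over grid vertex values, and those vertex values are ordinary Keras weights, so they can be pulled out, and gradients of the output are available through TensorFlow itself. A sketch:

    import tensorflow as tf
    import tensorflow_lattice as tfl

    lattice = tfl.layers.Lattice(lattice_sizes=[2, 2])
    _ = lattice(tf.constant([[0.5, 0.5]]))    # build the layer
    vertex_values = lattice.get_weights()[0]  # shape (4, 1): value per vertex
    print(vertex_values)

    # Gradients w.r.t. the inputs, for gradient-based optimization:
    x = tf.Variable([[0.5, 0.5]])
    with tf.GradientTape() as tape:
        y = lattice(x)
    print(tape.gradient(y, x))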

Multi-unit calibrator vs separation of calibrators gives different results

We tried two ways of using calibrators that should be theoretically equivalent but give us different results. Method 1 (the multi-unit calibrator) gives us better results than Method 2, even though we use the same parameters when we separate the calibrators and combine them. Is there some issue with separating the calibrators?
Method 1

    feature_input = [
        tf.compat.v1.layers.flatten(group_features[name])
        for name in ['1', '2']
    ]
    feature_layer = tf.concat(feature_input, 1)

    with tf.compat.v1.variable_scope('lattice1_scope'):

      feature_calib_layer = tfl.layers.PWLCalibration(
          input_keypoints=np.linspace(
              input_min,
              input_max,
              num=num_key_points,
              dtype=np.float32),
          units=len(feature_input),
          clamp_min=True,
          clamp_max=True,
          output_min=output_min,
          output_max=output_max,
          monotonicity='increasing',
          name='feature_calib'
      )(feature_layer)

      feature_lattice = tfl.layers.Lattice(
          lattice_sizes=[2] * len(feature_input),
          monotonicities=['increasing'] * len(feature_input),
          output_min=0.0,
          output_max=1.0,
          name='feature_lattice'
      )(feature_calib_layer)

Method 2

    feature1_input = [
        tf.compat.v1.layers.flatten(group_features[name])
        for name in ['1']
    ]
    feature1_layer = tf.concat(feature1_input, 1)
    feature2_input = [
        tf.compat.v1.layers.flatten(group_features[name])
        for name in ['2']
    ]
    feature2_layer = tf.concat(feature2_input, 1)

    with tf.compat.v1.variable_scope('lattice1_scope'):
      feature_lattice_input = []
      feature1_calib_layer = tfl.layers.PWLCalibration(
          input_keypoints=np.linspace(
              input_min,
              input_max,
              num=num_keypoints,
              dtype=np.float32),
          units=len(feature1_input),
          clamp_min=True,
          clamp_max=True,
          output_min=output_min,
          output_max=output_max,
          monotonicity='increasing',
          name='feature1_calib'
      )(feature1_layer)
      feature2_calib_layer = tfl.layers.PWLCalibration(
          input_keypoints=np.linspace(
              input_min,
              input_max,
              num=num_keypoints,
              dtype=np.float32),
          units=len(feature2_input),
          clamp_min=True,
          clamp_max=True,
          output_min=output_min,
          output_max=output_max,
          monotonicity='increasing',
          name='feature2_calib'
      )(feature2_layer)

      feature_lattice_input.append(feature1_calib_layer)
      feature_lattice_input.append(feature2_calib_layer)
      feature_lattice = tfl.layers.Lattice(
          lattice_sizes=[2] * (len(feature1_input) + len(feature2_input)),
          monotonicities=['increasing'] * (len(feature1_input) + len(feature2_input)),
          output_min=0.0,
          output_max=1.0,
          name='feature_lattice'
      )(keras.layers.concatenate(feature_lattice_input, axis=1))

Error in setting num_keypoints to 0

I am playing with the original lattice model (calibrated_lattice_classifier) included in uci_census.py.
The program crashes when setting the num_keypoints to 0, with the following error:

ValueError                                Traceback (most recent call last)
<ipython-input-55-3f6eefdaee94> in <module>()
     48                 print("start_time: " + str(start_time))
     49 
---> 50                 train_evaluation, test_evaluation = main(estimator)
     51 
     52                 elapsed_time = time.time() - start_time

<ipython-input-29-390452f3449e> in main(estimator)
     26 def main(estimator):
     27     if FLAGS.run == "train":
---> 28         train_evaluation, test_evaluation = train(estimator)
     29 
     30     elif FLAGS.run == "evaluate":

<ipython-input-28-550555971d85> in train(estimator)
     40             epochs_trained += epochs
     41             estimator.train(input_fn=get_train_input_fn(
---> 42                 batch_size=FLAGS.batch_size, num_epochs=epochs, shuffle=True
     43             ))
     44             print("Trained for {} epochs, total so far {}:".format(

/usr/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.pyc in train(self, input_fn, hooks, steps, max_steps, saving_listeners)
    312 
    313     saving_listeners = _check_listeners_type(saving_listeners)
--> 314     loss = self._train_model(input_fn, hooks, saving_listeners)
    315     logging.info('Loss for final step: %s.', loss)
    316     return self

/usr/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.pyc in _train_model(self, input_fn, hooks, saving_listeners)
    741       worker_hooks.extend(input_hooks)
    742       estimator_spec = self._call_model_fn(
--> 743           features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
    744       # Check if the user created a loss summary, and add one if they didn't.
    745       # We assume here that the summary is called 'loss'. If it is not, we will

/usr/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.pyc in _call_model_fn(self, features, labels, mode, config)
    723     if 'config' in model_fn_args:
    724       kwargs['config'] = config
--> 725     model_fn_results = self._model_fn(features=features, **kwargs)
    726 
    727     if not isinstance(model_fn_results, model_fn_lib.EstimatorSpec):

/usr/local/lib/python2.7/site-packages/tensorflow_lattice/python/estimators/calibrated.pyc in model_fn(features, labels, mode, config)
    549                    keypoints_initializers=kp_init_explicit,
    550                    name=_SCOPE_INPUT_CALIBRATION,
--> 551                    dtype=self._dtype))
    552           (total_prediction, prediction_projections,
    553            prediction_regularization) = self.prediction_builder(

/usr/local/lib/python2.7/site-packages/tensorflow_lattice/python/estimators/calibrated.pyc in input_calibration_layer_from_hparams(columns_to_tensors, feature_columns, hparams, quantiles_dir, keypoints_initializers, name, dtype)
    289         l2_reg=calibration_l2_regs,
    290         l1_laplacian_reg=calibration_l1_laplacian_regs,
--> 291         l2_laplacian_reg=calibration_l2_laplacian_regs)
    292 
    293 

/usr/local/lib/python2.7/site-packages/tensorflow_lattice/python/lib/pwl_calibration_layers.pyc in input_calibration_layer(columns_to_tensors, num_keypoints, feature_columns, keypoints_initializers, keypoints_initializer_fns, bound, monotonic, missing_input_values, missing_output_values, l1_reg, l2_reg, l1_laplacian_reg, l2_laplacian_reg, dtype)
    409     monotonic = tools.cast_to_dict(monotonic, feature_names, 'monotonic')
    410 #    import ipdb; ipdb.set_trace()
--> 411 #    keypoints_initializers = tools.cast_to_dict(
    412 #        keypoints_initializers, feature_names, 'keypoints_initializers')
    413     keypoints_initializers = {}

/usr/local/lib/python2.7/site-packages/tensorflow_lattice/python/lib/tools.pyc in cast_to_dict(v, feature_names, param_name)
     75           raise ValueError(
     76               'Dict given for %s does not contain definition for feature '
---> 77               '"%s"' % (param_name, feature_name))
     78     return v_copy
     79   return {feature_name: v for feature_name in feature_names}

ValueError: Dict given for keypoints_initializers does not contain definition for feature "age"

The problem seems to be related to the line keypoints_initializer = tools.cast_to_dict(keypoints_initializer, feature_names, 'keypoints_initializer') in the pwl_calibration_layers.py file.
When num_keypoints is 0, the keypoints_initializer dict is empty, which triggers the error in the cast_to_dict function.
After I manually commented out that line and set the keypoints_initializer dict to empty, the program ran successfully.
Any help is much appreciated! ❤️

Linear Embedding Layer and Meta Learning?

First, thank you for sharing this amazing library!

I have two questions regarding TensorFlow Lattice.

After reading the "Deep Lattice Networks and Partial Monotonic Functions" paper, I am trying to implement the deep lattice network the paper introduces, but I wonder whether tfl.layers.Linear is equivalent to the "Linear Embedding Layer" mentioned in the paper.

My second question is whether this network can be used in meta learning as well.

Thank you 😄
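For reference on the first question: tfl.layers.Linear does support per-input monotonicity constraints; whether that makes it equivalent to the paper's linear embedding layer is exactly what's being asked. A minimal sketch:

    import tensorflow as tf
    import tensorflow_lattice as tfl

    # A monotone linear layer over 3 inputs (coefficients constrained >= 0).
    linear = tfl.layers.Linear(num_input_dims=3,
                               monotonicities=['increasing'] * 3)
    print(linear(tf.constant([[0.1, 0.2, 0.3]])))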

error when running the example.

Thank you very much for the docs and code. I get this error when running the example uci_census.py:
python uci_census.py --run=train --model_type=calibrated_linear --output_dir=. --quantiles_dir=. --train_epochs=600 --batch_size=1000 --hparams=learning_rate=1e-2

And I got this error:

tensorflow.python.framework.errors_impl.NotFoundError: ./quantiles/age.txt

I made a quantiles dir, but I can't figure out where age.txt is supposed to come from.

I placed adult.data/test in the same dir as the .py file and changed the default train/test paths.

Thank you very much!
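For context, and hedged because the exact flag should be checked against the script's --help: the example expects per-feature quantiles files (age.txt etc.) to be precomputed under --quantiles_dir, and uci_census.py has a mode to generate them before training, something like:

    python uci_census.py --run=save_quantiles --quantiles_dir=.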

uci_census.py with calibrated_dnn fails

Hi, first of all I would like to say that TF Lattice is amazing, thanks!

I've tried to run uci_census.py, and it works perfectly with all model types except 'calibrated_dnn'.

The error happens at line 420:

(output, _, _, regularization) = tfl.input_calibration_layer_from_hparams(
    features, feature_columns, hparams, quantiles_dir)

I inspected the function tfl.input_calibration_layer_from_hparams, and it expects only columns_to_tensors and hparams, but line 420 provides three parameters (features, feature_columns, hparams).

I used pip, which by default installs tensorflow==1.11 and tensorflow_lattice==0.98.

Thanks

Train calibrator/lattice jointly with DNN with monotonicity

This may be related to the closed issue #23 but may need an update.
I wonder if the calibration or lattice layers can be used as the upper layers (closer to the output), jointly with a DNN, to ensure monotonicity. At the end of this tutorial, under "Other potential use cases...", a few possibilities for integrating calibration or lattice layers with other types of networks are mentioned. Are there any examples out there with guaranteed monotonicity?
Alternatively, is it possible to use a lattice to somehow constrain or supervise the training of traditional DNN layers such that the outputs are monotonic? This would be in contrast to adding additional post-processing layers at the output. Thanks for your reply.
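Not an official pattern, but one arrangement that gives a guaranteed-monotone output with respect to a chosen feature: route that feature around the DNN, squash the DNN's embedding into the lattice's input range, and constrain the lattice to be increasing only in the routed input. A sketch with made-up shapes:

    import numpy as np
    import tensorflow as tf
    import tensorflow_lattice as tfl

    x_mono = tf.keras.layers.Input(shape=(1,))   # feature requiring monotonicity
    x_free = tf.keras.layers.Input(shape=(8,))   # unconstrained features
    h = tf.keras.layers.Dense(16, activation='relu')(x_free)
    embedding = tf.keras.layers.Dense(1, activation='sigmoid')(h)  # in [0, 1]
    calibrated = tfl.layers.PWLCalibration(
        input_keypoints=np.linspace(0.0, 1.0, num=10, dtype=np.float32),
        monotonicity='increasing', output_min=0.0, output_max=1.0)(x_mono)
    output = tfl.layers.Lattice(
        lattice_sizes=[2, 2],
        monotonicities=['increasing', 'none'],   # monotone only in x_mono
        output_min=0.0, output_max=1.0)([calibrated, embedding])
    model = tf.keras.Model(inputs=[x_mono, x_free], outputs=output)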

calibration of lattice output

We need to train layers of lattices and want to calibrate the outputs of the lattices.
I set it up with bounds on the lattice output ([0, 1]), and the output calibration is then set up over that range with uniform keypoints.

The issue is that the outputs of the lattice drift outside the bounds while training, and the projection ops clip the output. Eventually the output of the lattice covers a very small range of values and the output calibration isn't useful.

I tried an unbounded lattice output passed through a sigmoid to get it into the [0, 1] range, which seems to work better, but that doesn't use the full [0, 1] range of the calibration unless I carefully scale the lattice output.

I finally had better success by modifying the projection ops in lattice_layer to linearly scale the outputs to [min, max] instead of clipping the output.

Unable to execute example program

I have installed tensorflow-lattice using pip 9.0.1 with Python 3.5.2 on Ubuntu 16.04 LTS. The TensorFlow version is 1.3.1. For testing purposes I tried to execute the example program

import tensorflow as tf
import tensorflow_lattice as tfl

x = tf.placeholder(tf.float32, shape=(None, 2))
(y, _, _, _) = tfl.lattice_layer(x, lattice_sizes=(2, 2))

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  print(sess.run(y, feed_dict={x: [[0.0, 0.0]]}))

which resulted in an error. Here is the stack trace from the Jupyter notebook:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-7-444d1bededea> in <module>()
----> 1 (y, _, _, _) = tfl.lattice_layer(x, lattice_sizes=(2, 2))

/usr/local/lib/python3.5/dist-packages/tensorflow_lattice/python/lib/lattice_layers.py in lattice_layer(input_tensor, lattice_sizes, is_monotone, output_dim, interpolation_type, lattice_initializer, l1_reg, l2_reg, l1_torsion_reg, l2_torsion_reg, l1_laplacian_reg, l2_laplacian_reg)
    193   parameter_tensor = variable_scope.get_variable(
    194       interpolation_type + '_lattice_parameters',
--> 195       initializer=lattice_initializer)
    196 
    197   output_tensor = lattice_ops.lattice(

/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in get_variable(name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
   1063       collections=collections, caching_device=caching_device,
   1064       partitioner=partitioner, validate_shape=validate_shape,
-> 1065       use_resource=use_resource, custom_getter=custom_getter)
   1066 get_variable_or_local_docstring = (
   1067     """%s

/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in get_variable(self, var_store, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
    960           collections=collections, caching_device=caching_device,
    961           partitioner=partitioner, validate_shape=validate_shape,
--> 962           use_resource=use_resource, custom_getter=custom_getter)
    963 
    964   def _get_partitioned_variable(self,

/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in get_variable(self, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
    365           reuse=reuse, trainable=trainable, collections=collections,
    366           caching_device=caching_device, partitioner=partitioner,
--> 367           validate_shape=validate_shape, use_resource=use_resource)
    368 
    369   def _get_partitioned_variable(

/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in _true_getter(name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource)
    350           trainable=trainable, collections=collections,
    351           caching_device=caching_device, validate_shape=validate_shape,
--> 352           use_resource=use_resource)
    353 
    354     if custom_getter is not None:

/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in _get_single_variable(self, name, shape, dtype, initializer, regularizer, partition_info, reuse, trainable, collections, caching_device, validate_shape, use_resource)
    662                          " Did you mean to set reuse=True in VarScope? "
    663                          "Originally defined at:\n\n%s" % (
--> 664                              name, "".join(traceback.format_list(tb))))
    665       found_var = self._vars[name]
    666       if not shape.is_compatible_with(found_var.get_shape()):

ValueError: Variable hypercube_lattice_parameters already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

  File "/usr/local/lib/python3.5/dist-packages/tensorflow_lattice/python/lib/lattice_layers.py", line 195, in lattice_layer
    initializer=lattice_initializer)
  File "<ipython-input-1-e860b057ec64>", line 5, in <module>
    (y, _, _, _) = tfl.lattice_layer(x, lattice_sizes=(2, 2))
  File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2847, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
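A possible explanation, since this trace is the symptom of defining the same variable twice in one graph: re-running the notebook cell re-creates hypercube_lattice_parameters in the same default graph. In TF 1.x, resetting the graph before re-running avoids the reuse error:

    import tensorflow as tf

    # Clears the default graph so the lattice variables are created fresh.
    tf.reset_default_graph()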

Saving and loading model via keras interface

Hi,

I have constructed a simple sequential model via TFL. Although saving via model.save('file_name.h5') works fine, it is impossible to load it with keras.models.load_model('h.h5'), which yields "ValueError: Unknown layer: ParallelCombination".

example:

model = keras.models.Sequential()
model.add(combined_calibrators)
model.add(tf.keras.layers.RepeatVector(2))
model.add(lattice)
model.compile(loss=keras.losses.mean_squared_error,
              optimizer=keras.optimizers.Adam(learning_rate=0.001))
model.save('h.h5')

m2 = keras.models.load_model('h.h5')
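A generic Keras workaround worth trying (the exact set of classes to register depends on the layers used): load_model needs custom_objects for any non-builtin layer, including TFL's ParallelCombination.

    import tensorflow as tf
    import tensorflow_lattice as tfl

    m2 = tf.keras.models.load_model(
        'h.h5',
        custom_objects={
            'ParallelCombination': tfl.layers.ParallelCombination,
            'PWLCalibration': tfl.layers.PWLCalibration,
            'Lattice': tfl.layers.Lattice,
        })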

Lattice Kernel Tensors not Monotonic

Referencing the issue here - #49

What is encoded in the lattice kernel? The lattice kernels for most of my signals look monotonic, and it appears as though tf.cumsum is not needed. However, for one set of signals, the lattice kernel looks like this:

tensor_name:  groupwise_dnn_v2/group_score/another_sample_layer/lattice_kernel
array([[0.00000],
       [0.00000],
       [0.00000],
       [0.63704],
       [0.00000],
       [0.42078],
       [0.99970],
       [1.00000],
       [0.45677],
       [0.59488],
       [1.00000],
       [1.00000],
       [0.66063],
       [0.99998],
       [1.00000],
       [1.00000],
       [0.00000],
       [0.00000],
       [0.00000],
       [0.63867],
       [0.00000],
       [0.42078],
       [0.99982],
       [1.00000],
       [0.45697],
       [0.61441],
       [1.00000],
       [1.00000],
       [0.67717],
       [0.99998],
       [1.00000],
       [1.00000],
       [0.00000],
       [0.00000],
       [0.00000],
       [0.63715],
       [0.00000],
       [0.42114],
       [1.00000],
       [1.00000],
       [0.45767],
       [0.59587],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000],
       [0.00000],
       [0.00000],
       [0.00000],
       [0.63878],
       [0.00000],
       [0.42114],
       [1.00000],
       [1.00000],
       [0.45767],
       [0.61565],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000],
       [1.00000]], dtype=float32)

My code for the lattice is:

    tfl.layers.Lattice(
        lattice_sizes=[2] * len(sample_input),
        monotonicities=['increasing'] * len(sample_input),
        output_min=0.0,
        output_max=1.0,
        name='sample_lattice'
    )(sample_calib_layer)
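One piece of context that may explain the printout: the lattice kernel stores one value per grid vertex, flattened, and monotonicity is enforced along each feature's axis of the grid rather than along the flat index, so a column that isn't sorted top-to-bottom is not by itself a violation. A sketch of the per-axis check, assuming C-order flattening of a [2]*6 grid (64 rows, matching the dump above):

    import numpy as np

    def is_monotone_per_axis(flat_kernel, lattice_sizes):
        # Reshape the flat (num_vertices,) kernel back onto the grid and
        # compare vertex values at index 0 vs index 1 along every axis.
        grid = np.asarray(flat_kernel).reshape(lattice_sizes)
        for axis in range(grid.ndim):
            lo, hi = np.split(grid, 2, axis=axis)
            if not (hi >= lo).all():
                return False
        return True

    # e.g. is_monotone_per_axis(kernel[:, 0], [2] * 6) for the dump above.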

Grammar error

lattice/docs/tutorials/shape_constraints.ipynb

under Shape Constraints in Diminishing Returns

It should read:
Diminishing returns means that the marginal gain of increasing a certain feature value will decrease as we increase the value.

Not this:
Diminishing returns means that the marginal gain of increasing certain a feature value will decrease as we increase the value.

It's not a big deal, but I was really confused for a minute trying to understand what it meant.
Also, is there a better place to bring up minor things like this?

Cannot save keras model with tensorflow lattice layers

Saving a model that has Keras TFL layers fails with the following error:

/usr/local/lib/python3.6/dist-packages/h5py/_hl/group.py in __setitem__(self, name, obj)
    371
    372         if isinstance(obj, HLObject):
--> 373             h5o.link(obj.id, self.id, name, lcpl=lcpl, lapl=self._lapl)
    374
    375         elif isinstance(obj, SoftLink):

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/h5o.pyx in h5py.h5o.link()

RuntimeError: Unable to create link (name already exists)

The error can be reproduced in the Colab example here:
https://colab.research.google.com/drive/1tknejj9CtM27bHGktsZSTnvLvH3eCgG8
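One possible workaround (generic Keras behavior, not a TFL API): the TensorFlow SavedModel format bypasses h5py entirely, so save_format='tf' may sidestep the HDF5 link error. A self-contained sketch:

    import numpy as np
    import tensorflow as tf
    import tensorflow_lattice as tfl

    model = tf.keras.Sequential([
        tfl.layers.PWLCalibration(
            input_keypoints=np.linspace(0.0, 1.0, num=5, dtype=np.float32))
    ])
    model.build(input_shape=(None, 1))
    model.save('/tmp/tfl_model', save_format='tf')  # SavedModel, no h5py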

calibrated dnn model with monotonicity

Unfortunately, the only example code that uses the calibrated DNN model doesn't have any monotonicity constraint defined for any of the features, so the code ignores the projections returned by input_calibration_layer_from_hparams here. I'm struggling to figure out how to use the projection ops in a scenario where the modeled function is supposed to be at least partially monotonic with regard to one feature. The only relevant documentation I could find was the comment on the projection ops here. However, I am quite puzzled by what is meant by

that must be applied at each step (or every so many steps)

I would be glad if someone could clarify how the projection ops should be applied within the net to ensure partial monotonicity.

Thanks in advance.
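A sketch of the usual TF 1.x projected-gradient pattern, under the assumption that `projections` is the list returned by input_calibration_layer_from_hparams and that `optimizer` and `loss` already exist; the projection is applied right after every gradient step:

    train_op = optimizer.minimize(loss)
    with tf.control_dependencies([train_op]):
        # Re-project the calibrator parameters onto the monotone feasible
        # set after each gradient step.
        train_and_project = tf.group(*projections)
    # Then run `train_and_project` (instead of `train_op`) at each step.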

TensorFlow 2.0 plan

Is there any plan for a TensorFlow 2.0 release?
I heard that eager mode will be the default in TF 2.0, but I'm not sure TF Lattice is ready for it.

Thank you!

Error in using the new version of lattice

Hi,

I was able to work with Lattice v0.9.9, but after upgrading the code to v2.0 a few days ago, I am not able to run my code anymore. I keep receiving the following error when I import tensorflow_lattice:
"AttributeError: module 'tensorflow.python.keras.utils.losses_utils' has no attribute 'ReductionV2'"

Can you please let me know how to resolve this issue?

Thanks.

experiment result explanation in tutorial

Hi, I'm looking at the shape constraints tutorial; the results for GBDT and DNN are listed in the tutorial as follows:

  • GBT Validation AUC: 0.7248634099960327
  • GBT Test AUC: 0.6980501413345337
  • DNN Validation AUC: 0.7518489956855774
  • DNN Testing AUC: 0.745200514793396

After the experiment results, the tutorial comments: "Note that even though the validation metric is better than the tree solution, the testing metric is much worse."

I don't understand where this comment comes from, since the DNN outperforms GBT in both validation AUC and testing AUC.

Change pypi package requirement to tensorflow>=

Hello,

When I install tensorflow_lattice (or tensorflow_lattice_gpu), it triggers a reinstall of TensorFlow, even though a newer version of TensorFlow is already installed. I haven't performed extensive testing; however, I can usually just uninstall that version of TF and reinstall a newer one. Is this hard version requirement necessary for compatibility?
Relaxing it would make life a little easier ;)

Thank you for your great work!! :)

monotonic calibration not working.

Trying to train a calibration for some signals to incorporate into TF Ranking.

The relevant code for the calibration is

    num_keypoints = 26
    kp_inits = tfl.uniform_keypoints_for_signal(
        num_keypoints=num_keypoints,
        input_min=0.0,
        input_max=1.0,
        output_min=0.0,
        output_max=1.0)

    # Define input layer.
    # First we just take first two features and combine them linearly.
    # Then we combine the output of this with the third feature.

    picked_input = [
        tf.layers.flatten(group_features[name])
        for name in ['36', '32', '35', '33', '38', '39']
    ]

    input_layer = tf.concat(picked_input, 1)
    cur_layer = tfl.calibration_layer(
        input_layer,
        num_keypoints=num_keypoints,
        keypoints_initializers=kp_inits,
        bound=True,
        monotonic=[1 for _ in range(6)],
        name="calibration")

    logits = tf.layers.dense(cur_layer[0], units=1, name='linear_layer',
                             activation="elu")

The learned model isn't monotonic; here are some of the calibrations it has learned:

tensor_name: group_score/pwl_calibration/signal_2_bound_max
1.0
tensor_name: group_score/pwl_calibration/signal_2_bound_min
0.0
tensor_name: group_score/pwl_calibration/signal_2_keypoints_inputs
[0. 0.04 0.08 0.12 0.16 0.19999999 0.24 0.28 0.32 0.35999998 0.39999998 0.44
 0.48 0.52 0.56 0.59999996 0.64 0.68 0.71999997 0.76 0.79999995 0.84 0.88
 0.91999996 0.96 1. ]
tensor_name: group_score/pwl_calibration/signal_2_keypoints_inputs/Adagrad
[0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1]
tensor_name: group_score/pwl_calibration/signal_2_keypoints_outputs
[ 0.5595347   0.00848915 -0.02862659  0.44848698  0.3586025   0.40749145
  0.35288998  0.38407487  0.38621387  0.47819927  0.6856117   0.60562074
  0.59473854  0.5449814   0.43999994  0.61086124  0.72133946  0.64237064
  0.66826046  0.7117335   0.6590987   0.662649    0.5869861   0.87017834
  0.7034538   1.2272371 ]
tensor_name: group_score/pwl_calibration/signal_2_keypoints_outputs/Adagrad
[4.567583   0.34649372 0.2375099  0.2630496  0.22509426 0.19528154
 0.1826403  0.19447225 0.1917207  0.21152268 0.17799918 0.18089467
 0.2096777  0.18614963 0.17668937 0.1913786  0.23144016 0.23107207
 0.2278506  0.21568052 0.26991028 0.24701497 0.287972   0.36811396
 0.62489855 2.2491465 ]

The bounds aren't respected either.
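A possibly relevant detail, hedged because it depends on the 0.x API's return convention: tfl.calibration_layer returns projection ops alongside the output, and the bound/monotonicity constraints only hold if those ops are actually run during training.

    # cur_layer is (calibrated_output, projection_ops, regularization) in the
    # 0.x API; only cur_layer[0] is used above, so the constraints are never
    # enforced unless the projection ops are run after each optimizer step:
    _, projection_ops, _ = cur_layer
    # sess.run(train_op); sess.run(projection_ops)   # inside the train loop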

convex by pieces function

Hi,

I wonder if you have functionality to specify that the target function should be piecewise convex and/or piecewise monotonic.

Thanks for writing this amazing piece of software :)
Matias
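For reference, the 2.x PWLCalibration layer exposes global convexity and monotonicity flags (per-piece constraints would require composing several calibrators); a minimal sketch:

    import numpy as np
    import tensorflow_lattice as tfl

    calib = tfl.layers.PWLCalibration(
        input_keypoints=np.linspace(0.0, 1.0, num=10, dtype=np.float32),
        monotonicity='increasing',
        convexity='convex')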

How to use multi CPU easily?

It is great to see such a good package. However, the speed is too slow.

I am using the Crystals ensemble model config with the tfl.estimators.CannedRegressor estimator. It seems only one CPU is being used, though I have 48 CPUs on the machine.

I have set up the dataset input functions with multiple threads:

feature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
    x=train_xs.loc[feature_analysis_index].copy(), 
    y=train_ys.loc[feature_analysis_index].copy(), 
    batch_size=128, 
    num_epochs=1, 
    shuffle=True, 
    queue_capacity=1000,
    num_threads=40)

prefitting_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
    x=train_xs.loc[prefitting_index].copy(), 
    y=train_ys.loc[prefitting_index].copy(), 
    batch_size=128, 
    num_epochs=1, 
    shuffle=True, 
    queue_capacity=1000,
    num_threads=40)

train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
    x=train_xs.loc[train_index].copy(), 
    y=train_ys.loc[train_index].copy(), 
    batch_size=128, 
    num_epochs=100, 
    shuffle=True, 
    queue_capacity=1000,
    num_threads=40)

The CPU usage is still only about 1.25 CPUs. Any suggestions?
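One generic knob to try (standard Estimator configuration, not TFL-specific): the session's thread pools are controlled by ConfigProto, which can be handed to the canned estimator through RunConfig.

    import tensorflow as tf

    session_config = tf.compat.v1.ConfigProto(
        intra_op_parallelism_threads=48,
        inter_op_parallelism_threads=48)
    run_config = tf.estimator.RunConfig(session_config=session_config)
    # estimator = tfl.estimators.CannedRegressor(..., config=run_config)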

unable to import tensorflow-lattice in Windows 7

Hi,
I have installed the package through Anaconda and then tried to import it through Spyder. This is the error I receive:

import tensorflow_lattice
Traceback (most recent call last):

  File "<ipython-input-21-fdc786e468a5>", line 1, in <module>
    import tensorflow_lattice

  File "C:\Users\lubao\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow_lattice\__init__.py", line 33, in <module>
    from tensorflow_lattice.python.estimators.calibrated import input_calibration_layer_from_hparams

  File "C:\Users\lubao\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow_lattice\python\estimators\calibrated.py", line 23, in <module>
    from tensorflow_lattice.python.lib import pwl_calibration_layers

  File "C:\Users\lubao\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow_lattice\python\lib\pwl_calibration_layers.py", line 31, in <module>
    from tensorflow_lattice.python.ops import pwl_calibration_ops

  File "C:\Users\lubao\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow_lattice\python\ops\pwl_calibration_ops.py", line 42, in <module>
    '../../cc/ops/_pwl_calibration_ops.so'))

  File "C:\Users\lubao\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\load_library.py", line 64, in load_op_library
    # that are no longer needed.

NotFoundError: C:\Users\lubao\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow_lattice\python\ops\..\..\cc\ops\_pwl_calibration_ops.so not found

Any help is greatly appreciated
Thanks,
Luba

Error in running the example of lattice models

I was running the uci_census.py file with the create_calibrated_lattice function.
When the parameter lattice_size is set to 2, the program runs successfully.
However, when the parameter is set to 3 (and presumably 4 or other values, which I have not tested yet), the program crashes with the following error:

2018-06-17 19:54:25.814852: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.2 AVX AVX2 FMA
Traceback (most recent call last):
  File "uci_census.py", line 616, in <module>
    run()
  File "uci_census.py", line 609, in run
    main(argv)
  File "uci_census.py", line 586, in main
    train(estimator)
  File "uci_census.py", line 550, in train
    batch_size=FLAGS.batch_size, num_epochs=epochs, shuffle=True))
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 314, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 812, in _train_model
    log_step_count_steps=self._config.log_step_count_steps) as mon_sess:
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 380, in MonitoredTrainingSession
    stop_grace_period_secs=stop_grace_period_secs)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 787, in __init__
    stop_grace_period_secs=stop_grace_period_secs)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 511, in __init__
    self._sess = _RecoverableSession(self._coordinated_creator)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 972, in __init__
    _WrappedSession.__init__(self, self._create_session())
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 977, in _create_session
    return self._sess_creator.create_session()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 668, in create_session
    self.tf_sess = self._session_creator.create_session()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 440, in create_session
    init_fn=self._scaffold.init_fn)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/session_manager.py", line 273, in prepare_session
    config=config)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/session_manager.py", line 205, in _restore_checkpoint
    saver.restore(sess, ckpt.model_checkpoint_path)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1686, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1128, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1344, in _do_run
    options, run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1363, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1,1594323] rhs shape= [1,8192]
	 [[Node: save/Assign_3 = Assign[T=DT_FLOAT, _class=["loc:@calibrated_tf_lattice_model/lattice/hypercube_lattice_parameters"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](calibrated_tf_lattice_model/lattice/calibrated_tf_lattice_model/lattice/hypercube_lattice_parameters/Adam_1, save/RestoreV2_3)]]

Caused by op u'save/Assign_3', defined at:
  File "uci_census.py", line 616, in <module>
    run()
  File "uci_census.py", line 609, in run
    main(argv)
  File "uci_census.py", line 586, in main
    train(estimator)
  File "uci_census.py", line 550, in train
    batch_size=FLAGS.batch_size, num_epochs=epochs, shuffle=True))
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 314, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 812, in _train_model
    log_step_count_steps=self._config.log_step_count_steps) as mon_sess:
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 380, in MonitoredTrainingSession
    stop_grace_period_secs=stop_grace_period_secs)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 787, in __init__
    stop_grace_period_secs=stop_grace_period_secs)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 511, in __init__
    self._sess = _RecoverableSession(self._coordinated_creator)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 972, in __init__
    _WrappedSession.__init__(self, self._create_session())
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 977, in _create_session
    return self._sess_creator.create_session()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 668, in create_session
    self.tf_sess = self._session_creator.create_session()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 431, in create_session
    self._scaffold.finalize()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 212, in finalize
    self._saver.build()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1248, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1284, in _build
    build_save=build_save, build_restore=build_restore)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 759, in _build_internal
    restore_sequentially, reshape)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 471, in _AddShardedRestoreOps
    name="restore_shard"))
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 440, in _AddRestoreOps
    assign_ops.append(saveable.restore(tensors, shapes))
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 160, in restore
    self.op.get_shape().is_fully_defined())
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/state_ops.py", line 276, in assign
    validate_shape=validate_shape)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_state_ops.py", line 59, in assign
    use_locking=use_locking, name=name)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1625, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1,1594323] rhs shape= [1,8192]
	 [[Node: save/Assign_3 = Assign[T=DT_FLOAT, _class=["loc:@calibrated_tf_lattice_model/lattice/hypercube_lattice_parameters"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](calibrated_tf_lattice_model/lattice/calibrated_tf_lattice_model/lattice/hypercube_lattice_parameters/Adam_1, save/RestoreV2_3)]]

IMO, the key line is: Assign requires shapes of both tensors to match. lhs shape= [1,1594323] rhs shape= [1,8192], in which 1594323 = 3^13 and 8192 = 2^13.
Here 13 is the number of features used in this example, and 3 is the lattice_size we defined.
Could anyone help me with this?
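One likely cause (an assumption, not confirmed in this thread): the model_dir still contains a checkpoint from an earlier run that used lattice_size=2, while the current graph was built with lattice_size=3. A hypercube lattice stores one parameter per vertex, so the parameter count is lattice_size ** num_features; a minimal sketch of the arithmetic:

# Hypothetical illustration: a hypercube lattice over num_features inputs
# has lattice_size ** num_features parameters (one per lattice vertex).
num_features = 13
print(2 ** num_features)  # 8192     -> shape in the restored checkpoint (lattice_size=2)
print(3 ** num_features)  # 1594323  -> shape in the freshly built graph (lattice_size=3)

If that is the case, pointing the estimator at a fresh model_dir should clear the error.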

_pwl_calibration_ops.so image not found

I just installed tensorflow-lattice on a MacOS but got the following importing error. Do you know what's happening here?

Python 3.6.8 |Anaconda, Inc.| (default, Dec 29 2018, 19:04:46)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.

import tensorflow_lattice
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
File "", line 1, in
File "/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow_lattice/init.py", line 33, in
from tensorflow_lattice.python.estimators.calibrated import input_calibration_layer_from_hparams
File "/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow_lattice/python/estimators/calibrated.py", line 28, in
from tensorflow_lattice.python.lib import pwl_calibration_layers
File "/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow_lattice/python/lib/pwl_calibration_layers.py", line 36, in
from tensorflow_lattice.python.ops import pwl_calibration_ops
File "/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow_lattice/python/ops/pwl_calibration_ops.py", line 45, in
'../../cc/ops/_pwl_calibration_ops.so'))
File "/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 61, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow_lattice/python/ops/../../cc/ops/_pwl_calibration_ops.so, 6): image not found
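A quick diagnostic sketch (not from the report): dlopen's "image not found" usually means either the .so itself was not shipped in the wheel or one of its dependent libraries cannot be found, so a first step is to confirm the file exists. The path below is copied from the traceback above:

# Check whether the compiled op library was actually installed.
import os
so_path = ('/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/'
           'tensorflow_lattice/cc/ops/_pwl_calibration_ops.so')
print(os.path.exists(so_path))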

Crossed column support in TF Lattice

Hi, I am relatively new to TensorFlow Lattice and find it really powerful. I have a couple of questions to which I could not find answers:

  • Does TF Lattice support crossed feature columns as input, or is there a timeline for when that would be possible?

  • Does TF Lattice support using the output of an already-trained model as input to its Keras layers? (See the sketch below.)

Thanks in advance.
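On the second question, a hedged sketch rather than an authoritative answer: since TFL layers are ordinary Keras layers, the output of an already-trained Keras model can be fed into them like any other tensor. Everything below (base_model, the shapes, the keypoint range) is hypothetical:

# Sketch: compose a (hypothetical) pretrained Keras model with a TFL layer.
import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

base_model = tf.keras.Sequential([tf.keras.layers.Dense(1)])  # stand-in for a trained model
base_model.trainable = False  # freeze the pretrained weights

inputs = tf.keras.layers.Input(shape=(4,))
score = base_model(inputs)  # shape (batch, 1)
calibrated = tfl.layers.PWLCalibration(
    input_keypoints=np.linspace(-3.0, 3.0, num=10))(score)
model = tf.keras.Model(inputs=inputs, outputs=calibrated)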

Cannot build with tensorflow installed by Conda

Hi, I would like to confirm whether the current tensorflow_lattice is compatible with tensorflow installed via conda.

Reproduce:

NAME="CentOS Linux"
VERSION="7 (Core)"

conda create -n test python=3.6
conda activate test
conda install tensorflow=1.11
pip install tensorflow-lattice

Then in the Python env:

import tensorflow
import tensorflow_lattice

While tensorflow can be imported without errors, importing tensorflow_lattice throws the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/anaconda3/envs/error/lib/python3.6/site-packages/tensorflow_lattice/__init__.py", line 33, in <module>
    from tensorflow_lattice.python.estimators.calibrated import input_calibration_layer_from_hparams
  File "/usr/local/anaconda3/envs/error/lib/python3.6/site-packages/tensorflow_lattice/python/estimators/calibrated.py", line 24, in <module>
    from tensorflow_lattice.python.lib import pwl_calibration_layers
  File "/usr/local/anaconda3/envs/error/lib/python3.6/site-packages/tensorflow_lattice/python/lib/pwl_calibration_layers.py", line 31, in <module>
    from tensorflow_lattice.python.ops import pwl_calibration_ops
  File "/usr/local/anaconda3/envs/error/lib/python3.6/site-packages/tensorflow_lattice/python/ops/pwl_calibration_ops.py", line 42, in <module>
    '../../cc/ops/_pwl_calibration_ops.so'))
  File "/usr/local/anaconda3/envs/error/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 56, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: /usr/local/anaconda3/envs/error/lib/python3.6/site-packages/tensorflow_lattice/python/ops/../../cc/ops/_pwl_calibration_ops.so: undefined symbol: _ZN10tensorflow8internal21CheckOpMessageBuilder9NewStringEv

In the same env, it works fine if I install tensorflow from pip instead of conda.
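For context, this undefined-symbol pattern is the classic C++ ABI mismatch: pip wheels of this era were built with the pre-C++11 ABI (-D_GLIBCXX_USE_CXX11_ABI=0), while conda builds TensorFlow with its own toolchain. A quick check, assuming tf.sysconfig is available in the installed version:

# Diagnostic sketch: inspect the ABI flag TensorFlow was compiled with.
# Prebuilt custom ops such as tensorflow-lattice expect
# -D_GLIBCXX_USE_CXX11_ABI=0; a different value here would explain the failure.
import tensorflow as tf
print(tf.sysconfig.get_compile_flags())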

Feature Request - Is there a way to enforce an S-shape constraint ?

First off - Thank you so much for open sourcing TensorFlow Lattice! It is great to be able to use lattice interpolation to encode domain knowledge about monotonicity and convexity. Looking through the current documentation, I see it is possible to enforce an increasing and concave graph for diminishing returns, but what if I want to enforce an S-curve (i.e. an increasing convex curve with an inflection point after which it turns concave)?
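One possible partial workaround, not a built-in constraint (a sketch using the PWLCalibration monotonicity and convexity arguments): sum an increasing convex calibrator and an increasing concave calibrator of the same input. Note the caveats: this makes single-inflection shapes expressible, but it neither forces an inflection to exist nor can it represent every S-curve.

# Sketch: permit (but not enforce) an S-shaped response by summing an
# increasing-convex and an increasing-concave piecewise-linear calibration.
import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

keypoints = np.linspace(0.0, 1.0, num=20)
x = tf.keras.layers.Input(shape=(1,))
convex_part = tfl.layers.PWLCalibration(
    input_keypoints=keypoints, monotonicity='increasing',
    convexity='convex')(x)
concave_part = tfl.layers.PWLCalibration(
    input_keypoints=keypoints, monotonicity='increasing',
    convexity='concave')(x)
output = tf.keras.layers.Add()([convex_part, concave_part])
model = tf.keras.Model(inputs=x, outputs=output)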

Using lattice in tf serving

Currently when trying to serve a lattice model with tf serving, I run into an op that isn't supported in the serving kernel

...
2018-03-01 23:45:59.827196: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: fail. Took 429225 microseconds.
2018-03-01 23:45:59.828892: E tensorflow_serving/util/retrier.cc:38] Loading servable: {name: default version: 1519947547} failed: Not found: Op type not registered 'PwlIndexingCalibrator' in binary running on dsexperiment-prod-0fe24ce9bf2552633. Make sure the Op and Kernel are registered in the binary running in this process.

Both tf and tf-serving on the system are at version 1.5.0. Are lattice models not supported with serving yet? If they are, could you point me to how to make it happen?
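The 'Op type not registered' error means the model server binary itself lacks the lattice kernels; custom ops typically have to be compiled into the serving binary, so a stock model server of that era cannot load the graph. As a sanity check, the export itself can still be loaded on the Python side, since importing tensorflow_lattice registers the ops (TF 1.x style; the export path is hypothetical):

# Load the exported SavedModel in Python, where importing
# tensorflow_lattice registers PwlIndexingCalibrator and friends.
import tensorflow as tf
import tensorflow_lattice  # noqa: F401 -- side effect: registers the custom ops

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], '/path/to/export_dir')  # hypothetical path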

Reproducibility

Hi there. A general question: how do we do reproducible lattice training? Do you have any examples?
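A minimal sketch of the usual starting point, assuming TF 2.x and single-threaded CPU execution (full determinism may require more, e.g. deterministic op implementations):

# Seed every source of randomness before building the model.
import random
import numpy as np
import tensorflow as tf

random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)  # seeds Keras weight initializers and tf.random ops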

Many-batches predictions

Hi,

When trying to get predictions from lattice models on more than one batch of data at once, errors are raised. Efficient many-batch prediction is a nice feature and is present in basic neural network Keras models;
find some examples in this colab.

As far as I can tell from looking at the API docs and source code, this is related to the inputs accepted by PWL calibration layers, but I wonder if there is an easy way around it.

In particular, this piece of code captures what I would like to get (and raises an error when called on batched_inputs):


import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl


class LatticeModel(tf.keras.Model):
    def __init__(self, nodes=[2, 2], nkeypoints=100):
        super(LatticeModel, self).__init__()
        # One PWL calibrator per feature, applied in parallel.
        self.combined_calibrators = tfl.layers.ParallelCombination()
        for ind in range(2):
            calibration_layer = tfl.layers.PWLCalibration(
                input_keypoints=np.linspace(0, 1, nkeypoints),
                output_min=0.0,
                # Lattice inputs live in [0, lattice_size - 1].
                output_max=nodes[ind] - 1.0)
            self.combined_calibrators.append(calibration_layer)
        self.lattice = tfl.layers.Lattice(lattice_sizes=nodes, interpolation="simplex")

    def call(self, x):
        rescaled = self.combined_calibrators(x)
        return self.lattice(rescaled)


# We define some input data: two features, batch of 100.
x1 = np.random.randn(100, 1).astype(np.float32)
x2 = np.random.randn(100, 1).astype(np.float32)
inputs = tf.concat([x1, x2], axis=-1)  # shape (100, 2)

# We initialize our model and feed it a batch of size 100.
model = LatticeModel()
model(inputs)

# Now we would like to efficiently predict the output of the lattice model
# on many batches of data at once (in this case 2 batches of 100 examples;
# note the last dim matches the 2 features). This call raises the error.
batched_inputs = np.random.randn(2, 100, 2).astype(np.float32)
model(batched_inputs)
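A possible workaround sketch (not from the original report): merge the extra leading dimensions into the batch dimension before calling the model, then restore them afterwards:

# Flatten (2, 100, 2) -> (200, 2), predict, then reshape back.
flat_inputs = tf.reshape(batched_inputs, [-1, 2])
flat_preds = model(flat_inputs)              # shape (200, 1)
preds = tf.reshape(flat_preds, [2, 100, 1])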

Thanks a lot!
Matías.

Is Crystals algorithm implemented in this repo?

I am trying to reproduce the results shown in Experiment 1 of the paper Fast and Flexible Monotonic Functions with Ensembles of Lattices.
The Crystals algorithm for feature selection looks very promising. Is there any way I can use it with this repo, or implement it using the current infrastructure?
Thank you very much for the help!
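For what it's worth, the README lists Crystals among the canned estimators, and in the 2.x API it appears to be selectable through the lattice ensemble config (a sketch; the argument values below are placeholders):

# Request the Crystals lattice-construction algorithm in a calibrated
# lattice ensemble config. With the canned estimators, Crystals also
# needs a prefitting stage (a prefitting_input_fn argument).
import tensorflow_lattice as tfl

model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
    lattices='crystals',  # Crystals algorithm chooses the feature subsets
    num_lattices=10,      # placeholder ensemble size
    lattice_rank=3,       # placeholder number of features per lattice
)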
