entity-network's Introduction

Recurrent Entity Networks

TensorFlow/TFLearn implementation of "Tracking the World State with Recurrent Entity Networks" (https://arxiv.org/abs/1612.03969) by Henaff, Weston, Szlam, Bordes, and LeCun.

Punchline

By building a set of disparate memory cells, each responsible for different concepts, entities, or other content, Recurrent Entity Networks (EntNets) are able to efficiently and robustly maintain a “world-state” - one that can be updated easily and effectively with the influx of new information.

Furthermore, one can either let the EntNet cell keys vary freely, or seed them with specific embeddings, thereby forcing the model to track a given set of entities/objects/locations and allowing for easy interpretation of the underlying decision-making process.

Results

Implementation results are as follows (graphs of training/validation loss will be added later). Some of the tasks are fairly computationally intensive, so it might take a while to get benchmark results.

Note that for efficiency, training stopped after validation accuracy passed a threshold of 95%. This differs from the method used in the paper, which runs each task for 200 epochs and reports the best model across 10 different runs. The number of runs, epochs to converge, and final train/validation/test accuracies (best on validation across runs) for this repository relative to the paper's results are as follows:

Italics in the table above indicate examples of overfitting. Note that these results come from single runs of the model, which is probably why the paper reports the best of multiple runs. If the overfitting continues, I'll look into ways to better regularize the network (via dropout, for example).

Bold denotes a failure to converge. I'm not sure why this happens, but I'll note that Jim Fleming reports the same sort of issue in his implementation.

Additionally, plots of the training/validation loss and accuracy over the course of training can be found in eval/qa_id, where id is the id of the task at hand. As an example, here is the plot for Task 1 - Single Supporting Fact:

[Plot: Task 1 (Single Supporting Fact) training/validation loss and accuracy]

Components

Entity Networks consist of three separate components:

  1. An Input Encoder, which takes the input sequence at a given time step and encodes it into a fixed-size vector representation.

  2. The Dynamic Memory (the core of the model), which keeps a disparate set of memory cells, each with a different vector key (the location) and a hidden-state memory (the content).

  3. The Output Module, which takes the hidden states and applies a series of transformations to generate the output $y$.

A breakdown of the components is as follows:

Input Encoder: Takes the input from the environment (i.e. a sentence from a story) and maps it to a fixed-size state vector $s_t$.

This repository (like the paper) utilizes a learned multiplicative mask, where each word embedding $e_i$ of the sentence is multiplied element-wise with a mask vector $f_i$ and the results are summed together: $s_t = \sum_i f_i \odot e_i$.

Alternatively, one could just as easily imagine an LSTM or CNN encoder to generate this initial input.
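
As a concrete illustration, here is a minimal NumPy sketch of the multiplicative-mask encoder (function and variable names are illustrative, not the repository's):

import numpy as np

def encode_sentence(embeddings, mask):
    """Multiplicative-mask encoder: s_t = sum_i f_i * e_i (element-wise).

    embeddings: [sent_len, embed_dim] word embeddings e_i for one sentence
    mask:       [sent_len, embed_dim] learned mask vectors f_i
    returns:    [embed_dim] fixed-size state vector s_t
    """
    # Weight each word embedding element-wise, then sum over the sentence.
    return (embeddings * mask).sum(axis=0)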

Dynamic Memory: The core of the model, consisting of a series of key vectors $w_j$ and memory (hidden state) vectors $h_j$.

The keys and state vectors function similarly to the program keys and program embeddings in the NPI/NTM - the keys represent locations, while the memories are content. Only the content (the memories) gets updated at inference time, with the influx of new information.

Furthermore, one can seed and fix the key vectors so that they reflect certain words/entities; the paper does this by fixing the key vectors to specific word embeddings and using a simple BoW state encoding. This repository currently only supports random key vector seeds.
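
For illustration, seeding the keys might look something like this (hypothetical names; again, the repository itself currently uses random seeds):

# Hypothetical: tie each memory cell's key to a tracked entity's embedding,
# then exclude the keys from the optimizer so they stay fixed.
entity_ids = [word2idx[w] for w in ["mary", "john", "kitchen", "garden"]]
keys = embedding_matrix[entity_ids]  # [num_blocks, embed_dim], held constant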

The Dynamic Memory updates given an input $s_t$ are as follows - note the similarity to the GRU update equations (a code sketch follows the list):

  • $g_j \leftarrow \sigma(s_t^\top h_j + s_t^\top w_j)$ - Gating function; determines how much memory $j$ should be affected by the given input.
  • $\tilde{h}_j \leftarrow \phi(U h_j + V w_j + W s_t)$ - Candidate state update; $U$, $V$, $W$ are model parameters shared across all memory cells. The model can be simplified by constraining $U$, $V$, $W$ to be zero or the identity.
  • $h_j \leftarrow h_j + g_j \odot \tilde{h}_j$ - Gated update, the element-wise product of $g_j$ with $\tilde{h}_j$; dictates how much the given memory should be updated.
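
A minimal NumPy sketch of one such update across all cells, assuming the equations above (names are illustrative):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_memories(h, w, s, U, V, W, phi=np.tanh):
    """One Dynamic Memory update across all cells.

    h: [num_blocks, dim] memory contents h_j
    w: [num_blocks, dim] memory keys w_j
    s: [dim]             encoded input s_t
    U, V, W: [dim, dim]  parameters shared across all memory cells
    """
    g = sigmoid(h @ s + w @ s)                   # gates g_j, one per cell
    h_tilde = phi(h @ U.T + w @ V.T + s @ W.T)   # candidate states
    h = h + g[:, None] * h_tilde                 # gated update
    # The paper additionally renormalizes each memory to unit norm,
    # which acts as a forgetting mechanism.
    return h / np.linalg.norm(h, axis=1, keepdims=True)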

Output Module: The model's interface; takes in the memories and a query vector $q$, and transforms them into the required output.

It functions like a 1-hop Memory Network (Sukhbaatar, Weston), building an attention weighting over the memories, then combining them and feeding the result through intermediate layers.

The actual updates are as follows (a code sketch follows the list):

  • $p_j = \mathrm{softmax}(q^\top h_j)$ - Attention weights over the states; since the memories are kept at unit norm, this scores each memory by cosine similarity with the query.
  • $u = \sum_j p_j h_j$ - Weighted sum of the hidden states.
  • $y = R\,\phi(q + H u)$ - $R$ and $H$ are trainable model parameters. As long as you can build some sort of loss using $y$, the entirety of the model is trainable via Backpropagation-Through-Time (BPTT).
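
A matching NumPy sketch of the output module, under the same assumptions:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def output_module(h, q, R, H, phi=np.tanh):
    """1-hop attention over the memories, then an output transform.

    h: [num_blocks, dim]  final memories h_j
    q: [dim]              query vector
    R: [vocab_size, dim], H: [dim, dim] - trainable parameters
    returns: [vocab_size] output logits y
    """
    p = softmax(h @ q)          # attention weights p_j over the memories
    u = p @ h                   # weighted sum of hidden states
    return R @ phi(q + H @ u)   # y = R phi(q + H u)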

Repository Structure

The directory is structured as follows:

  • model/ - Model definition code, including the definition of the Dynamic Memory Cell.

  • preprocessor/ - Preprocessing code to load and vectorize the bAbI Tasks.

  • tasks/ - Raw bAbI Task files.

  • run.py - Core script for training and evaluating the Recurrent Entity Network.

References

Big shout-out to Jim Fleming for his initial TensorFlow implementation - his Dynamic Memory Cell implementation specifically made things a lot easier.

Reference: Jim Fleming's EntNet Memory Cell


entity-network's Issues

A portion of the data will never be trained/evaluated

There is a small issue, but I think it won't change the reported results.

In lines 80 and 129, you do:
for start, end in zip(range(0, n, bsz), range(bsz, n, bsz)):

If n is not a multiple of the batch size, a small portion of the data at the end will never be trained on/processed.
I think the correct approach would be something like:

n_batches = n // bsz + (1 if n % bsz != 0 else 0)  # ceil(n / bsz), as an int
for step in range(n_batches):
    start = step * bsz
    end = min((step + 1) * bsz, n)  # clamp the final batch to n

Generalization ability

It seems that there is no regularization for the model in this implementation, which may limit its generalization ability.
Will this be an issue, or should regularization (e.g. L2 regularization) be added?

TypeError from reader.py

File "C:\AI\Code- EntNet\siddk\preprocessor\reader.py", line 23, in parse
vectorized_data.append(pickle.load(f))

TypeError: a bytes-like object is required, not 'str'
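
This error typically means the pickle file is being opened in text mode under Python 3. Assuming reader.py opens the file with a plain open(path), a likely fix is to read in binary mode:

# Open the vectorized data in binary mode so pickle receives bytes, not str.
# (data_path stands in for whatever path reader.py actually uses.)
with open(data_path, 'rb') as f:
    vectorized_data.append(pickle.load(f))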
