This is the source code repo for the ICLR paper *LambdaNet: Probabilistic Type Inference using Graph Neural Networks*. For an overview of how LambdaNet works, see our video from ICLR 2020.
After cloning this repo, here are the steps to reproduce our experimental results:
- Install all the dependencies (Java, sbt, TypeScript, etc.). See the "Using Docker" section below.
- To run the pre-trained model:
  - Download the model weights using this link (predicts user-defined types) or this link (only library types), unzip the file, and put the `models` directory under the project root.
  - To run the model in interactive mode, which outputs `(source code position, predicted type)` pairs for the specified files:
    - Under the project root, run `sbt "runMain lambdanet.TypeInferenceService"`.
    - After it finishes loading the model into memory, simply enter the directory path containing the TypeScript files.
    - Note that currently, LambdaNet only works with TypeScript files, so if you want to run it on JavaScript files, you will need to change the file extensions to `.ts`.
  - Alternatively, to run the model in batched mode, which outputs human-readable HTML files and accuracy statistics:
    - Download the `parsedRepos` file, unzip it, and put the directory under `<project root>/data`.
    - Check the file `src/main/scala/lambdanet/RunTrainedModel.scala` and change the parameters under the todo comments depending on which model you want to run and where your test TypeScript files are located.
    - Under the project root, use `sbt runTrained` to run the model.
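Since LambdaNet only picks up `.ts` files, a quick way to prepare a JavaScript project is to rename its sources first. A minimal shell sketch (the `demo/` directory and its contents are hypothetical, created here only for illustration):

```shell
# Create a throwaway directory with one JavaScript file (illustration only).
mkdir -p demo
echo 'const x = 1;' > demo/example.js

# Rename every .js file to .ts so LambdaNet will process it.
for f in demo/*.js; do
  mv "$f" "${f%.js}.ts"
done

ls demo
```

After the rename, entering `demo` at the interactive prompt (or pointing the batched mode at it) should make LambdaNet analyze the files.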
- To train LambdaNet from scratch:
  - Download the TypeScript projects used in our experiments.
  - Filter and prepare the TS projects into a serialization format.
  - Start the training.
The TypeScript files used for manual comparison with JSNice are put under the directory `data/comparison/`.
We also provide a Dockerfile to automatically download and install all the dependencies. Here are the steps to run the pre-trained LambdaNet model inside a Docker container:
- First, make sure you have installed Docker.
- Put the pre-trained model weights under `models/`.
- Under the project root, run `docker build -t lambdanet:v1 . && docker run --name lambdanet --memory 14g -t -i lambdanet:v1`. (Make sure the machine you are using has enough memory for the `docker run` command.)
- After the Docker container has successfully started, run `sbt runTrained`, and you should see LambdaNet output "libAccuracy" and "projectAccuracy" after a few minutes. LambdaNet also stores its predictions into an HTML file under `<test TS project>/predictions/` (`<test TS project>` currently defaults to `data/ts-algorithms`, but you can change this in `src/main/scala/lambdanet/RunTrainedModel.scala`.)
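Before building the image, it can help to sanity-check that the downloads above ended up where the steps expect them. A small preflight sketch (the `mkdir -p` only simulates the layout for this illustration; in a real checkout the directories come from the download steps):

```shell
# Simulate the expected layout for this illustration only.
mkdir -p models data

# Preflight check before `docker build`: model weights go under models/,
# and the batched mode expects the unzipped parsedRepos under data/.
for d in models data; do
  if [ -d "$d" ]; then
    echo "$d/ present"
  else
    echo "$d/ missing -- revisit the download steps above"
  fi
done
```

Running this from the project root before `docker build` catches a misplaced `models` or `data` directory early, instead of failing later inside the container.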