abtextsumm's Introduction

AbTextSumm

Abstractive Summarization: Code of the ILP-based algorithm as described in the IJCAI paper

Please note that this code only tackles the summarization component, not the clustering part. The code takes a list of sentences or a paragraph and produces an extractive or abstractive summary, controlled by the "mode" parameter.

For the language model (required only for abstractive summarization): needs kenlm (https://kheafield.com/code/kenlm/; see that page for installation instructions). Use any available ARPA-format language model and convert it to kenlm's binary format. kenlm is very fast.
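The conversion step above uses kenlm's command-line tools. A minimal sketch, assuming kenlm has been built in `bin/` and that a tokenized text corpus is available (the corpus and model filenames here are examples, not files shipped with this repository):

```shell
# Estimate a 3-gram ARPA model from a tokenized corpus,
# then convert it to kenlm's fast binary format:
bin/lmplz -o 3 < corpus.txt > lm-3g.arpa
bin/build_binary lm-3g.arpa lm-3g.klm
```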

Several other packages are required: PuLP (for optimization), sklearn, nltk, cpattern, and igraph. The easiest option is to use the Anaconda distribution. All of the packages mentioned above can be installed using pip. To install the dependencies, run:

pip install -r requirements.txt

in the root folder of the project.

A major part of the word graph generation code has been taken from https://github.com/boudinfl/takahe.

The main program is in txtsumm/Example.py. Given a passage, it can generate a summary using the following code:

  list_Sentences=segmentize(passage)
  generateSummaries(list_Sentences, mode="Extractive")
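For illustration, a naive stand-in for the sentence segmentation step can be sketched in pure Python. This is only an approximation for readers without the repository set up; the actual `segmentize()` in txtsumm/Example.py may behave differently (e.g. it may use nltk):

```python
import re

def segmentize_sketch(passage):
    # Naive splitter: break on whitespace that follows
    # sentence-ending punctuation. Illustration only.
    return [s.strip()
            for s in re.split(r'(?<=[.!?])\s+', passage.strip())
            if s.strip()]
```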

Changing mode="Extractive" to mode="Abstractive" will run abstractive summarization with TextRank as the default ranking method. However, this requires the language model described above. By default, the code runs abstractive summarization. You can also use the length parameter (in words) to control the length of the output summary. For example:

generateSummaries(list_Sentences, mode="Extractive", length=50)
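As context for the ranking step, here is a minimal, self-contained sketch of TextRank-style sentence ranking in pure Python. It illustrates the general technique only; it is not the repository's implementation, which works over a word graph with ILP-based multi-sentence compression:

```python
import math

def similarity(s1, s2):
    # Word-overlap similarity, length-normalized as in the TextRank paper.
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    overlap = len(w1 & w2)
    if overlap == 0:
        return 0.0
    return overlap / (math.log(len(w1) + 1) + math.log(len(w2) + 1))

def textrank(sentences, d=0.85, iters=30):
    # Power-iteration over the sentence-similarity graph.
    n = len(sentences)
    sim = [[similarity(a, b) if i != j else 0.0
            for j, b in enumerate(sentences)]
           for i, a in enumerate(sentences)]
    scores = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                denom = sum(sim[j])
                if sim[j][i] and denom:
                    rank += sim[j][i] * scores[j] / denom
            new.append((1 - d) + d * rank)
        scores = new
    return scores
```

Sentences with high-scoring neighbors accumulate score; the top-ranked sentences (or, in the abstractive mode, the top-ranked compressed candidates) form the summary.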

If you use the code here, please cite this paper:

Siddhartha Banerjee, Prasenjit Mitra, and Kazunari Sugiyama. "Multi-Document Abstractive Summarization Using ILP based Multi-Sentence Compression." Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015), Buenos Aires, Argentina. 2015.

abtextsumm's People

Contributors

siddbanpsu, stevenlol


abtextsumm's Issues

Remove .pyc

These files should be added to your .gitignore

Docstrings missing in WGGraph.py

Docstrings are missing from the functions in WGGraph.py. In particular, the code threw an error when running on multi_news after processing a few thousand documents. It has something to do with the function getredundantComponents, but since there is no docstring, I don't immediately know what this function is meant to do. There is also a parameter window_size = 4: what does it do, and what happens if I change it? Sorry to be difficult, but I really want to get your code working nicely.

Not "production" ready, plus a bug.

Thanks for your code. I think it's a good start, but a lot of tidying up is needed to make this "production" ready. I seem to be able to cut out a huge amount of code without affecting the example, and I've seen #TODOs that aren't clear (e.g. FIXME len(s) > 1: SUCH A SHAME!!!) as well as other comments (e.g. #NOT USING THIS NOW: THIS is for IGRAPH). There is also a bug, which is what this issue is really about: you don't preprocess your stop words in the same manner as your documents, which throws a warning when running the code. I fixed it by adding a preprocessing step for the stop words before passing them to StemmedTfidfVectorizer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Run the stop words through the same preprocessor/tokenizer
# that the vectorizer applies to the documents.
tf = TfidfVectorizer()
preprocess = tf.build_preprocessor()
tokenize = tf.build_tokenizer()

preprocessed_stop_words = []
for w in stopwords:
    preprocessed_stop_words.extend(tokenize(preprocess(w)))

bow_matrix = StemmedTfidfVectorizer(
    stop_words=preprocessed_stop_words).fit_transform(docs)
```

As this code looks almost ready, it would be a shame not to polish it up. Also, why not include the sentence-clustering part, since it's a major aspect of your paper? It would be great if it were possible to reproduce the paper's results with ease, or to easily substitute a different dataset, e.g. multi_news (which is what I'm working on).

lm-3g.klm is missing

Please note that the file lm-3g.klm is missing; it is required to use the "Abstractive" mode.

ROUGE SU4

Please provide a link to a tool for calculating the ROUGE-SU4 score.
