topsbm / topsbm

Scikit-learn compatible Topic Modelling with Hierarchical Statistical Block Models (Gerlach, Peixoto and Altmann, 2018)

Home Page: http://topsbm.readthedocs.io

License: Other

Batchfile 14.16% Shell 12.52% Python 73.32%

topsbm's People

Contributors

jnothman, vighneshbirodkar, vijayr1912, mechcoder, tomdlt, amueller, bryandeng, fabianp, arokem, kjacks21


topsbm's Issues

Topics of a single document.

Can I also run topSBM on a single document?

I am not interested in how topics are distributed over different documents; I simply want to extract the main topics of each document.

I know that pd.DataFrame(model.groups_[1]['p_tw_d'], columns=titles) gives a matrix of how strongly each topic corresponds to each document, but these topics are based on words from all the documents....

Any suggestions?
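hSBM infers its topics from the whole corpus, so fitting on a single document isn't meaningful; what you can do is rank the corpus-level topics for one document. A minimal sketch using a p_tw_d-style (n_topics, n_docs) matrix as mentioned above — the numbers and titles here are toy stand-ins, not output of the model:

```python
import numpy as np
import pandas as pd

# Toy stand-in for model.groups_[1]['p_tw_d']: rows are topics, columns are
# documents, entries are topic probabilities per document.
p_tw_d = np.array([[0.7, 0.1],
                   [0.2, 0.8],
                   [0.1, 0.1]])
titles = ["doc_a", "doc_b"]

df = pd.DataFrame(p_tw_d, columns=titles)
# Main topics of one document: the rows with the largest probability.
top_topics = df["doc_a"].nlargest(2).index.tolist()
```

This extracts the per-document view without refitting; the topics themselves remain corpus-level by construction.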

Evaluation metrics?

Is there any way to measure quantitatively how well topsbm is doing on my data, other than eyeballing the topics returned?
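Two candidate answers, offered tentatively: hSBM itself is fit by minimising description length, so graph_tool's state.entropy() (the description length of the fitted state) can compare fits of the same data; and topic coherence such as NPMI can be computed directly from the document-term matrix. A sketch of per-topic NPMI coherence — the function name and the binary document-term input convention are assumptions, not part of topsbm:

```python
import numpy as np

def npmi_coherence(top_word_ids, X):
    """Average NPMI over pairs of a topic's top words (sketch).

    top_word_ids : indices of the topic's top words
    X : (n_docs, n_words) binary document-term matrix (0/1 ints)
    """
    scores = []
    for i, w1 in enumerate(top_word_ids):
        for w2 in top_word_ids[i + 1:]:
            p1 = X[:, w1].mean()          # P(w1 in doc)
            p2 = X[:, w2].mean()          # P(w2 in doc)
            p12 = (X[:, w1] & X[:, w2]).mean()  # P(both in doc)
            if p12 == 0:
                scores.append(-1.0)       # never co-occur: minimum NPMI
                continue
            pmi = np.log(p12 / (p1 * p2))
            scores.append(pmi / -np.log(p12))  # normalise to [-1, 1]
    return float(np.mean(scores))
```

Higher average coherence across topics suggests more interpretable topics; it is a proxy, not a fit statistic.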

Rename transformer and the module

hSBMTransformer is:

  • unconventional in its use of an initial lowercase letter; Python class names should use CapWords
  • misleadingly scoped: this isn't just hSBM. hSBM is an algorithm over graphs in general.

Candidate names:

  • HSBMTopicModel
  • GerlachAltmannTransformer
  • HSBMDecomposition
  • GraphTopicModel
  • BipartiteSBMTopicModel
  • BipartiteSBMDecomposition

We need to ask the client about naming.

transform to different levels of hierarchy

There are different groups (i.e. sets of topics) at different levels of the hierarchy. All of these should be available as transform targets.

By default we might transform redundantly to all levels of the hierarchy, or only to the finest level, and provide an option to change this (with a memoised model?).
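One way this could be exposed, sketched under assumed names (nothing here is the settled API): memoise a doc-topic matrix per hierarchy level after fitting, and let a level parameter select among them:

```python
import numpy as np

def transform_at_level(p_td_levels, level=0):
    """Return the doc-topic matrix for one hierarchy level (sketch).

    p_td_levels : dict mapping level -> array of shape
        (n_samples, n_groups_at_level), memoised at fit time, e.g.
        derived from model.groups_ (names here are hypothetical).
    level : which level of the hierarchy to report; 0 = finest.
    """
    if level not in p_td_levels:
        raise ValueError(f"no groups inferred at level {level}")
    return np.asarray(p_td_levels[level])
```

The coarser the level, the fewer columns the returned matrix has, which is why transform targets cannot share a single fixed shape across levels.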

Fix Licence

The licence file currently has an incorrect copyright attribution and names the wrong licence; it must be GPL v3. Each file with substantive work should have a copyright notice plus:

    This file is part of TopSBM.

    TopSBM is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    TopSBM is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with TopSBM.  If not, see <https://www.gnu.org/licenses/>.

Starting with `fit_transform`

I think you should initially work on defining fit_transform. I would give it the following docstring:

def fit_transform(self, X, y=None):
    """Fit the hSBM topic model

    Constructs a graph representation of X, infers clustering, and reports
    the cluster probability for each sample in X.

    Parameters
    ----------
    X : ndarray or sparse matrix of shape (n_samples, n_features)
        Word frequencies for each document, represented as non-negative
        integers.
    y : ignored

    Returns
    -------
    Xt : ndarray of shape (n_samples, n_components)
    """

This would do basically everything for the simple case of just getting a topic decomposition of some X. It would call private methods as appropriate to modularise the work. The main idea would be to encapsulate the logic for transformation as a function of numeric array X and any hyper-parameters provided in __init__.

An initial test would be:

import numpy as np
import pytest
from sklearn.datasets import make_multilabel_classification
from hSBM import hSBMTransformer

@pytest.mark.parametrize('sparse', [False, True])
@pytest.mark.parametrize('n_components', [5, 10])
def test_basic_fit_transform(sparse, n_components):
    X, y = make_multilabel_classification(random_state=0, n_features=150)
    est = hSBMTransformer(n_components=n_components)
    Xt = est.fit_transform(X)
    assert Xt.shape == (X.shape[0], n_components)

def test_continuous_unacceptable():
    # hSBMTransformer should refuse to transform anything but integer data
    est = hSBMTransformer()
    with pytest.raises(ValueError):
        est.fit_transform(np.linspace(0, 1, 1000).reshape(20, 5))

Once this is done, we can further work out what we need to do to make the estimator more useful.

Make sure random state can be set

We need a random_state parameter that controls randomisation via a numpy RandomState. This requires investigating how graph_tool performs random number generation and whether it can be controlled.

Consider a mode with non-hierarchical inference

Quoting Eduardo

Control over the hierarchy:

The non-hierarchical model we considered in our paper is obtained by
calling gt.minimize_blockmodel_dl instead of
gt.minimize_nested_blockmodel_dl.

Instead of a set of solutions at different levels of the hierarchy, we
only obtain one solution. We would have to adapt the 'get_groups'
function in the following way:
- now, the solution of the hierarchical model (state) is projected onto
a given level in the hierarchy (state_l)
- if we use the non-hierarchical model, we would skip the projection part
We could also set the upper and lower limits for the total number of groups.
The advantage of this would be more control over the number of
groups, and not having to decide on a level in the hierarchy.
However, the main advantage of the hierarchical model is that it
provides a better prior, increasing the resolution of small clusters
(keyword: 'resolution limit' in community detection).
