Complete Guide to Topic Modeling

What is Topic Modeling?

Topic modeling, in the context of Natural Language Processing, is often described as a method of uncovering hidden structure in a collection of texts. Although that is indeed true, it is also a pretty useless definition. Let’s define topic modeling in more practical terms.

Definitions:

  • C: the collection of documents, containing N texts.
  • V: the vocabulary, i.e. the set of unique words in the collection.
Dimensionality Reduction
Topic modeling is a form of dimensionality reduction. Rather than representing a text T in its feature space as {Word_i: count(Word_i, T) for Word_i in V}, we can represent the text in its topic space as {Topic_i: weight(Topic_i, T) for Topic_i in Topics}. Notice that we’re using Topics to represent the set of all topics.
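For intuition, here’s a tiny, made-up example of the two representations (the words, topics and numbers are purely illustrative, not from a real model):

    # Bag-of-words representation over the vocabulary V: one dimension per word
    text_as_words = {"market": 3, "economy": 2, "guitar": 0, "concert": 0}

    # The same text in topic space: one weight per topic, far fewer dimensions
    text_as_topics = {"Topic_0": 0.70, "Topic_1": 0.25, "Topic_2": 0.05}
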
Unsupervised Learning
Topic modeling can easily be compared to clustering. As in clustering, the number of topics is a hyperparameter, just like the number of clusters. By doing topic modeling, we build clusters of words rather than clusters of texts. A text is thus a mixture of all the topics, each having a certain weight.
A Form of Tagging
If document classification is assigning a single category to a text, topic modeling is assigning multiple tags to a text. A human expert can label the resulting topics with human-readable labels and use different heuristics to convert the weighted topics to a set of tags.

Why is Topic Modeling useful?

There are several scenarios when topic modeling can prove useful. Here are some of them:

  • Text classification – Topic modeling can improve classification by grouping similar words together in topics rather than using each word as a feature
  • Recommender Systems – Using a similarity measure, we can build recommender systems. If our system recommends articles to readers, it will recommend articles with a topic structure similar to the articles the user has already read.
  • Uncovering Themes in Texts – Useful for detecting trends in online publications for example

Topic Modeling Algorithms

There are several algorithms for doing topic modeling. The most popular ones include:

  • LDA – Latent Dirichlet Allocation – The one we’ll be focusing on in this tutorial. Its foundations are in Probabilistic Graphical Models
  • LSA or LSI – Latent Semantic Analysis or Latent Semantic Indexing – Uses Singular Value Decomposition (SVD) on the Document-Term Matrix. Based on Linear Algebra
  • NMF – Non-Negative Matrix Factorization – Based on Linear Algebra

Here are some things all these algorithms have in common:

  • All of them take the number of topics (n_topics) as a parameter. None of the algorithms can infer the number of topics in the document collection.
  • All of them take the Document-Word Matrix (or Document-Term Matrix) as input. DWM[i][j] = the number of occurrences of word_j in document_i
  • All of them output 2 matrices: WTM (Word Topic Matrix) and TDM (Topic Document Matrix). The matrices are significantly smaller, and the result of their multiplication should be as close as possible to the original DWM matrix, as sketched below.
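
To make the shapes concrete, here is a minimal, shape-only sketch of that factorization. The values are random placeholders, not a trained model, and the two factors correspond to the TDM and WTM above (up to transposition):

    import numpy as np

    n_documents, n_words, n_topics = 500, 10000, 10

    # Document-Word Matrix: DWM[i][j] = count of word_j in document_i
    DWM = np.random.randint(0, 5, size=(n_documents, n_words))

    # The two (much smaller) factors the algorithms learn
    doc_topic = np.random.rand(n_documents, n_topics)   # documents x topics
    topic_word = np.random.rand(n_topics, n_words)      # topics x words

    # Their product has the same shape as DWM and, after training, should approximate it
    assert (doc_topic @ topic_word).shape == DWM.shape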

The purpose of this guide is not to describe each algorithm in great detail, but rather to give a practical overview with concrete implementations in Python using Scikit-Learn and Gensim. We’ll go over each algorithm in more detail later in this tutorial. Next, we’re going to use Scikit-Learn and Gensim to perform topic modeling on a corpus.

Using Gensim for Topic Modeling

We’re going to study the gensim implementations first because they offer more functionality out of the box, and then we’ll replicate that functionality with sklearn. Let’s first prepare the dataset we’ll be working with:
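The exact corpus isn’t important here. As a stand-in, here’s a minimal sketch that loads 500 posts from scikit-learn’s 20 Newsgroups dataset (an assumption on my part, substitute your own documents) and builds the gensim dictionary and bag-of-words corpus:

    from sklearn.datasets import fetch_20newsgroups
    from gensim import corpora
    from gensim.utils import simple_preprocess

    # Stand-in corpus: 500 newsgroup posts (replace with your own documents)
    newsgroups = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'))
    texts = newsgroups.data[:500]

    # Tokenize and lowercase every document
    tokenized_texts = [simple_preprocess(text) for text in texts]

    # Map every token to an integer id, then build the bag-of-words corpus
    dictionary = corpora.Dictionary(tokenized_texts)
    corpus = [dictionary.doc2bow(tokens) for tokens in tokenized_texts]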

Gensim doesn’t have an implementation for NMF, so we’re only going to play with the LDA and LSI (Latent Semantic Indexing, AKA Latent Semantic Analysis) models:
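A minimal way to train the two models on the corpus built above (the choice of 10 topics and 10 passes is arbitrary):

    from gensim.models import LdaModel, LsiModel

    NUM_TOPICS = 10

    lda_model = LdaModel(corpus=corpus, id2word=dictionary, num_topics=NUM_TOPICS, passes=10)
    lsi_model = LsiModel(corpus=corpus, id2word=dictionary, num_topics=NUM_TOPICS)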

Let’s now display the topics the two models have inferred:
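One way to do that is with each model’s print_topics method, here showing the top 5 words per topic:

    for idx, topic in lda_model.print_topics(num_words=5):
        print("LDA topic #%d: %s" % (idx, topic))

    for idx, topic in lsi_model.print_topics(num_words=5):
        print("LSI topic #%d: %s" % (idx, topic))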

Let’s now put the models to work and transform unseen documents to their topic distribution:
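A sketch of how that looks, reusing the dictionary built earlier (the sample sentence is arbitrary):

    text = "Practical Bayesian methods for topic modeling and text mining."
    bow = dictionary.doc2bow(simple_preprocess(text))

    print(lda_model[bow])   # list of (topic_id, weight) pairs
    print(lsi_model[bow])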

The LDA result can be interpreted as a distribution over topics. Let’s take an example:
[(0, 0.020229582), (1, 0.48642197), (2, 0.020894188), (3, 0.020058075), (4, 0.022410348), (5, 0.025939714), (6, 0.20046122), (7, 0.13457063), (8, 0.048185956), (9, 0.02082831)]. This result suggests that topic 1 has the strongest representation in this text.

Gensim offers a simple way of performing similarity queries using topic models:
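For example, we can index every training document in LDA topic space with a MatrixSimilarity index and query it with the unseen document from above (a sketch, reusing the objects defined earlier):

    from gensim import similarities

    # Index every training document by its LDA topic distribution
    lda_index = similarities.MatrixSimilarity(lda_model[corpus], num_features=NUM_TOPICS)

    # Query with the unseen document and list the 5 most similar training documents
    sims = lda_index[lda_model[bow]]
    top_matches = sorted(enumerate(sims), key=lambda item: -item[1])[:5]
    print(top_matches)   # (document_index, similarity) pairs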

Using Scikit-Learn for Topic Modeling

Let’s now go through the same process with sklearn. This library offers an NMF implementation as well. The algorithms are more bare-bones than what we’ve seen with gensim, but on the plus side, they implement the fit/transform interface we’re used to:
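A sketch of the three models trained on the same 500 documents, using a CountVectorizer to build the Document-Word Matrix (the vectorizer settings are arbitrary):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import NMF, TruncatedSVD, LatentDirichletAllocation

    NUM_TOPICS = 10

    # Build the Document-Word Matrix
    vectorizer = CountVectorizer(min_df=5, max_df=0.9, stop_words='english', lowercase=True)
    data_vectorized = vectorizer.fit_transform(texts)

    # All three models expose the same fit/transform interface
    lda = LatentDirichletAllocation(n_components=NUM_TOPICS)
    lda_output = lda.fit_transform(data_vectorized)

    nmf = NMF(n_components=NUM_TOPICS)
    nmf_output = nmf.fit_transform(data_vectorized)

    svd = TruncatedSVD(n_components=NUM_TOPICS)
    svd_output = svd.fit_transform(data_vectorized)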

In order to inspect the inferred topics we need to implement a print function ourselves:
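Here’s one possible helper. It reads the per-topic word weights from the fitted model’s components_ attribute and assumes scikit-learn 1.0+ (where the vectorizer method is get_feature_names_out):

    def print_topics(model, vectorizer, top_n=10):
        """Print the top_n highest-weighted words of every topic in a fitted sklearn model."""
        words = vectorizer.get_feature_names_out()
        for idx, topic in enumerate(model.components_):
            top_words = [words[i] for i in topic.argsort()[-top_n:][::-1]]
            print("Topic #%d: %s" % (idx, ", ".join(top_words)))

    print_topics(lda, vectorizer)
    print_topics(nmf, vectorizer)
    print_topics(svd, vectorizer)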

Transforming an unseen document goes like this:
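A sketch, reusing the fitted vectorizer and models (the sample sentence is arbitrary):

    text = "Practical Bayesian methods for topic modeling and text mining."
    text_vector = vectorizer.transform([text])

    print(lda.transform(text_vector))   # one row of topic weights per input document
    print(nmf.transform(text_vector))
    print(svd.transform(text_vector))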

Here’s how to implement the similarity functionality we’ve seen in the gensim section:
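One simple option is cosine similarity between topic vectors. This sketch compares the new document against every training document in LDA topic space:

    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    # Similarity of the new document to each of the 500 training documents
    similarities_to_corpus = cosine_similarity(lda.transform(text_vector), lda_output)[0]

    # Indices and scores of the 5 closest documents
    closest = np.argsort(similarities_to_corpus)[::-1][:5]
    print(list(zip(closest, similarities_to_corpus[closest])))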

Plotting words and documents in 2D with SVD

We can use SVD with 2 components (topics) to display words and documents in 2D. The process is very similar to the one above. Let’s start with displaying documents, since it’s a bit more straightforward.

In case you are running this in a Jupyter Notebook, run the following lines to initialize bokeh:
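Something like this (assuming bokeh is installed):

    from bokeh.io import output_notebook
    output_notebook()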

Let’s plot documents in 2D:
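A minimal sketch using bokeh (the plot styling is arbitrary). Note that we re-fit the SVD with only 2 components here, so each document becomes a 2D point:

    from sklearn.decomposition import TruncatedSVD
    from bokeh.plotting import figure, show

    # Re-fit the SVD with just 2 components for plotting
    svd = TruncatedSVD(n_components=2)
    documents_2d = svd.fit_transform(data_vectorized)

    plot = figure(title="Documents in 2D")
    plot.scatter(documents_2d[:, 0], documents_2d[:, 1], size=5)
    show(plot)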

Plotting documents in 2D

You can try going through the documents to see if indeed closer documents on the plot are more similar. To display words in 2D we just need to transpose the vectorized data: words_2d = svd.fit_transform(data_vectorized.T).

Displaying words in 2D

To get a really good word representation we need a significantly larger corpus. Even with this corpus, if we zoom around a bit, we can find some meaningful representations:

Meaningful word cluster

More about Latent Dirichlet Allocation

LDA is the most popular method for doing topic modeling in real-world applications. That is because it provides accurate results, can be trained online (no need to retrain from scratch every time new data arrives) and can be run on multiple cores. Let’s repeat the process we did in the previous sections with sklearn and LatentDirichletAllocation:
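A sketch with scikit-learn’s LatentDirichletAllocation, using online learning and all available cores to illustrate the two qualities mentioned above (the sample sentence is arbitrary):

    from sklearn.decomposition import LatentDirichletAllocation

    lda = LatentDirichletAllocation(n_components=NUM_TOPICS, learning_method='online', n_jobs=-1)
    lda.fit(data_vectorized)

    # Topic distribution of a single unseen document; the weights sum to 1
    text_vector = vectorizer.transform(["Practical Bayesian methods for topic modeling and text mining."])
    doc_topics = lda.transform(text_vector)[0]
    print(doc_topics, doc_topics.sum())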

Notice how the weights corresponding to each component (topic) add up to 1. That’s not a coincidence. Indeed, LDA considers documents as being generated by a mixture of the topics. The purpose of LDA is to compute how much of the document was generated by which topic. In this example, more than half of the document was generated by the second topic.

LDA is an iterative algorithm. Here are the two main steps:

  • In the initialization stage, each word is assigned to a random topic.
  • The algorithm then iterates: it goes through each word and reassigns it to a topic, taking into consideration:
    • the probability that the word belongs to each topic
    • the probability that the document was generated by each topic

Because LDA produces proper probability distributions over topics and over words, its results can be visualized easily. We’re going to use a specialized tool called pyLDAvis:
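A sketch with the fitted sklearn LDA model. Note that in pyLDAvis 3.4+ the scikit-learn helper lives in pyLDAvis.lda_model, while older versions call it pyLDAvis.sklearn:

    import pyLDAvis
    import pyLDAvis.lda_model   # in older pyLDAvis versions: import pyLDAvis.sklearn

    pyLDAvis.enable_notebook()
    panel = pyLDAvis.lda_model.prepare(lda, data_vectorized, vectorizer)
    panel   # renders the interactive visualization in a Jupyter notebook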

Topic Details

Let’s interpret the topic visualization. Notice how topics are shown on the left while words are on the right. Here are the main things you should consider:

  1. Larger topics are more frequent in the corpus.
  2. Topics closer together are more similar, topics further apart are less similar.
  3. When you select a topic, you can see the most representative words for the selected topic. This ranking can be a combination of how frequent and how discriminative the word is. You can adjust the weight of each property using the slider.
  4. Hovering over a word will adjust the topic sizes according to how representative the word is for the topic.

As we mentioned before, LDA can be used for automatic tagging. We can go over each topic (pyLDAvis helps a lot here) and attach a label to it. In the screenshot above, you can see that the topic is mainly about Education. In the next example, we can see that this topic is mostly about Music. You can try doing this for all the topics. Unfortunately, not all topics are as clearly defined as the ones we looked at. Results can be improved by experimenting with different num_topics values. In this case, our corpus is not really that large; it only has 500 instances. A larger corpus will produce more clearly defined topics.

Visualising LDA topics