

Use advanced NLP and tone analysis to extract meaningful insights

This code pattern is the third part of the series Extracting Textual Insights from Videos with IBM Watson. Please complete the Extract audio from video and Build custom Speech to Text model with speaker diarization capabilities code patterns before continuing, since all three code patterns in the series are linked.

Natural Language Understanding provides a set of text analytics features that can be used to extract meaning from unstructured data such as a text file. Tone Analyzer, on the other hand, understands emotions and communication styles in text. We combine the capabilities of both services to extract meaningful insights, in the form of an NLU Analysis Report, from a natural language transcript generated by transcribing the IBM earnings call Q1 2019 meeting video recording. The report consists of a sentiment analysis of the meeting, the top positive sentences spoken in the meeting, and word clouds based on keywords, built with a Python Flask runtime.

In this code pattern, given a text file, we learn how to extract keywords, emotions, sentiments, positive sentences, and much more using Watson Natural Language Understanding and Tone Analyzer.

When you have completed this code pattern, you will understand how to:

  • Use advanced NLP to analyze text and extract metadata from content such as concepts, entities, keywords, categories, sentiment, and emotion.
  • Leverage Tone Analyzer's cognitive linguistic analysis to identify a variety of tones at both the sentence and document level.
  • Connect applications directly to Cloud Object Storage.

architecture

Flow

  1. The transcribed text from the previous code pattern of the series is retrieved from Cloud Object Storage (see the sketch after this list).

  2. Watson Natural Language Understanding and Watson Tone Analyzer are used to extract insights from the text.

  3. The responses from Natural Language Understanding and Watson Tone Analyzer are analyzed by the application and a report is generated.

  4. The user can download the report, which contains the textual insights.
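
To make the flow concrete, here is a minimal Python sketch of step 1, retrieving the transcript from Cloud Object Storage with the ibm_boto3 client. The bucket name, endpoint URL, and the field names read from credentials.json are illustrative assumptions; the actual values come from the Cloud Object Storage instance you set up in the earlier code patterns.

import json

import ibm_boto3
from ibm_botocore.client import Config

# Cloud Object Storage credentials saved in credentials.json (first code pattern of the series).
with open("credentials.json") as f:
    cos_creds = json.load(f)

# Step 1: retrieve the transcribed text from Cloud Object Storage.
cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id=cos_creds["apikey"],
    ibm_service_instance_id=cos_creds["resource_instance_id"],
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",  # region-specific endpoint
)
obj = cos.get_object(Bucket="my-transcripts-bucket", Key="earnings-call-test-data.txt")
transcript = obj["Body"].read().decode("utf-8")

# Steps 2-4 (NLU, Tone Analyzer, and report generation) are sketched in later sections.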

Watch the Video

video

Prerequisites

  1. IBM Cloud Account

  2. Docker

  3. Python

Steps

  1. Clone the repo

  2. Create Watson Services

  3. Add the Credentials to the Application

  4. Deploy the Application

  5. Run the Application

1. Clone the repo

Clone the use-advanced-nlp-and-tone-analyser-to-analyse-speaker-insights repo locally. In a terminal, run:

$ git clone https://github.com/IBM/use-advanced-nlp-and-tone-analyser-to-analyse-speaker-insights

We will be using the following datasets from Cloud Object Storage:

Note: These files were uploaded to Cloud Object Storage in the previous code pattern of the series.

  1. earnings-call-test-data.txt - to extract Category, Concept Tags, Entity, Keywords, Sentiments, Emotions, Positive Sentences, and Word Clouds.

  2. earnings-call-Q-and-A.txt - to extract Category, Concept Tags, Entity, Keywords, Sentiments, Emotions, Positive Sentences, and Word Clouds.

2. Create Watson Services

2.1 Create Natural Language Understanding Service

  • On IBM Cloud, create a Natural Language Understanding service. Under Select a pricing plan, select Lite and click Create as shown.

nlu-service

  • In the Natural Language Understanding dashboard, click on Service credentials.

  • Click on New credential and add a service credential as shown. Once the credential is created, copy it by clicking the two overlapping squares icon and save it in a text file for use in later steps of this code pattern.

2.2 Create Tone Analyzer Service

  • On IBM Cloud, create a Tone Analyzer service. Under Select a pricing plan, select Lite and click Create as shown.

tone-service

  • In the Tone Analyzer dashboard, click on Service credentials.

  • Click on New credential and add a service credential as shown. Once the credential is created, copy it by clicking the two overlapping squares icon and save it in a text file for use in later steps of this code pattern.

3. Add the Credentials to the Application

  • In the repo you cloned for the first code pattern of the series, you will have updated the credentials.json file with your Cloud Object Storage credentials. Copy that file into the parent folder of the repo that you cloned in step 1.

  • In the repo parent folder, open the naturallanguageunderstanding.json file, paste the credentials copied in step 2.1, and save the file.

  • Similarly, in the repo parent folder, open the toneanalyzer.json file, paste the credentials copied in step 2.2, and save the file. A sketch of how these files could be loaded follows this list.
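
The snippet below is a minimal sketch of how the application could read these credential files and create the two Watson clients. It assumes the standard apikey and url fields found in IBM Cloud service credentials and uses illustrative API version dates; the actual app may read the files differently.

import json

from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import NaturalLanguageUnderstandingV1, ToneAnalyzerV3

# Load the service credentials pasted in steps 2.1 and 2.2.
with open("naturallanguageunderstanding.json") as f:
    nlu_creds = json.load(f)
with open("toneanalyzer.json") as f:
    tone_creds = json.load(f)

# Natural Language Understanding client (the version date is illustrative).
nlu = NaturalLanguageUnderstandingV1(
    version="2021-08-01",
    authenticator=IAMAuthenticator(nlu_creds["apikey"]),
)
nlu.set_service_url(nlu_creds["url"])

# Tone Analyzer client.
tone_analyzer = ToneAnalyzerV3(
    version="2017-09-21",
    authenticator=IAMAuthenticator(tone_creds["apikey"]),
)
tone_analyzer.set_service_url(tone_creds["url"])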

4. Deploy the Application

With Docker installed

  • Build the Docker image as follows:

$ docker image build -t use-advanced-nlp-to-extract-insights .

  • Once the image is built, run it as follows:

$ docker run -p 8080:8080 use-advanced-nlp-to-extract-insights

Without Docker

  • Install the Python libraries as follows:

    • Change directory to the repo parent folder:

    $ cd use-advanced-nlp-and-tone-analyser-to-analyse-speaker-insights/

    • Use pip to install the libraries:

    $ pip install -r requirements.txt

  • Finally, run the application as follows:

$ python app.py

5. Run the Application

sample_output

We extract Category, Concept Tags, Entity, Keywords, Sentiments, Emotions, Top 5 Positive Sentences, and Word Clouds from the text in just 2 steps:

  1. Click on earnings-call-test-data.txt as the text file from which to extract insights.

Note: These files are present in Cloud Object Storage and were uploaded to it in the previous code pattern of the series.

  2. Select the entities that you want to extract from the text and click the Analyze button as shown. The selected entities will then be extracted, producing an analysis report.

Note: It should take about 2 minutes to analyze the text, so please be patient.

step1

  • More about the entities (a sketch of the corresponding Natural Language Understanding request follows this list):
    • Category: Categorize your content using a five-level classification hierarchy. View the complete list of categories here.
    • Concept Tags: Identify high-level concepts that aren't necessarily directly referenced in the text.
    • Entity: Find people, places, events, and other types of entities mentioned in your content. View the complete list of entity types and subtypes here.
    • Keywords: Search your content for relevant keywords.
    • Sentiments: Analyze the sentiment toward specific target phrases and the sentiment of the document as a whole.
    • Emotions: Analyze the emotion conveyed by specific target phrases or by the document as a whole.
    • Positive Sentences: The Watson Tone Analyzer service uses linguistic analysis to detect emotional and language tones in written text.
  • Learn more features of:
    • Watson Natural Language Understanding service. Learn more.
    • Watson Tone Analyzer service. Learn more.
  • Once the NLU Analysis Report is generated, you can review it. The report consists of:

    • Features extracted by Watson Natural Language Understanding

    • Features extracted by Watson Tone Analyzer

    • Other features
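
As a rough illustration of how these features map onto the Natural Language Understanding API, the sketch below requests all of them for the transcript in a single analyze call. The feature limits are illustrative, and the nlu client and transcript variable are the ones assumed in the earlier sketches; the Flask app's actual request may differ.

from ibm_watson.natural_language_understanding_v1 import (
    CategoriesOptions, ConceptsOptions, EmotionOptions, EntitiesOptions,
    Features, KeywordsOptions, SentimentOptions,
)

# Request the features selected in the UI; the limits below are illustrative.
nlu_response = nlu.analyze(
    text=transcript,
    features=Features(
        categories=CategoriesOptions(limit=5),               # five-level category hierarchy
        concepts=ConceptsOptions(limit=3),                    # high-level concept tags
        entities=EntitiesOptions(limit=10, sentiment=True),   # people, places, companies, ...
        keywords=KeywordsOptions(limit=10, sentiment=True, emotion=True),
        sentiment=SentimentOptions(),                         # document-level sentiment
        emotion=EmotionOptions(),                             # document-level emotion
    ),
).get_result()

# For example, the top category and its confidence score:
top_category = nlu_response["categories"][0]
print(top_category["label"], top_category["score"])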

  1. Category: As we have used the IBM earnings call Q1 2019 meeting recording dataset, you can see that the category was extracted as finance, specifically financial news.

Note: You can see the confidence score of the model in the green bubble tags.

  2. Entity: As you can see, the entity is Company, specifically IBM, indicating that most of the emphasis in the video recording was on the company IBM.

  3. Concept Tags: The top 3 concept tags extracted from the video are Cloud computing, Revenue, and Income statement, indicating that the speaker spoke about these topics most often.

  4. Keywords, Sentiments and Emotions: The top keywords are extracted along with their sentiments and emotions, giving a sentiment analysis of the entire meeting.

  5. Top Positive Sentences: Based on emotional tone and language tone, the positive sentences spoken in the video are extracted and limited to the top 5 positive sentences.

  6. Word Clouds: Based on the keywords, nouns & adjectives as well as verbs are analyzed, and the result is turned into word clouds (one way to produce these is sketched after this list).
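
The report itself is generated by the Flask application; as a rough sketch of how the last two items could be produced, the code below scores each sentence with Tone Analyzer's joy tone to pick the top 5 positive sentences, and splits the transcript into nouns & adjectives versus verbs with NLTK part-of-speech tags to build two word clouds. The tone_analyzer client and transcript variable are the ones assumed earlier; the joy-based heuristic and the wordcloud/nltk usage are assumptions, and the actual application may implement both steps differently.

from nltk import download, pos_tag, word_tokenize
from wordcloud import WordCloud

# Score every sentence by its "joy" tone and keep the five highest-scoring ones.
tone_response = tone_analyzer.tone(
    tone_input={"text": transcript},
    content_type="application/json",
).get_result()
scored = [
    (max((t["score"] for t in s["tones"] if t["tone_id"] == "joy"), default=0.0), s["text"])
    for s in tone_response.get("sentences_tone", [])
]
top_positive_sentences = [text for score, text in sorted(scored, reverse=True)[:5] if score > 0]

# Split the transcript into nouns & adjectives vs. verbs and render two word clouds.
download("punkt")
download("averaged_perceptron_tagger")
tagged = pos_tag(word_tokenize(transcript))
nouns_and_adjectives = " ".join(word for word, tag in tagged if tag.startswith(("NN", "JJ")))
verbs = " ".join(word for word, tag in tagged if tag.startswith("VB"))
WordCloud(width=800, height=400, background_color="white").generate(nouns_and_adjectives).to_file("nouns_adjectives_cloud.png")
WordCloud(width=800, height=400, background_color="white").generate(verbs).to_file("verbs_cloud.png")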

  • The Report can be printed by clicking on the print button as shown.

step2

Summary

We have seen how to extract meaningful insights from the transcribed text files. In the next code pattern of the series, we will learn how these three code patterns can be plugged together, so that uploading any video will extract the audio, transcribe it, and extract meaningful insights, all in one application.

License

This code pattern is licensed under the Apache License, Version 2. Separate third-party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 and the Apache License, Version 2.

Apache License FAQ

People

Contributors

imgbotapp, manojjahgirdar, stevemar


Issues

Docker run problem: SyntaxError: invalid syntax

convert-video-to-audio git:(master) ✗ docker --version
Docker version 20.10.7, build f0df350

docker image build -t convert-video-to-audio .
(build ran successfully)

➜ convert-video-to-audio git:(master) ✗ docker run -p 8080:8080 convert-video-to-audio
Traceback (most recent call last):
  File "app.py", line 1, in <module>
    from flask import Flask, render_template, request, redirect, jsonify
  File "/usr/local/lib/python3.5/dist-packages/flask/__init__.py", line 14, in <module>
    from jinja2 import escape
  File "/usr/local/lib/python3.5/dist-packages/jinja2/__init__.py", line 5, in <module>
    from .bccache import BytecodeCache as BytecodeCache
  File "/usr/local/lib/python3.5/dist-packages/jinja2/bccache.py", line 61
    self.code: t.Optional[CodeType] = None
             ^
SyntaxError: invalid syntax
