amazon-archives / amazon-transcribe-websocket-static

License: Apache License 2.0

amazon-transcribe-websocket-static's Introduction

Amazon Transcribe Websocket Static

Try it out

A static site demonstrating real-time audio transcription via Amazon Transcribe over a WebSocket.

This demo app uses browser microphone input and client-side JavaScript to demonstrate the real-time streaming audio transcription capability of Amazon Transcribe using WebSockets.

Check out the Amazon Transcribe WebSocket docs.

Building and Deploying


Even though this is a static site consisting only of HTML, CSS, and client-side JavaScript, a build step is required: some of the modules used were originally written for server-side code and do not run natively in the browser.

We use browserify to enable browser support for the JavaScript modules we require().

  1. Clone the repo.
  2. Run npm install.
  3. Run npm run-script build to generate dist/main.js.

Once you've bundled the JavaScript, all you need is a web server. For example, from your project directory:

npm install --global local-web-server
ws

Credits

This project is based on code written by Karan Grover from the Amazon Transcribe team, who did the hard work (audio encoding, event stream marshalling).

License

This library is licensed under the Apache 2.0 License.

amazon-transcribe-websocket-static's People

Contributors

brandonmwest, ceuk, dependabot[bot], ianjennings, jamesiri, yehudacohen


amazon-transcribe-websocket-static's Issues

Opening a WebSocket from the backend with Node.js

I want to open the socket connection from the backend and pass the binary audio to Transcribe.
When the connection is opened from the frontend, the pre-signed URL exposes the AWS secret key in the network tab, so for security reasons I want to use the Transcribe service from the backend.

I am able to open the connection and, using a file read stream, I am sending the audio to Transcribe, but the transcribe results array comes back empty. I suspect that because I am reading the audio from the file system, it is not in the required audio format, hence the empty result.
Steps I am taking on the backend:

  1. Record audio, create a WAV file, and send it to my Express server as form data.
  2. Write it to an audio file and open a read stream that reads chunk by chunk; when chunk data is available, encode it and send it to the socket.

Please let me know how to send the proper audio data, and in what format, if I want to do this from the backend.

Sample Code

const fs = require('fs');
const crypto            = require('crypto'); // to sign our pre-signed URL
const v4                = require('./aws-signature-v4'); // to generate our pre-signed URL
const marshaller        = require("@aws-sdk/eventstream-marshaller"); // for converting binary event stream messages to and from JSON
const util_utf8_node    = require("@aws-sdk/util-utf8-node");
const audioUtils        = require('./audioUtils');  // for encoding audio data as PCM
var WebSocket = require('ws') //for opening a web socket
// our converter between binary event stream messages and JSON
const eventStreamMarshaller = new marshaller.EventStreamMarshaller(util_utf8_node.toUtf8, util_utf8_node.fromUtf8);

// our global variables for managing state
let languageCode;
let region;
let sampleRate=44100;
let transcription = "";
let socket;
let micStream;
let socketError = false;
let transcribeException = false;

function getAudioEventMessage(buffer) {
    // wrap the audio data in a JSON envelope
    return {
        headers: {
            ':message-type': {
                type: 'string',
                value: 'event'
            },
            ':event-type': {
                type: 'string',
                value: 'AudioEvent'
            }
        },
        body: buffer
    };
}
function convertAudioToBinaryMessage(raw) {
    if (raw == null)
        return;

    // downsample and convert the raw audio bytes to PCM
    let downsampledBuffer = audioUtils.downsampleBuffer(raw, sampleRate);
    let pcmEncodedBuffer = audioUtils.pcmEncode(downsampledBuffer);

    // add the right JSON headers and structure to the message
    let audioEventMessage = getAudioEventMessage(Buffer.from(pcmEncodedBuffer));

    // convert the JSON object + headers into a binary event stream message
    let binary = eventStreamMarshaller.marshall(audioEventMessage);

    return binary;
}
function createPresignedUrl() {
    let endpoint = "transcribestreaming." + 'us-east-1' + ".amazonaws.com:8443";

    // get a preauthenticated URL that we can use to establish our WebSocket
    return v4.createPresignedURL(
        'GET',
        endpoint,
        '/stream-transcription-websocket',
        'transcribe',
        crypto.createHash('sha256').update('', 'utf8').digest('hex'), {
            'key': <AWS_KEY>,
            'secret': <SECRET_KEY>,
            'protocol': 'wss',
            'expires': 15,
            'region':<REGION>,
            'query': "language-code=" + 'en-US' + "&media-encoding=pcm&sample-rate=" + 16000
        }
    );
}
function showError(message) {
   logger.error("Error: ",message)
}

module.exports =function (router) {
  /**
   * @description Endpoint to return speech to text
   * @returns JSON with text and transcript
   */

router.post('/stt', async function (req, res) {
    try {
      // The name of the input field (i.e. "sampleFile") is used to retrieve the uploaded file
      let rawAudioChunk = req.files.file;
       rawAudioChunk.mv('./uploads/' + rawAudioChunk.name);
      const eventStreamMarshaller = new marshaller.EventStreamMarshaller(util_utf8_node.toUtf8, util_utf8_node.fromUtf8);

    // Pre-signed URLs are a way to authenticate a request (or WebSocket connection, in this case)
    // via Query Parameters. Learn more: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
    let url = createPresignedUrl();
    console.log("url",url)
    console.log("ws",new WebSocket(url))
    //open up our WebSocket connection
    socket = new WebSocket(url);
    console.log("socket",socket)
    socket.binaryType = "arraybuffer";
    let output = '';

    const readStream = fs.createReadStream('./uploads/' + rawAudioChunk.name, { highWaterMark: 32 * 256 });
    readStream.setEncoding('binary');

    readStream.on('end', function() {
        console.log('finished reading');
        // send an empty frame so that Transcribe initiates a closure of the WebSocket after submitting all transcripts
        let emptyMessage = getAudioEventMessage(Buffer.from([]));
        let emptyBuffer = eventStreamMarshaller.marshall(emptyMessage);
        socket.send(emptyBuffer);
        fs.unlinkSync('./uploads/' + rawAudioChunk.name);
    });
    // when we get audio data from the read stream, send it to the WebSocket if possible
    socket.onopen = function() {
        readStream.on('data', function(chunk) {
            let binary = convertAudioToBinaryMessage(chunk);
            // socket.OPEN is a constant; compare it against readyState to test the connection
            if (socket.readyState === socket.OPEN)
                socket.send(binary);
        });
        // the audio stream is raw audio bytes. Transcribe expects PCM with additional metadata, encoded as binary
    }

    // handle inbound messages from Amazon Transcribe
    socket.onmessage = function (message) {
        // convert the binary event stream message to JSON
        let messageWrapper = eventStreamMarshaller.unmarshall(Buffer.from(message.data));
        let messageBody = JSON.parse(String.fromCharCode.apply(String, messageWrapper.body));
        console.log("results:.. ", messageBody);
        if (messageWrapper.headers[":message-type"].value === "event") {
            handleEventStreamMessage(messageBody);
        }
        else {
            transcribeException = true;
            showError(messageBody.Message);
        }
    }

    socket.onerror = function () {
        socketError = true;
        showError('WebSocket connection error. Try again.');
        
    };
    
    socket.onclose = function (closeEvent) {
                
        // the close event immediately follows the error event; only handle one.
        if (!socketError && !transcribeException) {
            if (closeEvent.code != 1000) {
                showError('Streaming Exception ' + closeEvent.reason);
            }
        }
    };


let handleEventStreamMessage = function (messageJson) {
    let results = messageJson.Transcript.Results;

    if (results.length > 0) {
        if (results[0].Alternatives.length > 0) {
            let transcript = results[0].Alternatives[0].Transcript;

            // fix encoding for accented characters
            transcript = decodeURIComponent(escape(transcript));

         console.log(transcript)
        }
    }
}

let closeSocket = function () {
    if (socket.readyState === socket.OPEN) {
        // send an empty frame so that Transcribe initiates a closure of the WebSocket after submitting all transcripts
        let emptyMessage = getAudioEventMessage(Buffer.from([]));
        let emptyBuffer = eventStreamMarshaller.marshall(emptyMessage);
        socket.send(emptyBuffer);
    }
}
        res.status(200).json({ status: 200, "transcript": transcription });
    } catch (error) {
        // log error
        logger.error('Issue in ', error);
    }
  })
}
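Empty result arrays from a file stream usually mean the bytes fed to downsampleBuffer() are not normalized audio samples: readStream.setEncoding('binary') yields latin1 strings, and a WAV file carries 16-bit integers plus a 44-byte RIFF header, while the demo's downsampler expects Float32Array values in the -1..1 range. A minimal sketch of that conversion (an illustration, not code from this repo; wavChunkToFloat32 is a hypothetical helper):

```javascript
// Hypothetical helper: convert a Buffer of 16-bit little-endian WAV
// samples into the Float32Array (-1..1) that downsampleBuffer() expects.
// Assumes the 44-byte RIFF header has already been skipped and that the
// read stream was NOT given setEncoding('binary'), so chunks are Buffers.
function wavChunkToFloat32(buf) {
  const samples = new Float32Array(Math.floor(buf.length / 2));
  for (let i = 0; i < samples.length; i++) {
    const int16 = buf.readInt16LE(i * 2);
    // normalize to the -1..1 range used by Web Audio sample buffers
    samples[i] = int16 < 0 ? int16 / 0x8000 : int16 / 0x7fff;
  }
  return samples;
}
```

With chunks converted this way, the existing downsample/pcmEncode path can be reused unchanged.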

Transcribe is dramatically slow

The text recognition is dramatically slow on the demo. It is far from real-time. Is there any way to make it closer to real-time transcribing?

Bug report: if the WebSocket fails to open or anomalously closes before being terminated, the sample will still try to send messages

var socket = new WebSocket(...)
if (socket.OPEN) {
// doStuff
}

will always evaluate to true. socket.OPEN is a constant that evaluates to 1 even when the socket is closed. To test correctly whether a socket is open, use:

var socket = new WebSocket(...)
if (socket.OPEN === socket.readyState) {
// doStuff
}

I'll submit a pull request with fixes.
See here for the appropriate API usage:
https://developer.mozilla.org/en-US/docs/Web/API/WebSocket
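The corrected check can be wrapped in a small guard so every call site stays safe (safeSend is a hypothetical name, not part of the sample):

```javascript
// Only send when the connection is actually open: readyState must equal
// the OPEN constant (1); socket.OPEN alone is always truthy.
function safeSend(socket, data) {
  if (socket.readyState === socket.OPEN) {
    socket.send(data);
    return true;
  }
  return false;
}
```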

Browserify issues

$ browserify main.js -o bundle.js
[BABEL] Note: The code generator has deoptimised the styling of amazon-transcribe-websocket-static/dist/main.js as it exceeds the max of 500KB.
Error: Cannot find module './audioUtils' from '/amazon-transcribe-websocket-static/dist'
    at /usr/local/lib/node_modules/browserify/node_modules/browser-resolve/node_modules/resolve/lib/async.js:55:21
    at load (/usr/local/lib/node_modules/browserify/node_modules/browser-resolve/node_modules/resolve/lib/async.js:69:43)
    at onex (/usr/local/lib/node_modules/browserify/node_modules/browser-resolve/node_modules/resolve/lib/async.js:92:31)
    at /usr/local/lib/node_modules/browserify/node_modules/browser-resolve/node_modules/resolve/lib/async.js:22:47
    at FSReqCallback.oncomplete (fs.js:166:21)

Did I do something wrong?

Store audio file into S3.

While live transcribe can we also store the audio files into S3 simultaneously? If yes, can you please point out where we need to configure bucket details in this library.
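The demo has no S3 wiring, but one possible approach (a sketch under assumptions; collectChunk and finishRecording are hypothetical names) is to keep a copy of each raw chunk as it is sent to Transcribe and upload the concatenation once the stream ends:

```javascript
// Accumulate a copy of every raw audio chunk that is streamed out.
const chunks = [];

function collectChunk(chunk) {
  chunks.push(Buffer.from(chunk)); // copy, so later mutation can't corrupt it
}

// Combine the chunks when streaming finishes; the caller can then upload
// the result, e.g. with the AWS SDK (assumption, not part of this repo):
//   new AWS.S3().upload({ Bucket: 'my-bucket', Key: 'rec.pcm', Body: audio })
function finishRecording() {
  return Buffer.concat(chunks);
}
```

Note that what you capture this way is the encoded audio you sent, so you may want to prepend a WAV header before storing it as a playable file.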

Not Working - Cannot read property 'getUserMedia' of undefined

It doesn't work straight away on either Chrome or Firefox.

I cloned the repo on an EC2 instance and followed the steps, and got the web server working.
Nothing happened on clicking the Start button. The URL is http, not https.
I figured out that it's about Chrome/Firefox security: they force SSL on WebSocket connections and don't allow insecure ones, so clients on the WebRTC playing side get an error.

Amazon Transcribe streaming with Node.js using websocket

I am working on a WhatsApp chatbot where I receive an audio file URL (OGG format) from WhatsApp. I fetch the buffer and upload the file to S3 (sample.ogg). Now I want to use AWS Transcribe streaming, so I create a read stream of the file and send it to Transcribe over a WebSocket, but I receive an empty response, or sometimes just an "Mhm mm mm" response. Can anyone tell me what is wrong in my code?

const express = require('express')
const app = express()
const fs = require('fs');
const crypto = require('crypto'); // to sign our pre-signed URL
const v4 = require('./aws-signature-v4'); // to generate our pre-signed URL
const marshaller = require("@aws-sdk/eventstream-marshaller"); // for converting binary event stream messages to and from JSON
const util_utf8_node = require("@aws-sdk/util-utf8-node");
var WebSocket = require('ws') //for opening a web socket
// our converter between binary event stream messages and JSON
const eventStreamMarshaller = new marshaller.EventStreamMarshaller(util_utf8_node.toUtf8, util_utf8_node.fromUtf8);

// our global variables for managing state
let languageCode;
let region = 'ap-south-1';
let sampleRate;
let inputSampleRate;
let transcription = "";
let socket;
let micStream;
let socketError = false;
let transcribeException = false;
// let languageCode = 'en-us'

app.listen(8081, (error, data) => {
    if (!error) {
        console.log('running at 8081----->>>>')
    }
})

let handleEventStreamMessage = function (messageJson) {
    let results = messageJson.Transcript.Results;

    if (results.length > 0) {
        if (results[0].Alternatives.length > 0) {
            let transcript = results[0].Alternatives[0].Transcript;

            // fix encoding for accented characters
            transcript = decodeURIComponent(escape(transcript));

            console.log(`Transcript is----->>${transcript}`)
        }
    }
}

function downsampleBuffer(buffer, inputSampleRate = 44100, outputSampleRate = 16000) {
    if (outputSampleRate === inputSampleRate) {
        return buffer;
    }

    var sampleRateRatio = inputSampleRate / outputSampleRate;
    var newLength = Math.round(buffer.length / sampleRateRatio);
    var result = new Float32Array(newLength);
    var offsetResult = 0;
    var offsetBuffer = 0;

    while (offsetResult < result.length) {
        var nextOffsetBuffer = Math.round((offsetResult + 1) * sampleRateRatio);

        var accum = 0,
            count = 0;

        for (var i = offsetBuffer; i < nextOffsetBuffer && i < buffer.length; i++) {
            accum += buffer[i];
            count++;
        }

        result[offsetResult] = accum / count;
        offsetResult++;
        offsetBuffer = nextOffsetBuffer;
    }

    return result;
}

function pcmEncode(input) {
    var offset = 0;
    var buffer = new ArrayBuffer(input.length * 2);
    var view = new DataView(buffer);
    for (var i = 0; i < input.length; i++, offset += 2) {
        var s = Math.max(-1, Math.min(1, input[i]));
        view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
    }
    return buffer;
}

function getAudioEventMessage(buffer) {
    // wrap the audio data in a JSON envelope
    return {
        headers: {
            ':message-type': {
                type: 'string',
                value: 'event'
            },
            ':event-type': {
                type: 'string',
                value: 'AudioEvent'
            }
        },
        body: buffer
    };
}

function convertAudioToBinaryMessage(raw) {
    if (raw == null)
        return;

    // downsample and convert the raw audio bytes to PCM
    let downsampledBuffer = downsampleBuffer(raw, inputSampleRate);
    let pcmEncodedBuffer = pcmEncode(downsampledBuffer);

    // add the right JSON headers and structure to the message
    let audioEventMessage = getAudioEventMessage(Buffer.from(pcmEncodedBuffer));

    // convert the JSON object + headers into a binary event stream message
    let binary = eventStreamMarshaller.marshall(audioEventMessage);

    return binary;
}

function createPresignedUrl() {
let endpoint = "transcribestreaming." + "us-east-1" + ".amazonaws.com:8443";

// get a preauthenticated URL that we can use to establish our WebSocket
return v4.createPresignedURL(
    'GET',
    endpoint,
    '/stream-transcription-websocket',
    'transcribe',
    crypto.createHash('sha256').update('', 'utf8').digest('hex'), {
        'key': <AWS_KEY>,
        'secret': <AWS_SECRET_KEY>,
        'protocol': 'wss',
        'expires': 15,
        'region': 'us-east-1',
        'query': "language-code=" + 'en-US' + "&media-encoding=pcm&sample-rate=" + 8000
    }
);

}

function showError(message) {
console.log("Error: ",message)
}

app.get('/convert', (req, res) => {
    var file = 'recorded.mp3'
    const eventStreamMarshaller = new marshaller.EventStreamMarshaller(util_utf8_node.toUtf8, util_utf8_node.fromUtf8);
    let url = createPresignedUrl();
    let socket = new WebSocket(url);
    socket.binaryType = "arraybuffer";
    let output = '';
    const readStream = fs.createReadStream(file, { highWaterMark: 32 * 256 })
    readStream.setEncoding('binary')
    //let sampleRate = 0;
    let inputSampleRate = 44100
    readStream.on('end', function() {
        console.log('finished reading----->>>>');
        // send an empty frame so that Transcribe initiates a closure of the WebSocket after submitting all transcripts
        let emptyMessage = getAudioEventMessage(Buffer.from([]));
        let emptyBuffer = eventStreamMarshaller.marshall(emptyMessage);
        socket.send(emptyBuffer);
    })

    // when we get audio data from the read stream, send it to the WebSocket if possible
    socket.onopen = function() {
        readStream.on('data', function(chunk) {
            let binary = convertAudioToBinaryMessage(chunk);
            if (socket.readyState === socket.OPEN) {
                console.log(`sending to streaming API------->>>>`)
                socket.send(binary);
            }
        });
        // the audio stream is raw audio bytes. Transcribe expects PCM with additional metadata, encoded as binary
    }


    socket.onerror = function () {
        socketError = true;
        showError('WebSocket connection error. Try again.');

    };

    // handle inbound messages from Amazon Transcribe
    socket.onmessage = function (message) {
        // convert the binary event stream message to JSON
        let messageWrapper = eventStreamMarshaller.unmarshall(Buffer.from(message.data));
        let messageBody = JSON.parse(String.fromCharCode.apply(String, messageWrapper.body));
        console.log("results:.. ", JSON.stringify(messageBody));
        if (messageWrapper.headers[":message-type"].value === "event") {
            handleEventStreamMessage(messageBody);
        }
        else {
            transcribeException = true;
            showError(messageBody.Message);
        }
    }

    let closeSocket = function () {
        if (socket.readyState === socket.OPEN) {
            // send an empty frame so that Transcribe initiates a closure of the WebSocket after submitting all transcripts
            let emptyMessage = getAudioEventMessage(Buffer.from([]));
            let emptyBuffer = eventStreamMarshaller.marshall(emptyMessage);
            socket.send(emptyBuffer);
        }
    }

})

sample rate for EN_GB

Firstly, thanks to Karan for such a beautiful demo. I look forward to wading through the code in more detail later!

This isn't Karan's fault: the AWS docs state throughout that 16000 Hz is allowed for EN_GB, but I have found that it accepts only 8000 Hz for that language code.

It's a very small point, and when the Transcribe team updates the capabilities for EN_GB to match the docs this will go away, but it's worth mentioning that anyone who implements this wonderful tutorial should expect an error from the WebSocket when using EN_GB with a sample rate above 8000 at this time. However, EN_US does an amazing job of recognising EN_GB speech.

function setLanguage() {
    languageCode = $('#language').find(':selected').val();
    if (languageCode == "en-US" || languageCode == "es-US")
        sampleRate = 44100;
    else if(languageCode == "en-GB")
        sampleRate = 8000; 
    else
        sampleRate = 16000;
}

not working on mobile browsers

This is working great on desktop browsers, but how can we get mic access on a mobile browser? I tried it on iOS and it did not work.

Getting "WebSocket connection error. Try again."

Getting this error, "WebSocket connection error. Try again.", when I click the stop button with a valid Access Key ID and Secret Key.

In chrome console, it shows, main.js:157 Uncaught DOMException: Failed to execute 'send' on 'WebSocket': Still in CONNECTING state.
at closeSocket

main.js:61 WebSocket connection to 'wss://transcribestreaming.ap-south-1.amazonaws.com:8443/stream-transcription-websocket?----&X-Amz-SignedHeaders=host&language-code=en-US&media-encoding=pcm&sample-rate=44100' failed: Error in connection establishment: net::ERR_NAME_NOT_RESOLVED

Handling credentials

Hi,

In order to prevent hardcoding keys into public code (as in the example), I'm trying to leverage unauthenticated Cognito sessions in a similar fashion to how you might do it for Polly and I've attached the appropriate policy to the unauthed IAM role. However, using the access key and secret from the credentials object gives me a The security token included in the request is invalid. error (despite it working for Polly)

Am I doing something wrong?

Streaming audio mic data to aws transcribe in node

I'm attempting to write a node application that transcribes audio from a microphone via AWS' streaming transcription service, it's heavily based off of the amazon-transcribe-websocket-static example. What I have so far can be found in this repository (it's small).

Unfortunately the above doesn't work. I believe there's a bug in taking the data provided by the microphone stream and transforming it before passing it to the writable transcriber stream, because I have proven that the other two components of the app work:

  1. I've written a piece of the app to pipe the mic to the speakers that proves that the mic stream works as expected.
  2. When sending requests over the WebSocket to the transcription service, it sends non-exceptional responses back, albeit empty, proving that the transcription service client works as expected.

As a side note, I'm not familiar with handling audio data and encoding (decoding?) it to PCM. I'm not even positive if what the mic-stream is giving me is PCM or not and if I need to decode from or encode to PCM before providing it to the transcription service. All of this is to say, I'm pretty sure the byte-handling is the issue.

Any help getting this sorted would be greatly appreciated.

Thanks,
Geoff
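One way to narrow down byte-handling issues like this (a debugging sketch, not code from either repo; pcmStats is a hypothetical helper) is to interpret each chunk as 16-bit little-endian PCM and inspect the sample range: normal speech should swing well inside ±32768, while constant zeros or values pinned at the extremes suggest the bytes are not 16-bit PCM after all.

```javascript
// Hypothetical debugging helper: interpret a Buffer as 16-bit LE PCM
// and report the min/max sample values seen in the chunk.
function pcmStats(buf) {
  let min = Infinity, max = -Infinity;
  for (let i = 0; i + 1 < buf.length; i += 2) {
    const s = buf.readInt16LE(i);
    if (s < min) min = s;
    if (s > max) max = s;
  }
  return { min, max };
}
```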

Attach ID to audio data?

Hi, I need to let my users insert text in a page containing several text fields.

They should be able to dictate inside any of them, and switch between them by focusing a different text field.

Is there any way to attach some information to the audio data sent to the AWS Transcribe service in order to know where to put the transcriptions when it's received?

What I'm trying to avoid is a user switching between text fields while talking, with the transcription ending up appended to the currently focused field rather than the one that was active when the user spoke that sentence.

What's the best approach to handle this case?
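As far as I know the Transcribe streaming API offers no way to attach custom metadata to audio events, so this has to be handled client-side. One possible sketch (assumed names, not a library feature): remember which field was focused when each audio segment started, and pair returned transcripts with segments in send order:

```javascript
// Updated from focus events in the page (assumption: one id per field).
let activeFieldId = null;
const pendingSegments = []; // queue of { fieldId } in send order

// Call when a new utterance/segment begins streaming.
function startSegment() {
  pendingSegments.push({ fieldId: activeFieldId });
}

// Results come back in order, so pair them FIFO with their segments.
function routeTranscript(transcript) {
  const segment = pendingSegments.shift();
  return { fieldId: segment ? segment.fieldId : activeFieldId, transcript };
}
```

In practice you would call startSegment() when silence detection (or a focus change) marks an utterance boundary, and append each routed final transcript to its field.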

Record Audio files too

Can I get recorded audio too using this?

Actually, I want to build an application where we provide a paragraph for the user to speak, record the user's audio, extract the text from that audio, and match the extracted text against our paragraph to check accuracy.

So will I get the recorded audio of user along with text?

Clock Drift

Hi,

When trying to get a presigned URL using this example on a system with an incorrect time you get an invalid signature error. It looks like this is something the correctClockSkew option would usually handle in the SDK. Do you happen to have some pseudo-code that would aid in manually implementing similar functionality?
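I don't have the SDK's exact code, but the usual shape of clock-skew correction is: read the Date header from any AWS response (an error response works), compute the offset against the local clock, and use the adjusted time when signing. A rough sketch with assumed names:

```javascript
// Offset between the server clock and the local clock, in milliseconds.
let clockOffsetMs = 0;

// Call this with the `Date` header value from any AWS HTTP response.
function updateClockOffset(serverDateHeader) {
  clockOffsetMs = new Date(serverDateHeader).getTime() - Date.now();
}

// Use the server-adjusted time when computing the SigV4 signing date.
function correctedSigningDate() {
  return new Date(Date.now() + clockOffsetMs);
}
```

After a signature error caused by skew, update the offset from the response's Date header and regenerate the presigned URL.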

Request has expired

The moment I run the transcription demo, I get the following error:
Request has expired
Please advise?

Websocket connection closed with "Streaming completed successfully"

Hi, are there any resources on the websocket closure reasons? As soon as I send the first bit of data the websocket is being closed with the reason "Streaming completed successfully".

It would make it easier to debug if I understood why that might be happening, is anyone able to share any insights?
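The close codes themselves are the standard RFC 6455 ones (1000 = normal closure); Transcribe puts its own explanation in the reason string. A small helper for surfacing both during debugging (describeClose is a hypothetical name):

```javascript
// Format a WebSocket close event for logging; 1000 is the normal-closure
// code defined by RFC 6455, anything else signals an abnormal close.
function describeClose(code, reason) {
  return code === 1000
    ? 'Closed normally: ' + reason
    : 'Abnormal close (' + code + '): ' + reason;
}

// usage sketch:
// socket.onclose = e => console.log(describeClose(e.code, e.reason));
```

One plausible cause of an immediate "Streaming completed successfully" closure is sending the empty end-of-stream audio frame before (or alongside) the first real audio, so that is worth checking first.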

Does it support speaker identification?

Sorry to open this as an issue. I had to reach out somehow.

My question is - does it support speaker identification for real-time socket-based transcribe service?

Thanks

Higher sample rate breaks stream

Hey @brandonmwest, thanks for the work here.

I've encountered a pretty strange bug on Mac here. If my Scarlett 2I20 is plugged in, ALL my media streams are reported as having a bitrate that is too high for streaming. I get results, but they are all empty. I need to unplug my Scarlett to get this demo to work, then it's fine.

This specific demo does not report an error, I was only able to diagnose this because it happens on the AWS portal demo as well.

Not sure what's causing this yet, but will update this issue as needed. My first suspicion is that the demo is not respecting the selected mediaStream, but will need more research.

Firefox can’t establish a connection to the server

I tried to set up this application following the README. I gathered my Access Key, Secret Access Key, and Session Token using the sts get-session-token command. The error I get is "Firefox can't establish a connection to the server". Beforehand I added the region "eu-central-1" to the list of regions, because I am only allowed to use this region and I need to use MFA. What could be the problem? Thanks!

Uncaught TypeError: tslib_1.__exportStar is not a function

When I try to load your script with the jQuery $.getScript() function:
$.getScript( 'https://transcribe-websockets.go-aws.com/dist/main.js' );
I get this error in the browser log:

index.js:4 Uncaught TypeError: tslib_1.__exportStar is not a function
at Object.8../EventStreamMarshaller (index.js:4)
at o (_prelude.js:1)
at _prelude.js:1
at Object. (main.js:4)
at Object.3../audioUtils (main.js:244)
at o (_prelude.js:1)
at r (_prelude.js:1)
at _prelude.js:1

Why does this happen? I don't have this issue with

<script src="https://transcribe-websockets.go-aws.com/dist/main.js"></script>

Which modules to "browserify"?

I have an existing React app I'd like to tinker with by including this demo. The README mentions that some of the modules/libraries used dictated the need to Browserify. Can you confirm which ones specifically? The AWS SDK seems to work fine with React, and there's a crypto-browserify version/fork of crypto.

Access to Transcription Job

Hi,

How can I access the transcription job or the JSON file generated by real-time Transcribe?
It is not stored in my Amazon Transcribe service (AWS console).

Thanks in Advance
