googleapis / nodejs-speech

This repository is deprecated. All of its content and history have been moved to googleapis/google-cloud-node.

Home Page: https://cloud.google.com/speech/

License: Apache License 2.0

nodejs machine-learning speech-to-text speech

nodejs-speech's Introduction

THIS REPOSITORY IS DEPRECATED. ALL OF ITS CONTENT AND HISTORY HAVE BEEN MOVED TO GOOGLE-CLOUD-NODE


Cloud Speech Client Library for Node.js

A comprehensive list of changes in each version may be found in the CHANGELOG.

Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.

Table of contents:

Quickstart

Before you begin

  1. Select or create a Cloud Platform project.
  2. Enable the Cloud Speech API.
  3. Set up authentication with a service account so you can access the API from your local workstation.
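
For example, the client libraries pick up a downloaded service-account key through an environment variable (a minimal sketch; the key path is hypothetical):

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"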

Installing the client library

npm install @google-cloud/speech

Using the client library

// Imports the Google Cloud client library
const speech = require('@google-cloud/speech');

// Creates a client
const client = new speech.SpeechClient();

async function quickstart() {
  // The path to the remote LINEAR16 file
  const gcsUri = 'gs://cloud-samples-data/speech/brooklyn_bridge.raw';

  // The audio file's encoding, sample rate in hertz, and BCP-47 language code
  const audio = {
    uri: gcsUri,
  };
  const config = {
    encoding: 'LINEAR16',
    sampleRateHertz: 16000,
    languageCode: 'en-US',
  };
  const request = {
    audio: audio,
    config: config,
  };

  // Detects speech in the audio file
  const [response] = await client.recognize(request);
  const transcription = response.results
    .map(result => result.alternatives[0].transcript)
    .join('\n');
  console.log(`Transcription: ${transcription}`);
}
quickstart();
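
Assuming the snippet above is saved as quickstart.js (the filename is arbitrary) and authentication is configured as described earlier, it can be run with:

node quickstart.js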

Samples

Samples are in the samples/ directory. Each sample's README.md has instructions for running its sample.

Sample: Quickstart. Source code: samples/ directory. Try it: Open in Cloud Shell.

The Cloud Speech Node.js Client API Reference documentation also contains samples.

Supported Node.js Versions

Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js. If you are using an end-of-life version of Node.js, we recommend that you update as soon as possible to an actively supported LTS version.

Google's client libraries support legacy versions of Node.js runtimes on a best-efforts basis with the following warnings:

  • Legacy versions are not tested in continuous integration.
  • Some security patches and features cannot be backported.
  • Dependencies cannot be kept up-to-date.

Client libraries targeting some end-of-life versions of Node.js are available, and can be installed through npm dist-tags. The dist-tags follow the naming convention legacy-(version). For example, npm install @google-cloud/speech@legacy-8 installs client libraries for versions compatible with Node.js 8.

Versioning

This library follows Semantic Versioning.

This library is considered to be stable. The code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against stable libraries are addressed with the highest priority.

More Information: Google Cloud Platform Launch Stages

Contributing

Contributions welcome! See the Contributing Guide.

Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, edit its template in the central templates directory.

License

Apache Version 2.0

See LICENSE

nodejs-speech's People

Contributors

alexander-fenster, b-loved-dreamer, bcoe, bradmiro, callmehiphop, crwilcox, danielruf, dpebot, fhinkel, galz10, gcf-owl-bot[bot], gguuss, greenkeeper[bot], happyhuman, jerjou, jkwlui, jmdobry, jmuk, justinbeckwith, lukesneeringer, munkhuushmgl, nirupa-kumar, release-please[bot], renovate-bot, renovate[bot], sofisl, stephenplusplus, summer-ji-eng, vijay-qlogic, yoshi-automation


nodejs-speech's Issues

An in-range update of sinon is breaking the build 🚨

☝️ Greenkeeper’s updated Terms of Service will come into effect on April 6th, 2018.

Version 4.4.3 of sinon was just published.

Branch: Build failing 🚨
Dependency: sinon
Current Version: 4.4.2
Type: devDependency

This version is covered by your current version range and after updating it in your project the build failed.

sinon is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • ci/circleci: node9 Your tests failed on CircleCI Details
  • ci/circleci: node8 Your tests passed on CircleCI! Details
  • ci/circleci: node6 Your tests passed on CircleCI! Details
  • ci/circleci: node4 Your tests passed on CircleCI! Details
  • continuous-integration/appveyor/branch AppVeyor build succeeded Details

Commits

The new version differs by 21 commits.

  • 6de1cbd Update docs/changelog.md and set new release id in docs/_config.yml
  • cecbc46 Add release documentation for v4.4.3
  • 17b052f 4.4.3
  • 5b01728 Update History.md and AUTHORS for new release
  • f01d847 Fix inconsistent newline usage %D
  • e9aa877 Fix missed switch from referee to @sinonjs/referee
  • a0e200e Add subdir eslintrc for mjs
  • 7af0579 remove unnecessary properties quoting
  • dc895fc debounce function call
  • d7fb7d5 Merge pull request #1715 from sinonjs/sinon-es6-module-detection
  • 51cdafe Add linting for ES Modules
  • 6959188 Add detection of ES Modules to spies and lots of tests
  • f6b89a1 Extract ES Module detection and improve error
  • 3ede6ee Throw meaningful error stubbing ECMAScript Module
  • b491a57 Replace referee dependency with @sinonjs/referee

There are 21 commits in total.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Can't make longRunningRecognize work

Environment details

  • OS: Windows 10 and Linux
  • Node.js version: 6.9.1
  • npm version: 3.10.8
  • @google-cloud/speech version: 1.0.1

Steps to reproduce

  1. Using:
    var config = {
        encoding: 'FLAC',
        languageCode: 'en-US',
        sampleRateHertz: 32000
    };
    // The name of the audio file to transcribe
    var audio = {
        uri : 'https://nofile.io/f/pDrSFMuSfVh/2f996778-cc3e-496f-a4fb-9e97bb34a040.flac',
    }
    var speechRequest = {
      config: config,
      audio: audio,
    };
    clientSpeech.longRunningRecognize(speechRequest)
        .then(function(transcript) {
            console.log("success transcript",transcript);
        }).catch(err => {
            console.error("transcript error", err);
        });
  2. Here is the output:
transcript error { Error: Request contains an invalid argument.
    at C:\wamp\www\video-recorder\node_modules\grpc\src\client.js:554:15 code: 3
, metadata: Metadata { _internal_repr: {} } }

When I test with a local file (less than 60 seconds) using recognize, it works.
NB: I posted the audio file on nofile.io, where you can download it, so as not to include its real location.

Thanks!
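
The https:// URI above is the likely cause of the INVALID_ARGUMENT error: RecognitionAudio.uri must be a Google Cloud Storage (gs://) URI, and audio from anywhere else has to be passed inline. A minimal sketch of the two accepted forms (bucket and file names hypothetical):

var fs = require('fs');

// Option 1: audio hosted in Google Cloud Storage
var audioFromGcs = {
    uri: 'gs://my-bucket/audio.flac'
};

// Option 2: inline content, base64-encoded (practical for shorter clips)
var audioInline = {
    content: fs.readFileSync('./audio.flac').toString('base64')
};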

Unable to capture audio from microphone on Cloud Speech API

Once node recognize.js listen is executed, it stops immediately, but no error message is displayed. The input should come from a microphone, and the main purpose is to transcribe the microphone audio using Node.js.

A. Details

  • OS: Windows 7
  • Node js version: 4.5.0
  • npm version: 2.15.9
  • @google-cloud/speech version: 1.0.0 (found in node_modules/@google-cloud/speech/package.json)

B. Steps to reproduce

  1. Create project in Google Cloud Console
  2. Enable Google Cloud Speech API service
  3. Create the service account
  4. Download the private key as JSON
  5. Download and install the Google Cloud SDK
  6. Clone the repository: https://github.com/googleapis/nodejs-speech
  7. Navigate in the Google Cloud SDK shell to the samples folder containing recognize.js.
  8. Install the required libraries with the command sudo apt-get install sox libsox-fmt-all.
  9. Run the following command to set the credentials: export GOOGLE_APPLICATION_CREDENTIALS="path/to/service_account.json"
  10. In recognize.js, try changing the recordProgram parameter value to sox; if that doesn't work, change it to rec or arecord.
  11. Execute the code with node recognize.js listen.
  12. It suddenly stops and only displays: Listening, press Ctrl+C to stop.

Thank you so much!

Save speech-to-text output locally using Node.js

I'm trying to replicate the code given at https://github.com/googleapis/nodejs-speech/blob/master/samples/recognize.js. There is no error when I run it locally, but I'm confused about where I can see the result that is created. Is there a way I can write the result to a file?

Here is the code.

const record = require('node-record-lpcm16');

// Imports the Google Cloud client library
const speech = require('@google-cloud/speech');

// Creates a client
const client = new speech.SpeechClient();

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
const encoding = 'LINEAR16';
const sampleRateHertz = 16000;
const languageCode = 'en-US';

const request = {
    config: {
        encoding: encoding,
        sampleRateHertz: sampleRateHertz,
        languageCode: languageCode,
    },
    interimResults: false, // If you want interim results, set this to true
};

// Create a recognize stream
const recognizeStream = client
    .streamingRecognize(request)
    .on('error', console.error)
    .on('data', data =>
        process.stdout.write(
            data.results[0] && data.results[0].alternatives[0] ?
            `Transcription: ${data.results[0].alternatives[0].transcript}\n` :
            `\n\nReached transcription time limit, press Ctrl+C\n`
        )
    );

// Start recording and send the microphone input to the Speech API
record
    .start({
        sampleRateHertz: sampleRateHertz,
        threshold: 0,
        // Other options, see https://www.npmjs.com/package/node-record-lpcm16#options
        verbose: false,
        recordProgram: 'sox', // Try also "arecord" or "sox"
        silence: '10.0',
    })
    .on('error', console.error)
    .pipe(recognizeStream);

console.log('Listening, press Ctrl+C to stop.');

This is very confusing :(. Please let me know how I can achieve this.

Thanks
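
One straightforward approach (a sketch, not part of the original sample): replace the process.stdout.write call with a write stream from Node's fs module so each transcript lands in a file:

const fs = require('fs');
const out = fs.createWriteStream('transcript.txt', { flags: 'a' }); // append mode

const recognizeStream = client
    .streamingRecognize(request)
    .on('error', console.error)
    .on('data', data => {
        if (data.results[0] && data.results[0].alternatives[0]) {
            out.write(`${data.results[0].alternatives[0].transcript}\n`);
        }
    });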

Error when trying to run recognize

TypeError: require(...).demand(...).command(...).command(...).command(...).command(...).command(...).command(...).command(...).command(...).options(...).example(...).example(...).example(...).example(...).wrap(...).recommendCommands is not a function
at Object.<anonymous> (/Users/ngfamily/Desktop/VOICE/recognize.js:581:4)
at Module._compile (module.js:652:30)
at Object.Module._extensions..js (module.js:663:10)
at Module.load (module.js:565:32)
at tryModuleLoad (module.js:505:12)
at Function.Module._load (module.js:497:3)
at Function.Module.runMain (module.js:693:10)
at startup (bootstrap_node.js:191:16)
at bootstrap_node.js:612:3

Error: Endpoint read failed

  • OS: CentOs 6.7
  • Node.js version: v6.12.0
  • npm version: 5.5.1
  • @google-cloud/speech version: ^1.1.0

Error:

events.js:160
      throw er; // Unhandled 'error' event
      ^

Error: Endpoint read failed
    at ClientDuplexStream._emitStatusIfDone (/home/nirav/project/node_modules/grpc/src/client.js:255:19)
    at ClientDuplexStream._receiveStatus (/home/nirav/project/node_modules/grpc/src/client.js:233:8)
    at /home/nirav/project/node_modules/grpc/src/client.js:757:12

I am converting streaming socket audio to text, but when the socket gets closed, it sometimes throws the above error and stops the application; I cannot even handle it in try/catch. Everything else works like a charm, except that it throws the above error randomly after the socket connection closes.

Below are the basic lines of code that get executed once a socket connection is accepted.

speech_to_text = new SpeechToText.SpeechClient({projectId: config.gce.projectId});
initSpeechToText = speech_to_text.streamingRecognize(self.recognizeStreamOpt).on.........
stream.pipe(initSpeechToText);

NOTE: I re-initialize initSpeechToText after 65 seconds, as Google only supports up to 65 seconds of continuous streaming.
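
A minimal sketch of that rotation pattern (illustrative only; client, request, and audioSource are assumed to exist, and the interval is set safely below the 65-second limit):

function startRotatingStream(client, request, audioSource) {
  let recognizeStream = null;

  const openStream = () => {
    recognizeStream = client.streamingRecognize(request)
      .on('error', err => console.error('recognize error:', err.message))
      .on('data', data => console.log(data.results));
    audioSource.pipe(recognizeStream);
  };

  openStream();

  // Swap in a fresh stream before the server-side duration limit is hit.
  setInterval(() => {
    audioSource.unpipe(recognizeStream);
    recognizeStream.end();
    openStream();
  }, 55 * 1000);
}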

Authenticating via API key not supported?

Hello!

It seems that authenticating with an API key is not supported in this client. But while the docs for nodejs-speech don't mention it as an option, the general google-cloud-node docs on authentication suggest you can authenticate with an API key for "any APIs that accept an API key." Following the pattern of passing a key attribute to the constructor (I tried with both @google-cloud/speech and the meta google-cloud package) gives Error: Could not load the default credentials. Browse to https://developers.google.com/accounts/docs/application-default-credentials for more information.

Are there any plans to make this possible for the speech client? Otherwise, perhaps that language should be adjusted.

Thanks!

Environment details

  • OS: macOS 10.13.3
  • Node.js version: 9.4.0
  • npm version: 5.6.0
  • @google-cloud/speech version: 1.0.1

Steps to reproduce

  1. Create project, enable Google Speech API, generate API key with no restrictions.
  2. Run code like
import speech from '@google-cloud/speech';
const sc = new speech.v1.SpeechClient({
  projectId: process.env.GCLOUD_PROJECT_ID,
  key: process.env.GCLOUD_API_KEY,
});
sc.getProjectId((err, str) => {
  console.log(err);
});

Poll for longRunningRecognize result from another process?

I am looking for a way to poll for the status of a longRunningRecognize() operation from another process.

The use case is processing very long audio files, where more often than not the polling within .promise() fails and the state (and thus the whole request) is lost. If I had the ability to poll for that status using some serialized state of the original request, I would still be able to retrieve the results.

In other words: I'd like to be able to poll for the status (and retrieve the eventual results) of a long running operation even if the process that started the operation has died.

Is that possible? Can somebody point me in the right direction?
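
Newer releases of @google-cloud/speech expose checkLongRunningRecognizeProgress(name), which rehydrates an operation from its server-side name; persisting that name makes cross-process polling possible. A hedged sketch (verify the method against the client version in use):

const speech = require('@google-cloud/speech');
const client = new speech.SpeechClient();

// Process A: start the operation and persist its name somewhere durable.
async function startJob(request) {
  const [operation] = await client.longRunningRecognize(request);
  return operation.name; // e.g. write this string to a file or database
}

// Process B (even after process A has died): poll using the saved name.
async function pollJob(name) {
  const operation = await client.checkLongRunningRecognizeProgress(name);
  console.log('done?', operation.done);
  if (operation.done) console.log(operation.result);
}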

client.recognize doesn't return a result or an error

Environment details

  • OS: OSX 10.12.6
  • Node.js version: v4.3.2
  • npm version: 2.14.12
  • @google-cloud/speech version: 0.11.0

Steps to reproduce

NOTE: In the example I am using keyFilePath but I also tried export GOOGLE_APPLICATION_CREDENTIALS=/path/to/file

Code

const audio = {
          content: base64AudioFileContents,
        };

        const client = new speech.SpeechClient({
          projectId: 'project-id',
          keyFilePath: 'path/to/file.json'
        });

        const config = {
          encoding: 'LINEAR16',
          sampleRateHertz: 16000,
          languageCode: 'en-US',
        };

        const request = {
          audio: audio,
          config: config
        };

        client
          .recognize(request)
          .then(data => {
            console.log(data);
            const response = data[0];
            const transcription = response.results
              .map(result => result.alternatives[0].transcript)
              .join('\n');
            
              console.log(data)
          })
          .catch(err => {
            // `error` was undefined here (the parameter is `err`), so the
            // handler itself threw a ReferenceError and masked the real failure
            console.log(err);
          });

Unhandled 'error' event crash

Environment details

  • OS: Debian 8.10
  • Node.js version: v8.10.0
  • npm version: 5.6.0
  • @google-cloud/speech version: 1.4.0
Target

Get continuous transcriptions from an audio stream whose length is undefined.

NOTE: I am aware of the quotas and limits for the speech recognition service.

Observations

As shown in the shared code, the streamingRecognize() write stream is re-generated on every data event which reports an error (typically: exceeded maximum allowed stream duration of 65 seconds).

After some time (usually less than 5 minutes) the following unhandled exception is thrown which stops the application completely:

events.js:183
      throw er; // Unhandled 'error' event
      ^

Error: 14 UNAVAILABLE: 502:Bad Gateway
    at createStatusError (/service/node_modules/grpc/src/client.js:64:15)
    at ClientDuplexStream._emitStatusIfDone (/service/node_modules/grpc/src/client.js:270:19)
    at ClientDuplexStream._receiveStatus (/service/node_modules/grpc/src/client.js:248:8)
    at /service/node_modules/grpc/src/client.js:804:12

The logs clearly point to grpc.

Questions/Concerns

My main question is: Is it actually possible to achieve continuous transcriptions of undefined audio lengths by using StreamingRecognize or any other ways provided by this service?

If there is a way to achieve this with streamingRecognize, how can the exposed error be avoided? Or can this be achieved some other way?

Thanks.

Code that reproduces the crash

const speech = require('@google-cloud/speech');

// Minimal logger stand-in so this snippet runs; the original presumably used
// a project-specific logger.
const logger = { debug: console.log, error: console.error };

class GoogleSpeech
{
	constructor({ languageCode = 'en-US' })
	{
		logger.debug('constructor()');

		// Google Speech client.
		this._client = new speech.SpeechClient();

		// Google Speech configuration request.
		this._request =
		{
			config : {
				encoding              : 'LINEAR16',
				sampleRateHertz       : 16000,
				enableWordTimeOffsets : true,
				languageCode
			},
			// 'true' to perform continuous recognition even if the user pauses speaking.
			singleUtterance : false,
			// 'true' to enable tentative hypotheses.
			interimResults  : true
		};

		// Plain audio readable stream.
		this._audioStream = null;
	}

	/**
	 * @param {Readable} audioStream
	 */
	start(audioStream)
	{
		logger.debug('start()');

		this._audioStream = audioStream;

		this._start();
	}

	stop()
	{
		logger.debug('stop()');
	}

	_start()
	{
		logger.debug('_start()');

		try
		{
			// Create a writable stream to which pipe the plain audio.
			this._recognizeStream = this._client.streamingRecognize(this._request);
		}
		catch (error)
		{
			logger.error('streamingRecognize() error: [%s]', error.message);

			return;
		}

		this._recognizeStream
			.on('error', (error) =>
			{
				logger.error('streamingRecognize() "error" event [%s]', error.message);
				this._audioStream.unpipe(this._recognizeStream);
			})
			.on('data', (data) =>
			{
				if (data.error)
					logger.error('streamingRecognize() "data" event error [%s]', data.error);

				else
					logger.debug(data.results[0].alternatives[0].transcript);
			})
			.on('unpipe', () =>
			{
				delete this._recognizeStream;

				this._start();
			});

		// Pipe the audio stream into the Speech API.
		this._audioStream.pipe(this._recognizeStream);
	}
}

Can't get the package to work on a Raspberry Pi Zero

I run into an "illegal instruction" error when requiring the package in a Node file. I have been able to use the library successfully on an RPi 3, but not on the RPi Zero. I believe this problem is likely related to the RPi Zero's different processor architecture (ARMv6).

Environment details

  • OS: Raspbian 9
  • Node.js version: 8.9.3, 8.0.0, or 7.10.1 (same error with each version)
  • npm version: 5.5.1
  • @google-cloud/speech version: 0.10.3 or 1.0.0 (same error with both)

Steps to reproduce

  • Install Node on an RPi Zero and run the demo sample.
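
A commonly suggested workaround for prebuilt-binary mismatches on ARMv6 (untested here, and specific to the native grpc dependency used at the time) is forcing grpc to compile from source instead of downloading a prebuilt binary:

npm install grpc --build-from-source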

isFinal missing on streamingRecognize

Sometimes the Speech API gets stuck when I say only one word using streaming recognize. The API recognizes the end of the sentence, as I correctly receive END_OF_SINGLE_UTTERANCE, but I never receive the transcription with isFinal=true.

This is a big problem for me, as I use isFinal to reload the API connection. I can reproduce the issue on both API v1 and v1p1beta1.

{ config:
   { encoding: 1,
     sampleRateHertz: 8000,
     languageCode: 'fr-FR',
     maxAlternatives: 0,
     profanityFilter: true },
  singleUtterance: true,
  interimResults: true }

long sentence:

{"results":[{"alternatives":[{"words":[],"transcript":"pour","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"Bonjour","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"bonjour est-ce","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"bonjour est-ce que","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"bonjour est-ce que ça","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"bonjour est-ce que ça m'a","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"bonjour est-ce que ça marche","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"bonjour est-ce","confidence":0}],"isFinal":false,"stability":0.8999999761581421},{"alternatives":[{"words":[],"transcript":" que ça marche","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"bonjour est-ce que","confidence":0}],"isFinal":false,"stability":0.8999999761581421},{"alternatives":[{"words":[],"transcript":" ça marche bien","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"bonjour est-ce que ça","confidence":0}],"isFinal":false,"stability":0.8999999761581421},{"alternatives":[{"words":[],"transcript":" marche bien","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"bonjour est-ce que ça marche","confidence":0}],"isFinal":false,"stability":0.8999999761581421},{"alternatives":[{"words":[],"transcript":" bien","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"bonjour est-ce que ça marche bien","confidence":0}],"isFinal":false,"stability":0.8999999761581421}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[],"error":null,"speechEventType":"END_OF_SINGLE_UTTERANCE"}
{"results":[{"alternatives":[{"words":[],"transcript":"bonjour est-ce que ça marche bien","confidence":0.9081912636756897}],"isFinal":true,"stability":0}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}

one word sentence:

{"results":[{"alternatives":[{"words":[],"transcript":"un","confidence":0}],"isFinal":false,"stability":0.009999999776482582}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[{"alternatives":[{"words":[],"transcript":"un","confidence":0}],"isFinal":false,"stability":0.8999999761581421}],"error":null,"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{"results":[],"error":null,"speechEventType":"END_OF_SINGLE_UTTERANCE"}
{"results":[],"error":{"details":[],"code":11,"message":"Exceeded maximum allowed stream duration of 65 seconds."},"speechEventType":"SPEECH_EVENT_UNSPECIFIED"}
{ Error: 11 OUT_OF_RANGE: Exceeded maximum allowed stream duration of 65 seconds.
    at createStatusError (node_modules/grpc/src/client.js:64:15)
    at ClientDuplexStream._emitStatusIfDone (node_modules/grpc/src/client.js:270:19)
    at ClientDuplexStream._receiveStatus (node_modules/grpc/src/client.js:248:8)
    at node_modules/grpc/src/client.js:804:12
  code: 11,
  metadata:
   Metadata {
     _internal_repr: { 'content-disposition': [Array], 'x-goog-trace-id': [Array] } },
  details: 'Exceeded maximum allowed stream duration of 65 seconds.' }

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on all branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please delete the greenkeeper/initial branch in this repository, and then remove and re-add this repository to the Greenkeeper App’s white list on Github. You'll find this list on your repo or organization’s settings page, under Installed GitHub Apps.

speechEventType: 'SPEECH_EVENT_UNSPECIFIED'

let options = {
  config: {
    encoding: 'LINEAR16',
    languageCode: 'en-US',
    sampleRateHertz: 16000,
    enableWordTimeOffsets: true,
    model: "default"
  },
  singleUtterance: true,
  interimResults: true,
  verbose: true
};

// handle client connections
server
.on('error', (error) => { console.log('Server error:' + error); })
.on('close', () => { console.log('Server closed'); })
.on('connection', (client) => {
  client
  .on('error', (error) => { console.log('Client error: ' + error); })
  .on('close', () => { console.log('Client closed.'); })
  .on('stream', (clientStream, meta) => {
    console.log('New Client: ' + JSON.stringify(meta));

    if (meta.type === 'speech') {
      handleSpeechRequest(client, clientStream, meta);
    } else {
      handleRandomUtteranceRequest(client);
    }
  });
});

function handleSpeechRequest(client, clientStream, meta) {
  debugger
  options.config.sampleRateHertz = meta.sampleRate;

  let speechStream = speechClient.streamingRecognize(options)
  .on('error', (data) => { handleGCSMessage(data, client, speechStream); })
  .on('data', (data) => { handleGCSMessage(data, client, speechStream); })
  .on('close', () => { client.close(); });

  clientStream.pipe(speechStream);
}

function handleRandomUtteranceRequest(client) {
  let data = getRandomSentence();
  console.log(data);

  try {
    client.send(data);
  } catch (ex) {
    console.log('Failed to send message back to client...Closed?');
  }
}

function handleGCSMessage(data, client, speechStream) {
  debugger;
  if (client && client.streams[0] &&
      client.streams[0].writable && !client.streams[0].destroyed) {
    try {
      console.log(data);

      client.send(data);
    } catch (ex) {
      console.log('Failed to send message back to client...Closed?');
    }
    if (data.error || data.Error) {
      try {
        speechStream.end();
        speechStream = null;
        client.close();
        client = null;
      } catch (ex) {
        console.log('ERROR closing the streams after error!');
      }
    }
  }
}

I am getting an error, please help me. The error is:

{ results:
   [ { alternatives: [Array],
       isFinal: false,
       stability: 0.8999999761581421 } ],
  error: null,
  speechEventType: 'SPEECH_EVENT_UNSPECIFIED' }

Make it work with Angular

Hi,

I installed the npm package:

npm install --save @google-cloud/speech

I'm curious to know whether it is possible to make it work with Angular 5, and how to do it.

I read an article about including a js library in an angular project:
https://hackernoon.com/how-to-use-javascript-libraries-in-angular-2-apps-ff274ba601af

But when I add this code:

import * as _ from '@google-cloud/speech';
@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'app';

  constructor() {
    const client = new _.speech.SpeechClient();
  }
}

With npm start I have this error:

ERROR in ./node_modules/google-gax/lib/operations_client.js
Module not found: Error: Can't resolve './operations_client_config' in 'GoogleSpeechTest\src\client\node_modules\google-gax\lib'
ERROR in ./node_modules/google-gax/index.js
Module not found: Error: Can't resolve './package' in 'GoogleSpeechTest\src\client\node_modules\google-gax'
ERROR in ./node_modules/@google-cloud/speech/src/v1/speech_client.js
Module not found: Error: Can't resolve './speech_client_config' in 'GoogleSpeechTest\src\client\node_modules@google-cloud\speech\src\v1'
ERROR in ./node_modules/@google-cloud/speech/src/v1p1beta1/speech_client.js
Module not found: Error: Can't resolve './speech_client_config' in 'GoogleSpeechTest\src\client\node_modules@google-cloud\speech\src\v1p1beta1'
ERROR in ./node_modules/gtoken/node_modules/mime/index.js
Module not found: Error: Can't resolve './types/other' in 'GoogleSpeechTest\src\client\node_modules\gtoken\node_modules\mime'
ERROR in ./node_modules/gtoken/node_modules/mime/index.js
Module not found: Error: Can't resolve './types/standard' in 'GoogleSpeechTest\src\client\node_modules\gtoken\node_modules\mime'
ERROR in ./node_modules/detect-libc/lib/detect-libc.js
Module not found: Error: Can't resolve 'child_process' in 'GoogleSpeechTest\src\client\node_modules\detect-libc\lib'
ERROR in ./node_modules/google-auth-library/build/src/auth/googleauth.js
Module not found: Error: Can't resolve 'child_process' in 'GoogleSpeechTest\src\client\node_modules\google-auth-library\build\src\auth'

An in-range update of eslint is breaking the build 🚨

☝️ Greenkeeper’s updated Terms of Service will come into effect on April 6th, 2018.

Version 4.19.0 of eslint was just published.

Branch: Build failing 🚨
Dependency: eslint
Current Version: 4.18.2
Type: devDependency

This version is covered by your current version range and after updating it in your project the build failed.

eslint is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/appveyor/branch Waiting for AppVeyor build to complete Details
  • ci/circleci: node8 Your tests passed on CircleCI! Details
  • ci/circleci: node9 Your tests passed on CircleCI! Details
  • ci/circleci: node6 Your tests passed on CircleCI! Details
  • ci/circleci: node4 Your tests passed on CircleCI! Details
  • ci/circleci: docs Your tests failed on CircleCI Details
  • ci/circleci: lint Your tests passed on CircleCI! Details

Release Notes v4.19.0
  • 55a1593 Update: consecutive option for one-var (fixes #4680) (#9994) (薛定谔的猫)
  • 8d3814e Fix: false positive about ES2018 RegExp enhancements (fixes #9893) (#10062) (Toru Nagashima)
  • 935f4e4 Docs: Clarify default ignoring of node_modules (#10092) (Matijs Brinkhuis)
  • 72ed3db Docs: Wrap Buffer() in backticks in no-buffer-constructor rule description (#10084) (Stephen Edgar)
  • 3aded2f Docs: Fix lodash typos, make spacing consistent (#10073) (Josh Smith)
  • e33bb64 Chore: enable no-param-reassign on ESLint codebase (#10065) (Teddy Katz)
  • 66a1e9a Docs: fix possible typo (#10060) (Vse Mozhet Byt)
  • 2e68be6 Update: give a node at least the indentation of its parent (fixes #9995) (#10054) (Teddy Katz)
  • 72ca5b3 Update: Correctly indent JSXText with trailing linebreaks (fixes #9878) (#10055) (Teddy Katz)
  • 2a4c838 Docs: Update ECMAScript versions in FAQ (#10047) (alberto)
Commits

The new version differs by 12 commits.

  • 4f595e8 4.19.0
  • 16fc59e Build: changelog update for 4.19.0
  • 55a1593 Update: consecutive option for one-var (fixes #4680) (#9994)
  • 8d3814e Fix: false positive about ES2018 RegExp enhancements (fixes #9893) (#10062)
  • 935f4e4 Docs: Clarify default ignoring of node_modules (#10092)
  • 72ed3db Docs: Wrap Buffer() in backticks in no-buffer-constructor rule description (#10084)
  • 3aded2f Docs: Fix lodash typos, make spacing consistent (#10073)
  • e33bb64 Chore: enable no-param-reassign on ESLint codebase (#10065)
  • 66a1e9a Docs: fix possible typo (#10060)
  • 2e68be6 Update: give a node at least the indentation of its parent (fixes #9995) (#10054)
  • 72ca5b3 Update: Correctly indent JSXText with trailing linebreaks (fixes #9878) (#10055)
  • 2a4c838 Docs: Update ECMAScript versions in FAQ (#10047)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Simplify examples in documentation

Hi,

Some examples (two, actually) in the repository still use the unnecessary projectId argument when instantiating a SpeechClient, which can be a bit confusing when starting to use the client.

I'm issuing a PR to propose a fix that instantiates SpeechClient without any argument, thus following the examples in samples/recognize.js and samples/recognize.v1p1beta1.js.

Also, the page at https://cloud.google.com/nodejs/docs/reference/speech/latest/ (linked in README.md) should be updated.

Thanks a lot for this great program, we're loving it!

How to get speakertag from longRunningRecognize?

Environment details

  • OS: Mac
  • Node.js version: v8.11.1
  • npm version: 5.6.0
  • @google-cloud/speech version: 1.5.0

Steps to reproduce

  1. Run any longRunningRecognize process
  2. In the response there is no information about speakertag

This is the code I'm using:

    const speech = require('@google-cloud/speech');

    const client = new speech.SpeechClient({
      projectId: Meteor.settings.private.googleCloud.projectId,
      keyFilename: getFilePath('google-cloud.json')
    });

	const config = {
      "enableWordTimeOffsets": true,
      "encoding": "WAV",
      "languageCode": "en-US",
      "sampleRateHertz": 44100,
      "model": "video"
    };

    const audio = {
      uri: "gs://my-project-name/jeff_bezos_1_mono.wav",
    };

    const options = {
      config: config,
      audio: audio,
    };

    client
      .longRunningRecognize(options)
      .then(data => {
        const operation = data[0];
        console.log('got a promise representation', data);

        const errorHandler = err => {
          console.log(err);
          throw(err)
        }
        const completeHandler = longRRResponse => {
          console.log('**** response ****');
          console.log(JSON.stringify(longRRResponse, null, 2));
        }
        const progressHandler = (metadata, apiResponse) => {
          console.log('progress ', metadata);
        }
        operation.on('error', errorHandler)
        operation.on('complete', completeHandler)
        operation.on('progress', progressHandler)
      })
      .catch(err => {
        console.error('ERROR:', err);
        fs.unlink(name);
      });

This is the response I got back:

{
  "results": [
    {
      "alternatives": [
        {
          "words": [
            {
              "startTime": {
                "nanos": 100000000
              },
              "endTime": {
                "nanos": 700000000
              },
              "word": "your"
            },
            .
            .
            .
          ],
          "transcript": "your goal is to be the largest online and you are retailer in the world beyond that what's the goal for our mission is Earth's most customer-centric company and I know what that mean I'll give you an example",
          "confidence": 0.9520494341850281
        }
      ]
    }
  ]
}

I did not find any information about speakerTag. How do I get the speakerTag information back?

In the Google API Explorer for recognize there is an option to select speakerTag in the fields section.


Thanks!
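
speakerTag is only populated when speaker diarization is enabled, which at the time of this issue required the v1p1beta1 API surface. A hedged sketch of the relevant config fields:

const speech = require('@google-cloud/speech');

// Diarization lived on the beta surface when this issue was filed.
const client = new speech.v1p1beta1.SpeechClient();

const config = {
  encoding: 'LINEAR16',
  languageCode: 'en-US',
  sampleRateHertz: 44100,
  enableSpeakerDiarization: true,
  diarizationSpeakerCount: 2, // hint for the expected number of speakers
};
// With these set, each word in the response carries a speakerTag field.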

grpc install fails: "node-pre-gyp: Permission denied" but for global install only?

Environment details

  • OS: Ubuntu 17.10 4.13.0-32-generic #35-Ubuntu SMP Thu Jan 25 09:13:46 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Node.js version: 8.9.4
  • npm version: 5.6.0
  • @google-cloud/speech version: 1.1.0

Steps to reproduce

  1. npm install -g @google-cloud/speech

Results below - this looks like the same issue as these:
grpc/grpc#13928
googleapis/gax-nodejs#173

Strangely, installing it locally (without -g) works, but with -g, I get the following:

> [email protected] install /root/.nvm/versions/node/v8.9.4/lib/node_modules/@google-cloud/speech/node_modules/grpc
> node-pre-gyp install --fallback-to-build --library=static_library

sh: 1: node-pre-gyp: Permission denied
npm ERR! file sh
npm ERR! code ELIFECYCLE
npm ERR! errno ENOENT
npm ERR! syscall spawn
npm ERR! [email protected] install: `node-pre-gyp install --fallback-to-build --library=static_library`
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2018-02-19T16_16_46_264Z-debug.log

2018-02-19T16_16_46_264Z-debug.log

Possibility to close stream manually

Hi folks,

I am using the streamingRecognize function to live stream audio. Is there any way to close the connection with the SDK? I've searched the docs and the code and couldn't find anything.

Cheers
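
streamingRecognize() returns an ordinary Node.js duplex stream, so the standard stream API applies rather than a dedicated SDK method; a minimal sketch:

const recognizeStream = client.streamingRecognize(request);
// ... pipe audio in ...
recognizeStream.end();     // half-closes the write side; pending results still flush
// or, to tear everything down immediately:
recognizeStream.destroy();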

I am having trouble installing the latest Google cloud client library for node.js

I am trying to install the Google Cloud client speech library for Node.js and am having trouble installing it. When I run npm install --save @google-cloud/speech, it can't find https://iojs.org/download/release/v1.7.9/iojs-v1.7.9.tar.gz, as shown in the screenshot below. If you go to this site (https://iojs.org/download/release/), apparently no v1.7.9 can be found there. The screenshot is from macOS, but I tried the same thing on Windows and it failed the same way.


Environment details

  • OS: macos 10.13.2
  • Node.js version: 8.9.1
  • npm version: 6.2.0
  • @google-cloud/speech version: can't install, don't know.

Steps to reproduce

  1. run: npm install --save @google-cloud/speech
  2. can't install: installation fails when installing https://iojs.org/download/release/v1.7.9/iojs-v1.7.9.tar.gz which is not found.

Following these steps will guarantee the quickest resolution possible.

Thanks!

Speech API speechContexts not recognising arrays

Copied from original issue: googleapis/google-cloud-node#2813

@CharlotteGore
April 4, 2018 11:21 AM

Despite passing an array to speechContext.phrases, I am getting the following error:

ERROR: TypeError: .google.cloud.speech.v1.RecognitionConfig.speechContexts: array expected
    at Type.RecognitionConfig$fromObject [as fromObject] (eval at Codegen (/redacted/node_modules/@protobufjs/codegen/index.js:50:33), <anonymous>:55:9)
    at Type.fromObject (/redacted/node_modules/protobufjs/src/type.js:538:25)
    at Type.LongRunningRecognizeRequest$fromObject [as fromObject] (eval at Codegen (/redacted/node_modules/@protobufjs/codegen/index.js:50:33), <anonymous>:10:21)
    at Type.fromObject (/redacted/node_modules/protobufjs/src/type.js:538:25)
    at serialize (/redacted/node_modules/grpc/src/protobuf_js_6_common.js:70:23)
    at ServiceClient.Client.makeUnaryRequest (/redacted/node_modules/grpc/src/client.js:544:17)
    at apply (/redacted/node_modules/lodash/lodash.js:499:17)
    at ServiceClient.wrapper (/redacted/node_modules/lodash/lodash.js:5356:16)
    at /redacted/node_modules/@google-cloud/speech/src/v1/speech_client.js:175:39
    at timeoutFunc (/redacted/node_modules/google-gax/lib/api_callable.js:171:12)

Environment details

  • OS: OSX v10.11.6
  • Node.js version: v8.11.1
  • npm version: 5.6.0
  • google-cloud-node version: 196.0.0

Steps to reproduce

The code is below; it's mostly copied straight from the documentation. I have other code generating the real array, but I tested with the simplest possible array and still get the same result.

const config = {
  enableWordTimeOffsets: true,
  encoding: 'FLAC',
  sampleRateHertz: '16000',
  languageCode: 'en-GB',
  speechContexts: {
    phrases: ['dog', 'cat']
  }
};

const audio = {
  uri: 'gs://bucket/audio.flac'
};

const request = {
  config,
  audio,
};

client
  .longRunningRecognize(request)
  .then(data => {
    const operation = data[0];
    // Get a Promise representation of the final result of the job
    return operation.promise();
  })
  .then(data => {
    const response = data[0];
    console.log(`${JSON.stringify(response, null, 2)}`);
  })
  .catch(err => {
    console.error('ERROR:', err);
  });
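
The error message describes the shape problem: in RecognitionConfig, speechContexts is a repeated field, i.e. an array of SpeechContext objects, each of which wraps its own phrases array. A sketch of the expected shape:

const config = {
  enableWordTimeOffsets: true,
  encoding: 'FLAC',
  sampleRateHertz: 16000,
  languageCode: 'en-GB',
  speechContexts: [              // an array of SpeechContext objects...
    { phrases: ['dog', 'cat'] }  // ...each with its own phrases array
  ]
};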

Support Quran rules

Hey, I would like to know whether this package supports recording a user speaking and reading the Quran, and checking whether it's correct or not.

Or checking whether what the user says matches text provided in Arabic?

thank you! :-)

Google speech api IsFinal Response is too slow

From @wassizafar786 on September 7, 2018 5:39

Hi, this is Wassi.

I am facing an issue: I am using a websocket to send a stream to a Node server and receive results, but the Google Cloud Speech API sends the isFinal result back very slowly.
Below is my client-side code:

  this.speechServerClient = new BinaryClient(environment.speechServerUrl)
            .on('error', this.onerror.bind(this))
            .on('open', () =>
            {
                // pass the sampleRate as a parameter to the server and get a reference to the communication stream.
                this.speechServerStream = this.speechServerClient.createStream({
                    type: 'speech',
                    sampleRate: this.audioContext.sampleRate
                });
            })
            .on('stream', (serverStream) =>
            {
                serverStream
                    .on('data', this.onresult.bind(this))
                    .on('error', this.onerror.bind(this))
                    .on('close', this.onerror.bind(this))
            });

And this is my server-side code:

var options = {
    config: {
        encoding: 'LINEAR16',
        languageCode: 'en-IN',
        sampleRateHertz: 16000,
    },
    singleUtterance: false,
    interimResults: true,
    verbose: true,
};
var speechClient = new Speech.SpeechClient({
    projectId: environment_1.environment.gCloudProjectId,
    keyFilename: 'myfile.json'
});


 var server = new binaryjs.BinaryServer({
        server: httpsServer,
    });
    server
        .on('error', function (error) { console.log('Server error:' + error); })
        .on('close', function () { console.log('Server closed'); })
        .on('connection', function (client) {
        client
            .on('error', function (error) { console.log('Client error: ' + error); })
            .on('close', function () {
            console.log('Client closed.');
        })
            .on('stream', function (clientStream, meta) {
            console.log('New Client: ' + JSON.stringify(meta));
            if (meta.type === 'speech') {
                handleSpeechRequest(client, clientStream, meta);
            }
            else {
                handleRandomUtteranceRequest(client);
            }
        });
    });
}
function handleSpeechRequest(client, clientStream, meta) {
    return __awaiter(this, void 0, void 0, function () {
        var speechStream;
        return __generator(this, function (_a) {
            switch (_a.label) {
                case 0:
                    options.config.sampleRateHertz = meta.sampleRate;
                    return [4 /*yield*/, speechClient.streamingRecognize(options)
                            .on('error', function (data) { handleGCSMessage(data, client, speechStream); })
                            .on('data', function (data) {
                            try {
                                handleGCSMessage(data, client, speechStream);
                                console.log("Transcription: " + data.results[0].alternatives[0].transcript);
                            }
                            catch (ex) {
                                console.log(ex);
                            }
                        })
                            .on('close', function () { client.close(); })];
                case 1:
                    speechStream = _a.sent();
                    clientStream.pipe(speechStream);
                    return [2 /*return*/];
            }
        });
    });
}

Please please tell me the solution

Copied from original issue: googleapis/google-cloud-node#2860

progressPercent is always 0

Copied from original issue: googleapis/google-cloud-node#2803

@Maqsim
February 24, 2018 9:57 AM

Environment details

  • OS: macOS 10.12.6
  • Node.js version: 8.9.3
  • npm version: 5.6.0
  • google-cloud-node version: 1.1.0

Steps to reproduce

  1. require @google-cloud/speech
  2. Set config:
const transcriptionRequestParams = {
    encoding: 'LINEAR16',
    profanityFilter: false,
    sampleRateHertz: 16000,
    enableWordTimeOffsets: true
  };
  3. Upload the file into a bucket and run longRunningRecognize:
return uploadToBucket(filePath)
    .then(bucketFile => launchAsyncRecognition(bucketFile, transcriptionRequestParams))
    .then(handleTranscriptions);

function launchAsyncRecognition(bucketFile, config) {
  const audio = { uri: googleBucketLink + '/' + bucketFile.name };
  const request = { config, audio };

  return speechClient.longRunningRecognize(request);
}
  4. Inside handleTranscriptions, add .on for progress:
  const operation = data[0];

  operation.on('progress', function (metadata, apiResponse) {
    console.log(metadata);
  });
  5. The console log is:
{ progressPercent: 0,
  startTime:
   { seconds: Long { low: 1519465382, high: 0, unsigned: false },
     nanos: 856653000 },
  lastUpdateTime:
   { seconds: Long { low: 1519465383, high: 0, unsigned: false },
     nanos: 354594000 } }

Thanks in advance for any help.

Google Cloud Speech streaming API(sample) doesn't support proxy environment

Does the Google Speech Streaming API(gRPC) support a network with a proxy server?

We tried this out and it didn't work in a proxy environment. It would be really good to document whether this is unsupported, or else add the steps required to get it working with a proxy server.

https://stackoverflow.com/questions/51250372/error-14-unavailable-connect-failed-with-google-speech-api

We tried this as well, but it didn't work.
https://medium.com/google-cloud/accessing-google-cloud-apis-though-a-proxy-fe46658b5f2a

Environment details

  • OS: Windows 10
  • Node.js version: v8.11.3
  • npm version: 6.1.0
  • @google-cloud/speech version: 1.5.0

Using the API with Authentication

Browsing the samples, authentication appears to be ignored, so we get errors like the one below:

Error: Unexpected error while acquiring application default credentials: Could not load the default credentials. Browse to https://developers.google.com/accounts/docs/application-default-credentials for more information.
    at GoogleAuth.<anonymous> (/Users/taf2/work/ctm-ai/node_modules/google-auth-library/build/src/auth/googleauth.js:235:31)
    at step (/Users/taf2/work/ctm-ai/node_modules/google-auth-library/build/src/auth/googleauth.js:47:23)
    at Object.next (/Users/taf2/work/ctm-ai/node_modules/google-auth-library/build/src/auth/googleauth.js:28:53)
    at fulfilled (/Users/taf2/work/ctm-ai/node_modules/google-auth-library/build/src/auth/googleauth.js:19:58)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:188:7)

It might be nice to include examples that show exactly how to pass the API keys into the code so people don't have to guess...


Before you begin
Select or create a Cloud Platform project.

Go to the projects page

Enable billing for your project.

Enable billing

Enable the Google Cloud Speech API.

Enable the API

Set up authentication with a service account so you can access the API from your local workstation.

from the README takes me to pages that don't explain how to use the Node.js API to authenticate...

Contrast that with examples from similar competing projects:

https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/TranscribeService.html
http://voicebase.readthedocs.io/en/v3/how-to-guides/hello-world.html#how-to-get-your-bearer-token
https://console.bluemix.net/docs/services/watson/getting-started-tokens.html#tokens-for-authentication

I just think a few one-liners making this very obvious would help more people use this API with less friction in getting started...

this page for example doesn't help:
https://cloud.google.com/docs/authentication/getting-started#auth-cloud-implicit-nodejs

// Imports the Google Cloud client library.
const Storage = require('@google-cloud/storage');

// Instantiates a client. If you don't specify credentials when constructing
// the client, the client library will look for credentials in the
// environment.
const storage = new Storage();

// Makes an authenticated API request.
storage
  .getBuckets()
  .then((results) => {
    const buckets = results[0];

    console.log('Buckets:');
    buckets.forEach((bucket) => {
      console.log(bucket.name);
    });
  })
  .catch((err) => {
    console.error('ERROR:', err);
  });

I don't see anything in there about where to place an API key.
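
For the record, the gRPC-based Speech client does not accept an API key; it expects a service-account key file or application-default credentials. A minimal sketch of passing credentials explicitly (project ID and path hypothetical):

const speech = require('@google-cloud/speech');

const client = new speech.SpeechClient({
  projectId: 'my-project-id',
  keyFilename: '/path/to/service-account.json', // a service-account key, not an API key
});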

npm install Error

I get an error when trying to npm install (tried the last two released versions of @google-cloud/speech):

verbose stack Error: ENOENT: no such file or directory, 
rename '/[...]/node_modules/@google-cloud/speech/node_modules/grpc/node_modules/abbrev' 
->
 '/[...]/node_modules/@google-cloud/speech/node_modules/grpc/node_modules/.abbrev.DELETE'

Other libs requiring grpc install and work fine.

enableWordTimeOffsets = true not working for streamingRecognize

The enableWordTimeOffsets option yields an empty "words" array, at least for streamingRecognize and at least for Russian.

Env:

  • OS: Linux ip-172-31-16-252 4.9.62-21.56.amzn1.x86_64 #1 SMP Thu Nov 16 05:37:08 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  • Node.js version: v7.2.1
  • npm version: 3.10.10
  • google-cloud-node version: @google-cloud/[email protected]

Code:

var recognizeStream = Speech.streamingRecognize(
    {
      config: {
        enableWordTimeOffsets: true,
        encoding: self.inputFormat.encoding,
        sampleRateHertz: self.inputFormat.sampleRate,
        languageCode: "ru-RU"
      },
      singleUtterance: false,
      interimResults: true
    }
  );

recognizeStream.on('data', (data)=>{...});

Delay in using alternative language codes

Environment details

  • OS: cenots 7
  • Node.js version: v6.11.3
  • npm version: 3.10.10
  • @google-cloud/speech version: ^2.0.0

Steps to reproduce

var streamingConfig = {
    config: {
        encoding: 'OGG_OPUS',
        sampleRateHertz: 16000,
        languageCode: 'ru-RU',
        profanityFilter: false,
        enableAutomaticPunctuation: true,
        maxAlternatives: 1,
        alternativeLanguageCodes: ['en-US']
    },
    singleUtterance: true,
    interimResults: true,
    verbose: true,
    timeout: 600
};

When I'm using alternative language codes in streamingRecognize, the partial results return with a long delay. All partial results are returned almost together with the last (isFinal: true) message.

Add Enums for configuration of encodings

The Speech API uses a few different constants in its config; for instance, a user needs to set an encoding. The Python version of this library provides enums for these, which makes encoding discoverability a bit easier and avoids magic strings in the code.

I think adding similar support to the nodejs-speech API would be an improvement.

Here is a link to the python code I reference: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/speech/google/cloud/speech_v1p1beta1/gapic/enums.py#L18

An in-range update of @google-cloud/nodejs-repo-tools is breaking the build 🚨

Version 2.2.1 of @google-cloud/nodejs-repo-tools was just published.

Branch: Build failing 🚨
Dependency: @google-cloud/nodejs-repo-tools
Current Version: 2.2.0
Type: devDependency

This version is covered by your current version range and after updating it in your project the build failed.

@google-cloud/nodejs-repo-tools is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • ci/circleci: node9 Your tests are queued behind your running builds Details
  • ci/circleci: node7 Your tests are queued behind your running builds Details
  • ci/circleci: node6 Your tests are queued behind your running builds Details
  • ci/circleci: node4 Your tests are queued behind your running builds Details
  • continuous-integration/appveyor/branch Waiting for AppVeyor build to complete Details
  • ci/circleci: node8 Your tests failed on CircleCI Details

Commits

The new version differs by 8 commits.

  • 6834ab2 2.2.1
  • 13b006d Replace missing links (#103)
  • 78d1f39 fix(package): update got to version 8.2.0 (#105)
  • ae9e035 chore(package): update eslint-plugin-node to version 6.0.0 (#100)
  • b36b6ec fix(package): update lodash to version 4.17.5 (#99)
  • 3f47971 chore(package): update eslint to version 4.18.1 (#104)
  • 635dbc7 chore(package): update eslint-plugin-prettier to version 2.6.0 (#97)
  • 1d30f4c fix(package): update sinon to version 4.2.2 (#96)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Complete event does not fire

Environment details

  • OS: Windows 10
  • Node.js version: 8.8.1
  • npm version: 5.4.2
  • @google-cloud/speech version: 1.1.0

Steps to reproduce

const [operation, initialApiResponse] = await this.client.longRunningRecognize(options);
console.log('initial response', initialApiResponse);
console.log('transcription in progress');

operation.on('complete', (result, metadata, finalApiResponse) => {
  console.log('complete');
});

operation.on('progress', (metadata, apiResponse) => {
  console.log('progress', metadata);
});

I give it a 1-minute audio file as input. It appears to start successfully and outputs two progress events:

progress LongRunningRecognizeMetadata {
  startTime:
   Timestamp {
     seconds: Long { low: 1519062704, high: 0, unsigned: false },
     nanos: 584955000 },
  lastUpdateTime:
   Timestamp {
     seconds: Long { low: 1519062704, high: 0, unsigned: false },
     nanos: 614602000 } }
progress LongRunningRecognizeMetadata {
  progressPercent: 100,
  startTime:
   Timestamp {
     seconds: Long { low: 1519062704, high: 0, unsigned: false },
     nanos: 584955000 },
  lastUpdateTime:
   Timestamp {
     seconds: Long { low: 1519062750, high: 0, unsigned: false },
     nanos: 762003000 } }

I would expect the complete event to be fired, since progress is at 100, but it isn't.
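
As a possible workaround, the operation object returned by longRunningRecognize also exposes a promise() helper (from google-gax) that resolves when the operation finishes, so the result can be awaited without relying on the complete event. A minimal sketch:

    const [operation] = await this.client.longRunningRecognize(options);
    console.log('transcription in progress');

    // promise() resolves with [response, metadata, finalApiResponse] once done.
    const [response] = await operation.promise();
    const transcription = response.results
      .map(result => result.alternatives[0].transcript)
      .join('\n');
    console.log('complete', transcription);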

Feature request: Variables

It would be nice if you could specify variables in utterances the same way Amazon does it.

From the Alexa docs:

"You can include slot names in curly brackets {} as variables in sample utterances (e.g. "order me a {size} pizza")."

I haven't been able to find any info on this, either on Google or in any open/closed issue. Has this been discussed somewhere (link?), or is this feature already on the roadmap?

Either way, it would be interesting to hear where the discussions are at.

More info:
https://developer.amazon.com/docs/custom-skills/create-intents-utterances-and-slots.html#identify-slots

Google speech API response timeout

Hi,

For certain audio inputs, the Google Speech API doesn't return a proper response; the request times out. To reproduce the issue, pass an empty audio file. Is there any workaround?
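
One possible client-side workaround is to guard against empty input before calling the API at all. A minimal sketch, assuming a local file and a hypothetical minimum-size threshold:

    const fs = require('fs');

    // Hypothetical guard: treat files below a minimum size as empty.
    function isLikelyEmpty(filePath, minBytes = 1024) {
      return fs.statSync(filePath).size < minBytes;
    }

    if (isLikelyEmpty('audio.raw')) {
      console.warn('Audio file is empty or too short; skipping recognize call.');
    }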

Thanks

Transcription not printed

Environment details

  • OS: windows 7
  • Node.js version: v9.2.1
  • npm version: 5.5.1
  • @google-cloud/speech version: Latest

Steps to reproduce

  1. Executed node recognize listen and got the following output:

D:\speech\sample\nodejs-speech\samples>node recognize listen
Recording with sample rate 16000...
Listening, press Ctrl+C to stop.
Recording 4163 bytes
Recording 1385 bytes
End Recording: 64.330ms
I expected a transcription, but none was printed.

progressPercent is always 0

Copied from original issue: googleapis/google-cloud-node#2803

@Maqsim
February 24, 2018 9:57 AM

Environment details

  • OS: macOS 10.12.6
  • Node.js version: 8.9.3
  • npm version: 5.6.0
  • google-cloud-node version: 1.1.0

Steps to reproduce

  1. Require google-cloud.
  2. Set the config:

const transcriptionRequestParams = {
  encoding: 'LINEAR16',
  profanityFilter: false,
  sampleRateHertz: 16000,
  enableWordTimeOffsets: true
};

  3. Upload the file into a bucket and run longRunningRecognize:

return uploadToBucket(filePath)
  .then(bucketFile => launchAsyncRecognition(bucketFile, transcriptionRequestParams))
  .then(handleTranscriptions);

function launchAsyncRecognition(bucketFile, config) {
  const audio = { uri: googleBucketLink + '/' + bucketFile.name };
  const request = { config, audio };

  return speechClient.longRunningRecognize(request);
}

  4. Inside handleTranscriptions, add an .on handler for progress:

const operation = data[0];

operation.on('progress', function (metadata, apiResponse) {
  console.log(metadata);
});

  5. The console log is:

{ progressPercent: 0,
  startTime:
   { seconds: Long { low: 1519465382, high: 0, unsigned: false },
     nanos: 856653000 },
  lastUpdateTime:
   { seconds: Long { low: 1519465383, high: 0, unsigned: false },
     nanos: 354594000 } }

Thanks in advance for any help.

Poor performance on streamingRecognize

Environment details

  • OS: (docker: google/cloud-sdk:latest) Debian based
  • Node.js version: v8.9.4
  • npm version: 5.6.0
  • @google-cloud/speech version: @google-cloud/[email protected]

The issue:

I had Google Speech working relatively well 3-6 months ago. I hadn't used it for a while, since I built it for a demonstration only.

A few days ago I noticed it had stopped working, so I did the usual upgrading of libraries, Docker image, npm, etc. Now, even when it works, it takes between 40 seconds and 1 minute to return any text.

My architecture is as follows:

  • Client: browser; I stream audio to our server using socket.io.

    stream1 = ss.createStream();

    navigator.getUserMedia(
      {
        audio: {
          mandatory: {
            googEchoCancellation: 'false',
            googAutoGainControl: 'false',
            googNoiseSuppression: 'false',
            googHighpassFilter: 'false'
          },
          optional: []
        }
      },
      function (mediaStream) {
        audioContext = new AudioContext();
        streamSource = audioContext.createMediaStreamSource(mediaStream);
        chunkSize = 4096;
        // Note: onaudioprocess needs at least one output channel in most browsers.
        scriptProcessor = audioContext.createScriptProcessor(chunkSize, 1, 1);

        ss(socket).emit('client-stream-request', stream1, {
          sampleRate: audioContext.sampleRate,
          userId: that.config.userId,
          languageCode: that.config.languageCode
        });

        // Convert each Float32 chunk to Int16 PCM and write it to the socket stream.
        scriptProcessor.onaudioprocess = function (event) {
          var input = event.inputBuffer.getChannelData(0);
          stream1.write(new ss.Buffer(convertFloat32ToInt16(input)));
        };

        scriptProcessor.connect(audioContext.destination);
        streamSource.connect(scriptProcessor);
      },
      function (err) {
        that.ui.setState('notrecording');
      }
    );

    function convertFloat32ToInt16(buffer) {
      var l = buffer.length;
      var buf = new Int16Array(l);

      while (l--) {
        buf[l] = Math.min(1, buffer[l]) * 0x7fff;
      }

      return buf.buffer;
    }

  • Server: Node.js in Docker, with nginx as a reverse proxy

    var speech = require('@google-cloud/speech'),
      speechClient = new speech.v1.SpeechClient({
        projectId: config.google.auth.projectId
      });

    request = {
      config: {
        encoding: 'LINEAR16',
        sampleRateHertz: sampleRate, // this comes from the browser
        languageCode: languageCode
      },
      singleUtterance: false,
      interimResults: false
    };

    speechClient
      .streamingRecognize(request)
      .on('error', function (e) {
        console.log(e);
      })
      .on('data', function (data) {
        // Forward the recognized transcript to the browser.
        var text = data.results[0].alternatives[0].transcript;
        socket.emit('recognized', text);
      });
    

The errors in the console appear because, while testing, I let the stream expire every time.

Any ideas?

How do I force streamingRecognize to error when network is unavailable?

I'm struggling to get streamingRecognize to throw an error when the network is unavailable.

Right now it seems to wait for the full "deadline", which appears to be 1000 seconds, before it throws the DEADLINE_EXCEEDED error.

I imagine there could be an option to shorten the "deadline", but this would not be a full solution because I would like to get the UNAVAILABLE error (or the expected no network error), so it can be handled appropriately.

My implementation of streamingRecognize looks like this.

    // this code lives in a class;
    this.speechClient = new speech.v1p1beta1.SpeechClient({keyFilename: path.join(__dirname, 'keyfile.json')});

    const AUDIO_CONFIG = {
      encoding: 'LINEAR16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    };

    let request = {
      config: AUDIO_CONFIG,
      interimResults: true,
    };

    this.recognizeStream = this.speechClient
      .streamingRecognize(request)
      .on('error', (err) => {
        // not seeing the UNAVAILABLE error
        this.logger.error(`recognize error`, err);
      })
      .on('data', (data) => {
        // do something with the data
      })

    inputStream.pipe(this.recognizeStream);

Environment details

  • OS:
  • Node.js version: v6.9.5
  • npm version: 3.10.10
  • @google-cloud/speech version: 1.4.0

Steps to reproduce

  1. disable internet connection (e.g. disable Wi-Fi)
  2. invoke a previously working implementation of streamingRecognize
  3. observe results (no UNAVAILABLE error; DEADLINE_EXCEEDED error after 1000 seconds)
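
One possible mitigation is a manual watchdog timer that destroys the stream when no data arrives in time; destroying a stream with an error emits 'error' (standard Node.js stream behavior), so the existing handler runs. A sketch, with the timeout value chosen arbitrarily:

    // Hypothetical watchdog: fail fast instead of waiting for DEADLINE_EXCEEDED.
    const WATCHDOG_MS = 10 * 1000;

    const watchdog = setTimeout(() => {
      // destroy(err) emits 'error' on the stream, so the handler above fires.
      this.recognizeStream.destroy(
        new Error('No response from the Speech API; network may be unavailable')
      );
    }, WATCHDOG_MS);

    // Any response cancels the watchdog.
    this.recognizeStream.on('data', () => clearTimeout(watchdog));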

API to turn text into MP3?

What API should I use to transform text into MP3? I'm using Firebase. I'm already transforming images into text; now I want to turn text into audio. Here is my current code (see the sketch after it):

// Copyright 2018 Google Inc.
//
//  Licensed under the Apache License, Version 2.0 (the "License");
//  you may not use this file except in compliance with the License.
//  You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
//  Unless required by applicable law or agreed to in writing, software
//  distributed under the License is distributed on an "AS IS" BASIS,
//  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
//  See the License for the specific language governing permissions and
//  limitations under the License.

// Firebase
const admin = require('firebase-admin');
//admin.initializeApp(functions.config().firebase);

// Cloud Vision
const vision = require('@google-cloud/vision');
const visionClient =  new vision.ImageAnnotatorClient();
const bucketName = 'tesla-369.appspot.com';

//transleter
const functions = require('firebase-functions');
const Speech = require('@google-cloud/speech');
const speech = Speech({keyFilename: "service-account-credentials.json"});
const Translate = require('@google-cloud/translate');
const translate = Translate({keyFilename: "service-account-credentials.json"});
const Encoding = Speech.v1.types.RecognitionConfig.AudioEncoding;
const Firestore = require('@google-cloud/firestore');
const getLanguageWithoutLocale = require("./utils").getLanguageWithoutLocale;

const db = new Firestore();

/*
functions.storage.object() to detect object changes in the default storage bucket.
functions.storage.bucket('bucketName').object() to detect object changes in a specific bucket.
const object = event.data; // The Storage object.
const fileBucket = object.bucket; // The Storage bucket that contains the file.
const filePath = object.name; // File path in the bucket.
const contentType = object.contentType; // File content type.
const resourceState = object.resourceState; // The resourceState is 'exists' or 'not_exists' (for file/folder deletions).
const metageneration = object.metageneration; // Number of times metadata has been generated. New objects have a value of 1.
*/

exports.tesla369OCRImagem = functions.storage.bucket(bucketName).object().onChange( async event => {

    if (event.data.resourceState == 'not_exists') return false;


    const object = event.data;
    const filePath = object.name;   

    const imageUri = `gs://${bucketName}/${filePath}`;

    const docId = filePath.split('.jpg')[0];

    const docRef = admin.firestore().collection('photos').doc(docId);
    // If a .png comes in, do nothing
    if (filePath.endsWith('.png')) return false;
    // If a .pdf comes in, do nothing
    if (filePath.endsWith('.pdf')) return false;

    // Text Extraction
    const textRequest = await visionClient.documentTextDetection(imageUri)
    const fullText = textRequest[0].textAnnotations[0]
    const text =  fullText ? fullText.description : null

    // Web Detection
    const webRequest = await visionClient.webDetection(imageUri)
    const web = webRequest[0].webDetection

    // Faces    
    const facesRequest = await visionClient.faceDetection(imageUri)
    const faces = facesRequest[0].faceAnnotations

    // Landmarks
    const landmarksRequest = await visionClient.landmarkDetection(imageUri)
    const landmarks = landmarksRequest[0].landmarkAnnotations
    
    // Save to Firestore
    const data = { text, web, faces, landmarks }
    return docRef.set(data)

});

// Listen for any change (onCreate, onUpdate, onDelete) on document `uploadId` in collection `uploads`
// Wildcard: {uploadId}
exports.onUploadFS = functions.firestore.document("/uploads/{uploadId}").onWrite((event) => {
        let data = event.data.data();
        let language = data.language ? data.language : "en";
        let sampleRate = data.sampleRate ? parseInt(data.sampleRate, 10) : 16000;
        let encoding = data.encoding == "FLAC" ? Encoding.FLAC : Encoding.LINEAR16;

        const request = {
            config: {
                // The shorthand keys must match the variables defined above.
                languageCode: language,
                sampleRateHertz: sampleRate,
                encoding
            },
            //audio: { uri : `gs://${process.env.GCP_PROJECT}.appspot.com/${data.fullPath}` }
            audio: { uri : `gs://tesla-369.appspot.com/${data.fullPath}` }
        };

        return speech.recognize(request).then((response) => {
            let transcript = response[0].results[0].alternatives[0].transcript;
            return db.collection("transcripts").doc(event.params.uploadId).set({text: transcript, language: language});
        });
    });

exports.onTranscriptFS = functions.firestore
    .document("/transcripts/{transcriptId}")
    .onWrite((event) => {
        let value = event.data.data();
        let transcriptId = event.params.transcriptId;
        let text = value.text ? value.text : value;

        const languages = ["en", "es", "pt", "de", "ja", "hi", "nl", "fr", "pl"];

        const from = value.language ? getLanguageWithoutLocale(value.language) : "en";

          let promises = languages.map(to => {
            if (from == to) {
                return db.collection("translations").doc(transcriptId).set({[to]: {text: text, language: from}}, {merge: true});
            } else {
                // Call the Google Cloud Platform Translate API
                return translate.translate(text, {
                    from,
                    to
                }).then(result => {
                    // Write the translation to the database
                    let translation = result[0];
                    return db.collection("translations").doc(transcriptId).set({[to]: {text: translation, language: to}}, {merge: true});
                });
            }
        });
        // The per-language writes above already persist everything; just wait for them.
        return Promise.all(promises);
    });
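
On the question itself: turning text into an MP3 is handled by the separate Cloud Text-to-Speech API, not by this Speech-to-Text library. A minimal sketch, assuming the @google-cloud/text-to-speech package:

    const textToSpeech = require('@google-cloud/text-to-speech');
    const fs = require('fs');

    const ttsClient = new textToSpeech.TextToSpeechClient();

    async function synthesizeToMp3(text) {
      const [response] = await ttsClient.synthesizeSpeech({
        input: {text},
        voice: {languageCode: 'en-US', ssmlGender: 'NEUTRAL'},
        audioConfig: {audioEncoding: 'MP3'},
      });
      // response.audioContent holds the binary MP3 payload.
      fs.writeFileSync('output.mp3', response.audioContent, 'binary');
    }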

How to stop streaming recognize?

I am sending AudioContent from the frontend to the backend through a WebSocket. I use streaming recognition, and I want to stop streaming once a result has isFinal: true. I couldn't stop it, so the server eventually goes down because of a timeout.

  • OS: Ubuntu 18.04 LTS
  • Node.js version: v10.7.0
  • npm version: 6.1.0
  • @google-cloud/speech version: 2.0.0

How can I stop the stream to Google Speech?
Thank you!
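
A minimal sketch of one way to do this, assuming an audioSource readable stream is piped into the recognize stream: unpipe and end the stream once a final result arrives.

    const recognizeStream = client
      .streamingRecognize(request)
      .on('error', console.error)
      .on('data', data => {
        const result = data.results[0];
        if (result && result.isFinal) {
          // Stop feeding audio, then close the gRPC stream gracefully.
          audioSource.unpipe(recognizeStream);
          recognizeStream.end();
          // recognizeStream.destroy() would tear it down immediately instead.
        }
      });

    audioSource.pipe(recognizeStream);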

Authentication issue

Can you tell me how to set up authentication with a service account so I can access the API from my local workstation? I am using dynamic service accounts.

I followed this link: https://cloud.google.com/docs/authentication/getting-started
Here I get a list of my project names.

When I try to detect my intent, it says permission denied (error code 7).

I have already enabled the Dialogflow API in my Google library and added permissions to the service account.

Can anyone share source code for authentication?

Thanks!
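
For reference, a minimal sketch of explicit service-account authentication (the key path is a placeholder, and the service account still needs the appropriate IAM role for the API being called):

    const speech = require('@google-cloud/speech');

    // Option 1: point the client at a key file explicitly.
    const client = new speech.SpeechClient({
      keyFilename: '/path/to/service-account-key.json',
    });

    // Option 2: set GOOGLE_APPLICATION_CREDENTIALS in the environment
    // (export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json)
    // and use the zero-argument constructor: new speech.SpeechClient().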

npm install @google-cloud/speech fails

When installing via NPM or Yarn I'm getting:

WARN notice [SECURITY] protobufjs has the following vulnerability: 1 moderate. Go here 
for more details: https://nodesecurity.io/advisories?search=protobufjs&version=5.0.3 - 
Run `npm i npm@latest -g` to upgrade your npm version, and then `npm audit` to get more info.
WARN tar ENOENT: no such file or directory, open '/usr/local/lib/node_modules/.staging/merge2-be1dfff2/package.json'
WARN tar ENOENT: no such file or directory, open '/usr/local/lib/node_modules/.staging/merge2-be1dfff2/index.js'
WARN tar ENOENT: no such file or directory, open '/usr/local/lib/node_modules/.staging/merge2-be1dfff2/index.mjs'
WARN tar ENOENT: no such file or directory, open '/usr/local/lib/node_modules/.staging/merge2-be1dfff2/LICENSE'
WARN tar ENOENT: no such file or directory, open '/usr/local/lib/node_modules/.staging/merge2-be1dfff2/README.md'
WARN tar ENOENT: no such file or directory, open '/usr/local/lib/node_modules/.staging/grpc-167d0fd9/deps/grpc/src/boringssl/err_data.c'
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/@google-cloud/speech/node_modules/ecc-jsbn):
npm WARN enoent SKIPPING OPTIONAL DEPENDENCY: ENOENT: Cannot cd into '/usr/local/lib/node_modules/.staging/ecc-jsbn-13c71c3d'

npm ERR! code E404
npm ERR! 404 Not Found: protobufjs@https://registry.npmjs.org/protobufjs/-/protobufjs-6.8.6.tgz

I'm able to replicate this on multiple machines and environments.

Here's a sample replication of the issue: https://codesandbox.io/s/9lwjzolw3w


Google RPC add-on is building into an incorrectly named folder for Electron

This has to do with electron-speech-recognition. The build ultimately works properly, but the built binary folder for @google-cloud/speech must be renamed as shown here:
https://github.com/mattcollier/electron-speech-recognition/blob/master/rename-grpc-binary.sh

I don't know if that's a problem with the grpc node-gyp configuration, an issue with electron-rebuild or what. Can anyone point me in the right direction?

Environment details

  • OS: Debian 9
  • Node.js version: Electron 1.6 and 1.7
  • npm version: 5.5.1
  • @google-cloud/speech version: 1.0.0

See also: electron/rebuild#217

Fix failing samples test

Samples tests have been failing for a while now - let's fix them!

  ✔ MicrophoneStream › MicrophoneStream.js Should load and display Yaaaarghs(!) correctly (189ms)
  ✔ betaFeatures › should run word Level Confience on a local file (2s)
  ✖ betaFeatures › should run word level confidence on a GCS bucket 
  ✔ betaFeatures › should transcribe multi-language on a local file (3.4s)
  ✔ betaFeatures › should transcribe multi-language on a GCS bucket (3.8s)
  ✔ betaFeatures › should run multi channel transcription on GCS file (6.6s)
  ✔ betaFeatures › should run speech diarization on a GCS file (7.9s)
  ✔ betaFeatures › should run multi channel transcription on a local file (8.5s)
  ✔ betaFeatures › should run speech diarization on a local file (9.2s)
  ✔ recognize.v1p1beta1 › should run sync recognize with enhanced model (9s)
  ✔ recognize.v1p1beta1 › should run sync recognize with metadata (9.4s)
  ✔ recognize.v1p1beta1 › should run sync recognize with auto punctuation (9.5s)
  ✖ recognize › should run streaming recognize Rejected promise returned by test
  ✔ recognize › should run sync recognize (1.9s)
  ✔ recognize › should run sync recognize with word time offset (1.9s)
  ✔ recognize › should run sync recognize on a GCS file (2s)
  ✔ recognize › should run async recognize on a GCS file with word time offset (2.3s)
  ✔ recognize › should run async recognize on a local file (2.5s)
  ✔ recognize › should run async recognize on a GCS file (2.5s)
  ✔ quickstart › should run quickstart (1.7s)
  ✔ recognize.v1p1beta1 › should run sync recognize with model selection (16s)
  ✔ recognize.v1p1beta1 › should run sync recognize on a GCS file with model selection (16.6s)

  2 tests failed

  betaFeatures › should run word level confidence on a GCS bucket

  /home/node/samples/samples/system-test/betaFeatures.test.js:106

   105:   );                                                                   
   106:   t.true(                                                              
   107:     output.includes(`Transcription: how old is the Brooklyn Bridge`) &&

  Value is not `true`:

  false

  output.includes(`Transcription: how old is the Brooklyn Bridge`) && output.includes(`Confidence: \d\.\d`)
  => false

  output.includes(`Confidence: \d\.\d`)
  => false

  `Confidence: \d\.\d`
  => 'Confidence: d.d'

  output
  => `Transcription: how old is the Brooklyn Bridge ␊
   Confidence: 0.9836039543151855␊
  Word-Level-Confidence:␊
   word: how, confidence: 0.9876290559768677␊
   word: old, confidence: 0.9692915678024292␊
   word: is, confidence: 0.982710063457489␊
   word: the, confidence: 0.982710063457489␊
   word: Brooklyn, confidence: 0.9876290559768677␊
   word: Bridge, confidence: 0.9876290559768677`

  output.includes(`Transcription: how old is the Brooklyn Bridge`)
  => true

  `Transcription: how old is the Brooklyn Bridge`
  => 'Transcription: how old is the Brooklyn Bridge'

  output
  => `Transcription: how old is the Brooklyn Bridge ␊
   Confidence: 0.9836039543151855␊
  Word-Level-Confidence:␊
   word: how, confidence: 0.9876290559768677␊
   word: old, confidence: 0.9692915678024292␊
   word: is, confidence: 0.982710063457489␊
   word: the, confidence: 0.982710063457489␊
   word: Brooklyn, confidence: 0.9876290559768677␊
   word: Bridge, confidence: 0.9876290559768677`



  recognize › should run streaming recognize

  /home/node/samples/samples/recognize.js:392

   391:       console.log(                                                    
   392:         `Transcription: ${data.results[0].alternatives[0].transcript}`
   393:       );                                                              

  Rejected promise returned by test. Reason:

  Error {
    cmd: 'node recognize.js stream /home/node/samples/samples/resources/audio.raw',
    code: 1,
    killed: false,
    signal: null,
    message: `Command failed: node recognize.js stream /home/node/samples/samples/resources/audio.raw␊
    (node:431) DeprecationWarning: grpc.load: Use the @grpc/proto-loader module with grpc.loadPackageDefinition instead␊
    /home/node/samples/samples/recognize.js:392␊
            `Transcription: ${data.results[0].alternatives[0].transcript}`␊
                                              ^␊
    ␊
    TypeError: Cannot read property 'alternatives' of undefined␊
        at Pumpify.client.streamingRecognize.on.on.data (/home/node/samples/samples/recognize.js:392:43)␊
        at emitOne (events.js:116:13)␊
        at Pumpify.emit (events.js:211:7)␊
        at addChunk (/home/node/samples/node_modules/readable-stream/lib/_stream_readable.js:291:12)␊
        at readableAddChunk (/home/node/samples/node_modules/readable-stream/lib/_stream_readable.js:278:11)␊
        at Pumpify.Readable.push (/home/node/samples/node_modules/readable-stream/lib/_stream_readable.js:245:10)␊
        at Pumpify.Duplexify._forward (/home/node/samples/node_modules/duplexify/index.js:170:26)␊
        at DestroyableTransform.onreadable (/home/node/samples/node_modules/duplexify/index.js:134:10)␊
        at emitNone (events.js:106:13)␊
        at DestroyableTransform.emit (events.js:208:7)␊
    `,
  }

  Pumpify.client.streamingRecognize.on.on.data (recognize.js:392:43)
  addChunk (/home/node/samples/node_modules/readable-stream/lib/_stream_readable.js:291:12)
  readableAddChunk (/home/node/samples/node_modules/readable-stream/lib/_stream_readable.js:278:11)
  Pumpify.Readable.push (/home/node/samples/node_modules/readable-stream/lib/_stream_readable.js:245:10)
  Pumpify.Duplexify._forward (/home/node/samples/node_modules/duplexify/index.js:170:26)
  DestroyableTransform.onreadable (/home/node/samples/node_modules/duplexify/index.js:134:10)
