
Extensible Android mobile voice framework: wakeword, ASR, NLU, and TTS. Easily add voice to any Android app!

Home Page: https://spokestack.io

License: Apache License 2.0


Spokestack Android


Spokestack is an all-in-one solution for mobile voice interfaces on Android. It provides every piece of the speech processing puzzle, including voice activity detection, wakeword detection, speech recognition, natural language understanding (NLU), and speech synthesis (TTS). Under its default configuration (on newer Android devices), everything except TTS happens directly on the mobile device—no communication with the cloud means faster results and better privacy.

And Android isn't the only platform it supports!

Creating a free account at spokestack.io lets you train your own NLU models and test out TTS without adding code to your app. We can even train a custom wakeword and TTS voice for you, ensuring that your app's voice is unique and memorable.

For a brief introduction, read on; for more detailed guides, see the documentation at spokestack.io.

Installation


Note: Spokestack used to be hosted on JCenter, but since the announcement of its discontinuation, we've moved distribution to Maven Central. Please ensure that your root-level build.gradle file includes mavenCentral() in its repositories block in order to access versions >= 11.0.2.
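For reference, a minimal root-level build.gradle repositories block might look like the following (the exact structure varies by project; older projects may declare repositories per-module or in settings.gradle instead):

```groovy
// root-level build.gradle
allprojects {
    repositories {
        google()
        // required for Spokestack versions >= 11.0.2
        mavenCentral()
    }
}
```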


A Note on API Level

The minimum Android SDK version listed in Spokestack's manifest is 8 because that's all you should need to run wake word detection and speech recognition. To use other features, it's best to target at least API level 21.

If you include ExoPlayer for TTS playback (see below), you might have trouble running on versions of Android older than API level 24. If you run into this problem, try adding the following line to your gradle.properties file:

android.enableDexingArtifactTransform=false

Dependencies

Add the following to your app's build.gradle:

android {

  // ...

  compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
  }
}

// ...

dependencies {
  // ...

  // make sure to check the badge above or "releases" on the right for the
  // latest version!
  implementation 'io.spokestack:spokestack-android:11.5.2'

  // for TensorFlow Lite-powered wakeword detection and/or NLU, add this one too
  implementation 'org.tensorflow:tensorflow-lite:2.6.0'

  // for automatic playback of TTS audio
  implementation 'androidx.media:media:1.3.0'
  implementation 'com.google.android.exoplayer:exoplayer-core:2.14.0'

  // if you plan to use Google ASR, include these
  implementation 'com.google.cloud:google-cloud-speech:1.22.2'
  implementation 'io.grpc:grpc-okhttp:1.28.0'

  // if you plan to use Azure Speech Service, include these, and
  // note that you'll also need to add the following to your top-level
  // build.gradle's `repositories` block:
  // maven { url 'https://csspeechstorage.blob.core.windows.net/maven/' }
  implementation 'com.microsoft.cognitiveservices.speech:client-sdk:1.9.0'

}

Usage

See the quickstart guide for more information, but here's the 30-second version of setup:

  1. You'll need to request the RECORD_AUDIO permission at runtime. See our skeleton project for an example of this. The INTERNET permission is also required but is included by the library's manifest by default.
  2. Add the following code somewhere, probably in an Activity if you're just starting out:
private lateinit var spokestack: Spokestack

// ...
spokestack = Spokestack.Builder()
    .setProperty("wake-detect-path", "$cacheDir/detect.tflite")
    .setProperty("wake-encode-path", "$cacheDir/encode.tflite")
    .setProperty("wake-filter-path", "$cacheDir/filter.tflite")
    .setProperty("nlu-model-path", "$cacheDir/nlu.tflite")
    .setProperty("nlu-metadata-path", "$cacheDir/metadata.json")
    .setProperty("wordpiece-vocab-path", "$cacheDir/vocab.txt")
    .setProperty("spokestack-id", "your-client-id")
    .setProperty("spokestack-secret", "your-secret-key")
    // `applicationContext` is available inside all `Activity`s
    .withAndroidContext(applicationContext)
    // see below; `listener` here inherits from `SpokestackAdapter`
    .addListener(listener)
    .build()

// ...

// starting the pipeline makes Spokestack listen for the wakeword
spokestack.start()

This example assumes you're storing wakeword and NLU models in your app's cache directory; again, see the skeleton project for an example of decompressing these files from the assets bundle into this directory.
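As a sketch of that decompression step, the helper below copies a bundled asset into the cache directory on first launch. The function name is illustrative, not part of the Spokestack API; the file names follow the configuration above.

```kotlin
import android.content.Context
import java.io.File

// Copies a bundled asset into the app's cache directory if it isn't
// already there, and returns the destination path for use in the
// Spokestack.Builder properties (e.g. "wake-detect-path").
fun Context.cachedModelPath(assetName: String): String {
    val target = File(cacheDir, assetName)
    if (!target.exists()) {
        assets.open(assetName).use { input ->
            target.outputStream().use { output -> input.copyTo(output) }
        }
    }
    return target.absolutePath
}

// usage (hypothetical): .setProperty("wake-detect-path", cachedModelPath("detect.tflite"))
```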

To use the demo "Spokestack" wakeword, download the TensorFlow Lite models: detect | encode | filter

If you don't want to bother with that yet, just disable wakeword detection and NLU, and you can leave out all the file paths above:

spokestack = Spokestack.Builder()
    .withoutWakeword()
    .withoutNlu()
    // ...
    .build()

In this case, you'll still need to start() Spokestack as above, but you'll also want to create a button somewhere that calls spokestack.activate() when pressed; this starts ASR, which transcribes user speech.
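A minimal sketch of such a button, assuming a Button with the (hypothetical) id mic_button in your Activity's layout:

```kotlin
// In your Activity's onCreate, after building and starting Spokestack:
val micButton = findViewById<android.widget.Button>(R.id.mic_button)
micButton.setOnClickListener {
    // Manually activate ASR; Spokestack will transcribe user speech
    // until the user stops speaking or the activation times out.
    spokestack.activate()
}
```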

Alternatively, you can set Spokestack to start ASR any time it detects speech by using a non-default speech pipeline profile as described in the speech pipeline documentation. In this case you'd want the VADTriggerAndroidASR profile:

// replace
.withoutWakeword()
// with
.withPipelineProfile("io.spokestack.spokestack.profile.VADTriggerAndroidASR")

Note also the addListener() line during setup. Speech processing happens continuously on a background thread, so your app needs a way to find out when the user has spoken to it. Important events are delivered to a subclass of SpokestackAdapter. Your subclass can override as many of the following event methods as you like; choosing not to implement one won't break anything, you just won't receive those events.

  • speechEvent(SpeechContext.Event, SpeechContext): This communicates events from the speech pipeline, including everything from notifications that ASR has been activated/deactivated to partial and complete transcripts of user speech.
  • nluResult(NLUResult): When the NLU is enabled, user speech is automatically sent through NLU for classification. You'll want the results of that classification to help your app decide what to do next.
  • ttsEvent(TTSEvent): If you're managing TTS playback yourself, you'll want to know when speech you've synthesized is ready to play (the AUDIO_AVAILABLE event); even if you're not, the PLAYBACK_COMPLETE event may be helpful if you want to automatically reactivate the microphone after your app reads a response.
  • trace(SpokestackModule, String): This combines log/trace messages from every Spokestack module. Some modules include trace events in their own event methods, but each of those events is also sent here.
  • error(SpokestackModule, Throwable): This combines errors from every Spokestack module. Some modules include error events in their own event methods, but each of those events is also sent here.
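A minimal listener covering a few of these methods might look like the sketch below. The method signatures follow the list above; the class name and event handling are illustrative, so check the quickstart guide for complete implementations.

```kotlin
import io.spokestack.spokestack.SpeechContext
import io.spokestack.spokestack.SpokestackAdapter
import io.spokestack.spokestack.SpokestackModule
import io.spokestack.spokestack.nlu.NLUResult

class Listener : SpokestackAdapter() {
    override fun speechEvent(event: SpeechContext.Event, context: SpeechContext) {
        when (event) {
            // ASR has been activated (e.g. by the wakeword)
            SpeechContext.Event.ACTIVATE -> { /* show a "listening" indicator */ }
            // a complete transcript of user speech is available
            SpeechContext.Event.RECOGNIZE -> handleTranscript(context.transcript)
            else -> { }
        }
    }

    override fun nluResult(result: NLUResult) {
        // inspect the classification result to decide what the app does next
    }

    override fun error(module: SpokestackModule, error: Throwable) {
        // log or surface errors from any Spokestack module
    }

    private fun handleTranscript(transcript: String) { /* ... */ }
}
```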

The quickstart guide contains sample implementations of most of these methods.

As we mentioned, classification is handled automatically if NLU is enabled, so the main methods you need to know about while Spokestack is running are:

  • start()/stop(): Starts/stops the pipeline. While running, Spokestack uses the microphone to listen for your app's wakeword unless wakeword is disabled, in which case ASR must be activated another way. The pipeline should be stopped when Spokestack is no longer needed (or when the app is suspended) to free resources.
  • activate()/deactivate(): Activates/deactivates ASR, which listens to and transcribes what the user says.
  • synthesize(SynthesisRequest): Sends text to Spokestack's cloud TTS service to be synthesized as audio. Under the default configuration, this audio will be played automatically when available.
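As a sketch of a synthesis call (the builder-style SynthesisRequest shown here follows the quickstart guide; verify against the current API docs):

```kotlin
import io.spokestack.spokestack.tts.SynthesisRequest

// Build a TTS request and send it to Spokestack's cloud service.
// Under the default configuration, the resulting audio is played
// automatically as soon as it's available.
val request = SynthesisRequest.Builder("Hello from Spokestack!").build()
spokestack.synthesize(request)
```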

Development

Maven is used for building/deployment, and the package is hosted at Maven Central.

This package requires the Android NDK to be installed and the ANDROID_HOME and ANDROID_NDK_HOME environment variables to be set. On macOS, ANDROID_HOME is usually set to ~/Library/Android/sdk and ANDROID_NDK_HOME is usually set to ~/Library/Android/sdk/ndk/<version>.

ANDROID_NDK_HOME can also be specified in your local Maven settings.xml file as the android.ndk.path property.

Testing/Coverage

mvn test jacoco:report

Lint

mvn checkstyle:check

Release

Ensure that your Sonatype/Maven Central credentials are in your user settings.xml (usually ~/.m2/settings.xml):

<servers>
    <server>
        <id>ossrh</id>
        <username>sonatype-username</username>
        <password>sonatype-password</password>
    </server>
</servers>

On a non-master branch, run the following command. This will prompt you to enter a version number and tag for the new version, push the tag to GitHub, and deploy the package to the Sonatype repository.

mvn release:clean release:prepare release:perform

The Maven goal may fail due to a bug that makes it try to upload the files twice, but the release will still have succeeded.

Complete the process by creating and merging a pull request for the new branch on GitHub and updating the release notes by editing the tag.

For additional information about releasing see http://maven.apache.org/maven-release/maven-release-plugin/

License

Copyright 2021 Spokestack, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

spokestack-android's People

Contributors: brentspell, dependabot[bot], noelweichbrodt, roppem9, space-pope

spokestack-android's Issues

Network error while using VADTriggerAndroidASR Profile

Hi, I am trying to use the VADTriggerAndroidASR profile, which always seems to produce NETWORK_ERROR after activation. Please find the log below.

Can you please suggest a solution?
Some preliminary Google searching turned up the following:

This might happen due to having an overlapping MediaRecorder or AudioRecord instance active at the same time (link)

{ isActive: true,
      error: 'io.spokestack.spokestack.android.SpeechRecognizerError: SpeechRecognizer error code 2: NETWORK_ERROR\n\tat AndroidSpeechRecognizer$SpokestackListener.onError(AndroidSpeechRecognizer.java:143)\n\tat android.speech.SpeechRecognizer$InternalListener$1.handleMessage(SpeechRecognizer.java:450)\n\tat android.os.Handler.dispatchMessage(Handler.java:106)\n\tat android.os.Looper.loop(Looper.java:216)\n\tat android.app.ActivityThread.main(ActivityThread.java:7266)\n\tat java.lang.reflect.Method.invoke(Native Method)\n\tat com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:494)\n\tat com.android.internal.os.ZygoteInit.main(ZygoteInit.java:975)\n',
      message: null,
      transcript: '',
      event: 'ERROR' }

Crash in WordpieceTextEncoder

I searched for the min SDK version, and in the manifest it looks like API 8; however, the class WordpieceTextEncoder contains a call to

return this.vocabulary.getOrDefault(token, this.vocabulary.get(UNKNOWN));

In encodeSingle (line 90) of WordpieceTextEncoder I had a crash because getOrDefault is only supported on API 24+. It would be nice to use something like

return this.vocabulary[token] ?: this.vocabulary.get(UNKNOWN)

version 11.4.1

(that's Kotlin)

Add proguard rules to keep spokestack even when used dynamically

When a project is minified using ProGuard, Spokestack classes can get removed unless they are loaded and used up front. Some apps may not want to initialize Spokestack until later (e.g. after authentication). I think there's a way to add ProGuard rules to the project to keep Spokestack through minification, for instance with -keep class com.pylon.spokestack.** { *; }.
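A sketch of such rules for the current package prefix (io.spokestack; com.pylon.spokestack was the older package name, so verify against the version you ship):

```
# proguard-rules.pro
-keep class io.spokestack.spokestack.** { *; }
-dontwarn io.spokestack.spokestack.**
```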

TFWakeWordAzureASR Profile

Hello,

Your docs indicate TFWakewordAzureASR to be a valid pipeline profile.

java.lang.IllegalArgumentException: TFWakewordAzureASR pipeline profile is invalid!

What is the correct way to call upon the profile?

Wakeword-only profile

For some use cases, it's helpful to have wakeword detection without ASR. This configuration is fully supported by Spokestack, but it would be convenient to have a premade pipeline profile that omits ASR to simplify setup.

Implementing this is as simple as copying the TFWakewordGoogleASR profile and omitting the ASR stage. It would probably also be helpful to configure low values for both wake-active-min and wake-active-max (used by the ActivationTimeout stage) since the pipeline shouldn't stay active for long, lest it miss a subsequent wakeword utterance.

Alternatively, the PreASRMicrophoneInput stage could be used as an input stage in conjunction with longer activation timeouts to allow the application to control the microphone while the pipeline is active.

A complete implementation will also add the new profile to the profile test for a sanity check.

Missing three trained TensorFlow Lite models for android

Hi, thank you for the voice pipelines. I couldn't find the three models that you mentioned on GitHub.

The wakeword trigger uses three trained TensorFlow Lite models: a filter model for spectrum preprocessing, an autoregressive encoder encode model, and a detect decoder model for keyword classification

Can you please point me to where to download them?

Thanks

training tflite model

Hi,
Thanks for the pipeline. Any plans to release the code to train tflite models?

Error when building Google Cloud ASR pipeline

Hi 👋

I'm trying to set up the Google Cloud ASR with this configuration:

var json: String? = null
try {
    val inputStream: InputStream = assets.open("service_account.json")
    json = inputStream.bufferedReader().use { it.readText() }
} catch (ex: Exception) {
    ex.printStackTrace()
}

val builder = Spokestack.Builder()
    .withoutWakeword()
    .withoutNlu()
    .setProperty("spokestack-id", "my id")
    .setProperty("spokestack-secret", "my secret")
    .withAndroidContext(this)
    .addListener(listener)
builder
    .pipelineBuilder
    .setProperty("google-credentials", json)
    .setProperty("language", "en-US")
    .useProfile("io.spokestack.spokestack.profile.VADTriggerGoogleASR")
return builder.build()

Unfortunately, this configuration throws the following exception(s):

E/AndroidRuntime: FATAL EXCEPTION: main
    Process: mypackagename, PID: 26259
    java.lang.RuntimeException: Unable to start activity ComponentInfo{mypackagename.MainActivity}: java.lang.reflect.InvocationTargetException
        at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3448)
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3595)
        at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:83)
        at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:135)
        at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:95)
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2147)
        at android.os.Handler.dispatchMessage(Handler.java:107)
        at android.os.Looper.loop(Looper.java:237)
        at android.app.ActivityThread.main(ActivityThread.java:7814)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1075)
     Caused by: java.lang.reflect.InvocationTargetException
        at java.lang.reflect.Constructor.newInstance0(Native Method)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:343)
        at io.spokestack.spokestack.SpeechPipeline.createComponents(SpeechPipeline.java:203)
        at io.spokestack.spokestack.SpeechPipeline.start(SpeechPipeline.java:182)
        at io.spokestack.spokestack.Spokestack.start(Spokestack.java:182)
        at mypackagename.MainActivity.onCreate(MainActivity.kt:54)
        at android.app.Activity.performCreate(Activity.java:7955)
        at android.app.Activity.performCreate(Activity.java:7944)
        at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1307)
        at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3423)
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3595) 
        at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:83) 
        at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:135) 
        at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:95) 
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2147) 
        at android.os.Handler.dispatchMessage(Handler.java:107) 
        at android.os.Looper.loop(Looper.java:237) 
        at android.app.ActivityThread.main(ActivityThread.java:7814) 
        at java.lang.reflect.Method.invoke(Native Method) 
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493) 
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1075) 
     Caused by: java.lang.NoClassDefFoundError: Failed resolution of: Lcom/google/auth/oauth2/ServiceAccountCredentials;
        at io.spokestack.spokestack.google.GoogleSpeechRecognizer.<init>(GoogleSpeechRecognizer.java:66)
        at java.lang.reflect.Constructor.newInstance0(Native Method) 
        at java.lang.reflect.Constructor.newInstance(Constructor.java:343) 
        at io.spokestack.spokestack.SpeechPipeline.createComponents(SpeechPipeline.java:203) 
        at io.spokestack.spokestack.SpeechPipeline.start(SpeechPipeline.java:182) 
        at io.spokestack.spokestack.Spokestack.start(Spokestack.java:182) 
        at mypackagename.MainActivity.onCreate(MainActivity.kt:54) 
        at android.app.Activity.performCreate(Activity.java:7955) 
        at android.app.Activity.performCreate(Activity.java:7944) 
        at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1307) 
        at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3423) 
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3595) 
        at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:83) 
        at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:135) 
        at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:95) 
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2147) 
        at android.os.Handler.dispatchMessage(Handler.java:107) 
        at android.os.Looper.loop(Looper.java:237) 
        at android.app.ActivityThread.main(ActivityThread.java:7814) 
        at java.lang.reflect.Method.invoke(Native Method) 
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493) 
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1075) 
     Caused by: java.lang.ClassNotFoundException: Didn't find class "com.google.auth.oauth2.ServiceAccountCredentials" on path: DexPathList[[zip file "/data/app/mypackagename-IVppXU7KnFHxIENF0_Db1w==/base.apk"],nativeLibraryDirectories=[/data/app/mypackagename-IVppXU7KnFHxIENF0_Db1w==/lib/arm64, /data/app/mypackagename-IVppXU7KnFHxIENF0_Db1w==/base.apk!/lib/arm64-v8a, /system/lib64]]
        at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:196)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:379)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
        at io.spokestack.spokestack.google.GoogleSpeechRecognizer.<init>(GoogleSpeechRecognizer.java:66) 
        at java.lang.reflect.Constructor.newInstance0(Native Method)

I'm using the .json file from the service account configured in GCP.
What could be the issue here?

Thank you! 🙏
