
virtual-google-assistant's People

Contributors

armonge, dependabot[bot], dmarvp, ecruzado, jkelvie, jperata, omenocal


virtual-google-assistant's Issues

change launch intent / make configurable

Right now, launchRequest assumes the intent will be named "Default Welcome Intent", which may be what Dialogflow suggests, but isn't necessarily true for someone like me (my welcome intent is 'welcome.json'). How about making this configurable, or grepping for 'welcome' somewhere in the action package?

public launchRequest(): ActionRequest {
  this.requestType = RequestType.LAUNCH_REQUEST;
  return this.intentRequest("Default Welcome Intent");
}

Choose which intent we want for Launch Request

Is your feature request related to a problem? Please describe.
LaunchRequest is an Amazon-style intent for starting the app; the Default Welcome Intent is usually the one that assumes that role, but you can remove it and create a different one in Google. We should have a way to set what we want as the LaunchRequest during construction of the VGA instance.

Describe the solution you'd like
Add another method to the VirtualGoogleAssistant builder that takes a string with the intent you want to use for the LaunchRequest; if that parameter is not set, the default will still be the Default Welcome Intent.
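
A minimal sketch of how that might look, assuming a hypothetical launchIntent() builder method (the name and signature are assumptions, not part of the current API):

const assistant = vga.VirtualGoogleAssistant.Builder()
    .directory("./dialogFlowFolder")
    .handler("index.handler")
    .launchIntent("My Custom Welcome") // hypothetical; would default to "Default Welcome Intent"
    .create();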

Describe alternatives you've considered
Find out which intent is the LaunchRequest in some way.

Additional context
Once this is changed, a parameter should be added to skill-tester-ml to support it.

Relates to #17

Keep the data and conversation object in the conversation context

Is your feature request related to a problem? Please describe.
The data and conversation objects keep the state of some Actions resources. We should preserve them across requests.

Describe the solution you'd like
Review some requests with @jperata to validate what kind of information should be sent. (I have some requests and responses that cannot be shared publicly.)

Describe alternatives you've considered
Use our current context maintenance, but it is insufficient.
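
For reference, with actions-on-google the conversation data rides on the _actions_on_google output context (the shape below matches the response dumps further down this page; the values are illustrative). Keeping state would mean echoing this context back on the next request:

{
  "name": "<sessionId>/contexts/_actions_on_google",
  "lifespanCount": 99,
  "parameters": {
    "data": "{\"count\":1}"
  }
}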

Virtual Google Assistant TypeScript issues: missing declarations in the npm package

Description:

Reported by @armonge on gitter:

Hi guys, I've been trying to use virtual-google-assistant with a TypeScript project; it seems like right now there are a couple of declarations missing from the npm package

File change detected. Starting incremental compilation...

node_modules/virtual-google-assistant/lib/src/Index.d.ts:1:71 - error TS7016: Could not find a declaration file for module './VirtualGoogleAssistant'. '/home/armonge/workspace//node_modules/virtual-google-assistant/lib/src/VirtualGoogleAssistant.js' implicitly has an 'any' type.

1 export { VirtualGoogleAssistant, VirtualGoogleAssistantBuilder } from "./VirtualGoogleAssistant";

node_modules/virtual-google-assistant/lib/src/Index.d.ts:2:31 - error TS7016: Could not find a declaration file for module './ActionInteractor'. '/home/armonge/workspace//node_modules/virtual-google-assistant/lib/src/ActionInteractor.js' implicitly has an 'any' type.

2 export { RequestFilter } from "./ActionInteractor";

[17:21:08] Found 2 errors. Watching for file changes.

This happens when using "strict": true in the tsconfig.
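
A minimal tsconfig.json that should reproduce it (assuming the strict flag alone is enough to surface the implicit-any errors):

{
  "compilerOptions": {
    "strict": true
  }
}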

Environment:

  • Version: TBD
  • OS: TBD
  • Node version: TBD

Steps To Reproduce

This issue needs to be reproduced first, but it should be enough to import the library in a TypeScript project and then add "strict": true to the tsconfig.

Expected behavior

The library works as expected

Actual behavior

An exception is thrown due to missing definitions

Actions Builder support

As of today it has been 19 months since Google Actions Builder was announced (https://developers.googleblog.com/2020/06/announcing-actions-builder-actions-sdk.html). As far as I can tell, Virtual Google Assistant still only works with Dialogflow (specifically, because configuration requires a reference to the Dialogflow folder).

I really enjoy testing with Virtual Alexa and have recently begun some work with Google Actions using Actions Builder and was hoping to do the same kind of testing for the actions I develop. I turned to Virtual Google Assistant and, unless I'm missing something, cannot use it for actions created with Actions Builder.

I could be misreading the situation, but it appears that Google is promoting Actions Builder as the successor to Dialogflow. Indeed, it is a much easier way to develop actions and I prefer it over Dialogflow. Therefore, it would make sense for Virtual Google Assistant (or some separate testing framework) to support Actions Builder.

Utterance clarification

Hello Juan,

This is the utterance in my intents/faq.json file. When the utterance is in this split format ("Find me" + "nearest store"), that particular utterance is not supported. Can you help me with what I am missing here? I tried both utter and intend, and it's not working for either.

{
  "id": "cbfbaf26-548d-43c8-bb63-cdd73dad96b2",
  "data": [
    {
      "text": "Find me ",
      "userDefined": false
    },
    {
      "text": "nearest store",
      "alias": "faq",
      "meta": "@Faq",
      "userDefined": false
    },
    {
      "text": "?",
      "userDefined": false
    }
  ],
  "isTemplate": false,
  "count": 0,
  "updated": 1533239811
},

Thanks,
Angavai S.

TypeError: googleFunction is not a function

Description:

Rebuilding the example in the readme, the following error occurred:

(node:7520) UnhandledPromiseRejectionWarning: TypeError: googleFunction is not a function
    at Function.<anonymous> (C:\Users\..\lambda\node_modules\virtual-google-assistant\lib\src\Invoker.js:57:70)
    at Generator.next (<anonymous>)
    at C:\Users\..\lambda\node_modules\virtual-google-assistant\lib\src\Invoker.js:7:71
    at new Promise (<anonymous>)
...
    at C:\Users\..lambda\node_modules\virtual-google-assistant\lib\src\LocalFunctionInteractor.js:13:38)

Environment:

  • Version: 0.3.6
  • OS: Windows 10
  • Node version: 12.16.1

Code example

assistant.utter("help").then((payload) => {
    console.log("OutputSpeech: " + result.speech);"
});

Fix support for Google Cloud Functions as well as lambdas

Description:

Support for Google Cloud Functions broke with #60.

Expected behavior

Virtual Google Assistant should support lambdas as well as Google Cloud Functions.

Actual behavior

#60 added support for lambdas but broke support for Google Cloud Functions.

Additional context

As per John's comment in #60 (comment), we should be able to support both.
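
For context, these are the two standard handler shapes the invoker needs to distinguish (generic examples, not library code):

// AWS Lambda style: invoked as handler(event, context, callback)
exports.lambdaHandler = (event, context, callback) => {
    callback(null, { fulfillmentText: "Hello from Lambda" });
};

// Google Cloud Functions HTTP style: invoked as handler(request, response)
exports.googleFunction = (request, response) => {
    response.json({ fulfillmentText: "Hello from Cloud Functions" });
};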

Followup Intent not picked up


User storage is erased when triggering follow-up intent

Description:

User storage set by your own handler is erased on the next request for the follow-up intent.

Environment:

  • Version: 0.2.5
  • OS: macOS Mojave
  • Node version: v10.12.0

Steps To Reproduce

Steps to reproduce the behavior:

  1. Request an intent:
    const response = await virtualGoogle.intend('Parent Intent');
  2. Set user storage in your intent handler:
    conv.user.storage.data = data;
  3. Request the follow-up intent, say the 'yes' intent:
    response = await virtualGoogle.intend('Parent Intent - yes')
    The user storage that was set in 'Parent Intent' is erased after calling 'Parent Intent - yes' in the same session.

Expected behavior

I expect that user storage set during the session's lifetime is not erased. It should stay the same throughout the same test scenario.

Actual behavior

I do not know what happens, but it seems that the same instance of 'virtualGoogle' resets everything from the last intent, even if nothing is recreated.


Additional context

Is that a bug or is it the correct behavior?
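
One possible workaround in the meantime (an untested sketch; it assumes the library's addFilter hook, which other issues on this page rely on, and the standard Actions on Google fields inside the Dialogflow V2 request): capture the userStorage from each response and re-inject it into the next request.

let savedStorage = "{}";
virtualGoogle.addFilter((request) => {
    // Hypothetical: carry forward the storage captured from the previous response
    request.originalDetectIntentRequest.payload.user.userStorage = savedStorage;
});

const first = await virtualGoogle.intend('Parent Intent');
savedStorage = first.payload.google.userStorage; // response shape as seen in other issues here
const second = await virtualGoogle.intend('Parent Intent - yes');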

Assistant response not returned by VirtualGoogleAssistant object

We followed the example given in the docs (slightly modified):

const vga = require("virtual-google-assistant");
const assistant = vga.VirtualGoogleAssistant.Builder()
    .handler("index.handler") // Google Cloud Function file and name
    .directory("./dialogFlowFolder") // Path to the Dialog Flow exported model
    .create();

assistant.utter("KEYWORD").then((payload) => {
    console.log("OutputSpeech: " + result.speech);
    console.log("OutputSpeech: " + payload);
});

When we run our example test, we can see the proper response being logged in the respective files down the stack, but the log statements above in the tests are never triggered, which makes us unable to assert anything in our test.

We also tried the following

let response = virtualAssistant.utter('KEYWORD').then((payload) => {
    return payload;
});
console.log('response = ' + JSON.stringify(response));

response is printed as '{}'

It seems that we are close, but we're not sure if this is a bug, an error in the documentation, or something we are doing wrong. Any ideas?
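
For what it's worth, the '{}' output is standard JavaScript behavior rather than a library bug: utter() returns a Promise, and JSON.stringify on a pending Promise yields '{}'. Note also that the first snippet references an undefined result variable inside the callback, which throws and silently rejects the promise, so the log statements never fire. A corrected sketch (inside an async function):

const payload = await virtualAssistant.utter('KEYWORD');
console.log('response = ' + JSON.stringify(payload));
console.log('OutputSpeech: ' + payload.speech); // assumes the payload exposes a speech field, as in the readme example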

[feature] support google actions v2 API

I know this library currently supports v1 and that many (if not most) actions are built on it, but you can no longer build v1 actions, and the new media response object (long-form audio player) is only available with v2.

Match response format from simulator

Using the new 0.0.7 release on npm, I am getting the following output from utter("talk to SKILL_NAME"):

{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "<speak>Thanks for listening! You can exit by telling Google Goodbye, and you can skip to the next story by telling Google Next.: \n <audio src='http://foo.bar/baz.mp3'></audio><sub><desc>\nToday from Foo...</desc></sub></speak>"
            }
          }
        ],
        "suggestions": [
          {
            "title": "More stories"
          },
          {
            "title": "I'm done"
          }
        ]
      },
      "userStorage": "{\"data\":{\"returning\":true}}"
    }
  },
  "outputContexts": [
    {
      "name": "1518537462114/contexts/_actions_on_google",
      "lifespanCount": 99,
      "parameters": {
        "data": "{\"nowPlayingIndex\":0,\"streak\":0,\"setlistParams\":{\"type\":\"branded\",\"platform\":\"google\",\"publisher\":\"SLDEMO\"},\"setlist\":"FOO_BIG_OLD_JSON_BLOB_IAM_OBFUSCATING_AWAY_BUT_ITS_BIG_AND_NESTED"}"
      }
    }
  ]
}

It's awesome that the outputContext portion is included so I can check on my user data storage, but this is what the Actions simulator gives me:

{
  "conversationToken": "[\"_actions_on_google\"]",
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "Thanks for listening! You can exit by telling Google Goodbye, and you can skip to the next story by telling Google Next."
              }
            },
            {
              "mediaResponse": {
                "mediaType": "AUDIO",
                "mediaObjects": [
                  {
                    "description": "The latest from Bakersfield Californian",
                    "contentUrl": "https://s3.amazonaws.com/audio.spokenlayer.com/bakersfield/2018/06/5e565e2b-3a7f-4e0e-9ef0-a14cf2ecca4a/audio-5e565e2b-3a7f-4e0e-9ef0-a14cf2ecca4a-encodings.mp3",
                    "icon": {
                      "url": "https://res.cloudinary.com/spokenlayer/image/upload/d_SpokenLayer_Logo_Verticle_fb9a1b.png/t_google_large/cover_art/bakersfield.png"
                    }
                  }
                ]
              }
            }
          ],
          "suggestions": [
            {
              "title": "More stories"
            },
            {
              "title": "I'm done"
            }
          ]
        }
      },
      "possibleIntents": [
        {
          "intent": "assistant.intent.action.TEXT"
        }
      ]
    }
  ],
  "responseMetadata": {
    "status": {
      "message": "Success (200)"
    },
    "queryMatchInfo": {
      "queryMatched": true,
      "intent": "8181e2e9-3d9f-428d-b220-3b6616de7151"
    }
  },
  "userStorage": "{\"data\":{\"returning\":true}}"
}

As you can see, some of the fields are missing or nested differently. I know the mediaObject isn't supported by your package quite yet, but this missing metadata / token / debug info is important.


How to check the response for Intents with permission.

Please look into the issue below. I am completely new to voice, so I may sound silly. I tried your Virtual Google Assistant and it's awesome and so helpful. Can you suggest how to proceed?
When my request hits the intent with permission "actions_intent_PERMISSION", the response I receive is not proper, or I don't know how to check the received speech response. Please help me.

it('should return Store Hours Intent', async () => {
    const payload = await ga.utter("store hours");
    console.log("OutputSpeech: " + JSON.stringify(payload));
});

After receiving the response, how can I set the permission through unit testing code?
Thanks in advance.

OutputSpeech:

{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "PLACEHOLDER"
            }
          }
        ]
      },
      "userStorage": "{\"data\":{}}",
      "systemIntent": {
        "intent": "actions.intent.PERMISSION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
          "optContext": "To address you by name and know your location",
          "permissions": ["NAME", "DEVICE_PRECISE_LOCATION"]
        }
      }
    }
  },
  "outputContexts": [
    {
      "name": "1537461222114/contexts/_actions_on_google",
      "lifespanCount": 99,
      "parameters": {
        "data": "{}"
      }
    }
  ]
}

Thanks,
Angavai S.
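
One possible approach (an untested sketch; it assumes the library's addFilter hook and the standard Actions on Google fields inside the Dialogflow V2 request): simulate the user granting the permission by injecting the PERMISSION argument into the follow-up request.

ga.addFilter((request) => {
    // Hypothetical: mark the permission as granted on the next request
    request.originalDetectIntentRequest.payload.inputs = [{
        intent: "actions.intent.PERMISSION",
        arguments: [{ name: "PERMISSION", boolValue: true, textValue: "true" }]
    }];
});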

Requests fail in Dialogflow V2 SDK due to intent.name

Description:

When running an otherwise correct virtual-google-assistant script against a webhook built with the Dialogflow V2 SDK, the requests fail.

Environment:

  • Version: 0.3.0

Steps To Reproduce

Steps to reproduce the behavior:

  1. Setup a Google action using DialogFlow V2 SDK
  2. Run a test for a specific intent different than the Default Welcome Intent

Expected behavior

Tests run correctly.

Actual behavior

The following error appears:

Dialogflow IntentHandler not found for intent:
Error: Dialogflow IntentHandler not found for intent:

Additional context

This happens because we set intent.name to a UUID and put the intent name in intent.displayName; using a filter to set intent.name to intent.displayName works, as sketched below.
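
A sketch of that workaround (assuming the library's addFilter hook and the Dialogflow V2 webhook request shape, where handlers match on the display name rather than the UUID):

assistant.addFilter((request) => {
    request.queryResult.intent.name = request.queryResult.intent.displayName;
});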

Context maintained in Dialogflow is not getting picked up


Recently I realized that the context maintained in Dialogflow is not getting picked up. After I tried saving the context in the code, I am getting a response. Is that feature available, or am I missing something?

Will the library support conversational test cases

Hi Juan,

Thanks for all your help. I do have one more question. I am a tester and am only now learning unit testing, so the question may sound silly.

Does the library support conversational testing when some turns of the conversation are not intents?

describe('Location Details and change location', function () {

  it('it should return welcome intent', async function () {
    res2 = await ga.utter('open Starbucks');
  });

  it('it should return location intent', async function () {
    res2 = await ga.utter('get me nearest store');
  });

  // The below two are not intents.
  it('it should return different location question', async function () {
    res2 = await ga.utter('different Location');
  });

  it('it should return different location details', async function () {
    res2 = await ga.utter('Walnut Creek CA');
  });

});

Thanks,
Angavai S.

Add context support

Ensure context stays as long as the instance is alive; base it on virtual-alexa, and if it's similar enough, move it to virtual-core altogether.

Testing authentication: do we have scripts to test the authentication part?

