bespoken / virtual-google-assistant
🤖 Easily test and debug Actions on Google programmatically https://bespoken.io
License: Apache License 2.0
Once #8 is done, we can do a basic test script of voice dex, allowing us to validate that the generated JSONs work well against a Google app.
Right now, launchRequest assumes the intent will be named "Default Welcome Intent". Sure, that may be what DialogFlow suggests, but it isn't necessarily true for someone like me (my welcome intent is 'welcome.json'). How about making this configurable, or grepping for 'welcome' somewhere in the action package?
public launchRequest(): ActionRequest {
this.requestType = RequestType.LAUNCH_REQUEST;
return this.intentRequest("Default Welcome Intent");
}
Is your feature request related to a problem? Please describe.
LaunchRequest is an Amazon intent for starting the app; the Default Welcome Intent is usually the one that assumes that role, but in Google you can remove it and create a different one. We should have a way to set which intent acts as the LaunchRequest during the construction of the VGA instance.
Describe the solution you'd like
Add another method to the Virtual Google Assistant builder that takes a string with the intent to use for the LaunchRequest; if that parameter is not set, the default will still be the Default Welcome Intent.
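A sketch of what the proposed builder method could look like, using a minimal stand-in for the real builder (the method name launchRequestIntent is hypothetical, not part of the library):

```javascript
// Minimal stand-in builder illustrating the proposal; launchRequestIntent is
// a hypothetical name, and the real VirtualGoogleAssistant builder has more
// methods (handler, directory, create, ...).
class AssistantBuilder {
  constructor() {
    // Current behavior: LaunchRequest is hard-coded to this intent
    this.launchIntent = "Default Welcome Intent";
  }

  // Proposed: let callers override which intent a LaunchRequest maps to
  launchRequestIntent(intentName) {
    this.launchIntent = intentName;
    return this; // preserve the fluent builder style
  }

  create() {
    return { launchIntent: this.launchIntent };
  }
}

// Default unchanged when the new method is not called:
console.log(new AssistantBuilder().create().launchIntent);
// → Default Welcome Intent
// Overridden when it is:
console.log(new AssistantBuilder().launchRequestIntent("welcome").create().launchIntent);
// → welcome
```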
Describe alternatives you've considered
Find out which intent is the LaunchRequest in some way.
Additional context
Once this is changed, a parameter should be added to skill-tester-ml to support it.
Relates to #17
Is your feature request related to a problem? Please describe.
The data and conversation objects keep the state of some Actions resources. We should preserve them.
Describe the solution you'd like
Review some requests with @jperata to validate what kind of information should be sent. (I have some requests and responses that cannot be shared in public.)
Describe alternatives you've considered
Use our current context maintenance, but it is insufficient.
Reported by @armonge on gitter:
Hi guys, I've been trying to use virtual-google-assistant with a TypeScript project; it seems like right now there are a couple of declarations missing from the npm package.
File change detected. Starting incremental compilation...
node_modules/virtual-google-assistant/lib/src/Index.d.ts:1:71 - error TS7016: Could not find a declaration file for module './VirtualGoogleAssistant'. '/home/armonge/workspace//node_modules/virtual-google-assistant/lib/src/VirtualGoogleAssistant.js' implicitly has an 'any' type.
1 export { VirtualGoogleAssistant, VirtualGoogleAssistantBuilder } from "./VirtualGoogleAssistant";
node_modules/virtual-google-assistant/lib/src/Index.d.ts:2:31 - error TS7016: Could not find a declaration file for module './ActionInteractor'. '/home/armonge/workspace//node_modules/virtual-google-assistant/lib/src/ActionInteractor.js' implicitly has an 'any' type.
2 export { RequestFilter } from "./ActionInteractor";
[17:21:08] Found 2 errors. Watching for file changes.
this happens when using
"strict" : true
in the tsconfig
This issue needs to be reproduced first, but it should be enough to import the library in a TypeScript project and then add "strict": true to the tsconfig.
Expected behavior: the library works as expected.
Actual behavior: an exception is thrown due to missing type definitions.
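For reference, a minimal tsconfig fragment that triggers the errors (only the strict flag matters here):

```json
{
  "compilerOptions": {
    "strict": true
  }
}
```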
As of today it has been 19 months since Google Actions Builder was announced (https://developers.googleblog.com/2020/06/announcing-actions-builder-actions-sdk.html). As far as I can tell, Virtual Google Assistant still only works with DialogFlow (specifically, because configuration requires a reference to the dialog flow folder).
I really enjoy testing with Virtual Alexa and have recently begun some work with Google Actions using Actions Builder and was hoping to do the same kind of testing for the actions I develop. I turned to Virtual Google Assistant and, unless I'm missing something, cannot use it for actions created with Actions Builder.
I could be misreading the situation, but it appears that Google is promoting Actions Builder as the successor to Dialogflow. Indeed, it is a much easier way to develop actions and I prefer it over Dialogflow. Therefore, it would make sense for Virtual Google Assistant (or some separate testing framework) to support Actions Builder.
Hello Juan,
This is the utterance in my intents/faq.json file. When the utterance is in this split format ("Find me" followed by "nearest store"), that particular utterance is not supported. Can you help me with what I am missing here? I tried both utter and intend, and it is not working for either.
{
"id": "cbfbaf26-548d-43c8-bb63-cdd73dad96b2",
"data": [
{
"text": "Find me ",
"userDefined": false
},
{
"text": "nearest store",
"alias": "faq",
"meta": "@Faq",
"userDefined": false
},
{
"text": "?",
"userDefined": false
}
],
"isTemplate": false,
"count": 0,
"updated": 1533239811
},
Thanks,
Angavai S.
Rebuilding the example in the readme, the following error occurred:
(node:7520) UnhandledPromiseRejectionWarning: TypeError: googleFunction is not a function
at Function.<anonymous> (C:\Users\..\lambda\node_modules\virtual-google-assistant\lib\src\Invoker.js:57:70)
at Generator.next (<anonymous>)
at C:\Users\..\lambda\node_modules\virtual-google-assistant\lib\src\Invoker.js:7:71
at new Promise (<anonymous>)
...
at C:\Users\..lambda\node_modules\virtual-google-assistant\lib\src\LocalFunctionInteractor.js:13:38)
assistant.utter("help").then((payload) => {
    console.log("OutputSpeech: " + payload.speech);
});
Support for Google Cloud functions broke with #60
Virtual Google Assistant should support lambdas as well as Google Cloud Functions.
#60 added support for lambdas but broke support for Google cloud functions.
As per John's comment:
#60 (comment)
We should be able to support both.
Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
Describe the solution you'd like
A clear and concise description of what you want to happen.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here.
Start a local server based on a Google function, similar to what we do on bst proxy function.
User storage which is set by one's own handler is erased in the next request for the follow-up intent.
Steps to reproduce the behavior:
let response = await virtualGoogle.intend('Parent Intent');
conv.user.storage.data = data;
response = await virtualGoogle.intend('Parent Intent - yes');
I expect that user storage which is set during the lifetime of the session is not erased. It should stay the same during the same test scenario.
I do not know what happens, but it seems that the same instance of 'virtualGoogle' resets everything after the last intent, even if nothing is recreated.
Is that a bug or is it the correct behavior?
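For reference, a sketch of what persisting userStorage means at the protocol level, independent of any library API: the Dialogflow webhook response carries it under payload.google.userStorage, and the platform echoes it back on the next request under originalDetectIntentRequest.payload.user.userStorage. The helper name carryUserStorage is illustrative.

```javascript
// Illustrative helper, not a library API: echo the serialized userStorage
// from one webhook response into the next webhook request, as the real
// Assistant platform does between turns.
function carryUserStorage(previousResponse, nextRequest) {
  const google = previousResponse.payload && previousResponse.payload.google;
  if (google && google.userStorage) {
    nextRequest.originalDetectIntentRequest.payload.user.userStorage =
      google.userStorage;
  }
  return nextRequest;
}
```

A test harness that does this echo between intend() calls would preserve the state across the test scenario.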
Documentation is needed for examples of use and different methods available.
We followed the example given in the docs (slightly modified):
const vga = require("virtual-google-assistant");
const assistant = vga.VirtualGoogleAssistant.Builder()
.handler("index.handler") // Google Cloud Function file and name
.directory("./dialogFlowFolder") // Path to the Dialog Flow exported model
.create();
assistant.utter("KEYWORD").then((payload) => {
    console.log("OutputSpeech: " + payload.speech);
    console.log("OutputSpeech: " + payload);
});
When we run our example test, we are able to see the proper response being logged in the respective files down the stack, but the log statements above in the tests are never triggered, thus making us unable to assert anything in our test.
We also tried the following
let response = virtualAssistant.utter('KEYWORD').then((payload) => {
return payload;
});
console.log('response = ' + JSON.stringify(response));
response is printed as '{}'
It seems that we are close, but not sure if this is a bug or error in the documentation (or we are doing something wrong) - any ideas?
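For what it's worth, the second attempt prints '{}' because response holds the Promise itself, not the resolved payload: JSON.stringify of a pending Promise yields '{}'. A sketch, with a stub standing in for the real assistant (the stub's utter and the speech field are placeholders):

```javascript
// Stub standing in for the real assistant: utter() returns a Promise
const assistant = {
  utter: (phrase) => Promise.resolve({ speech: "response to " + phrase })
};

async function main() {
  // Problem: .then() returns a Promise, so this logs "{}", not the payload
  const response = assistant.utter("KEYWORD").then((payload) => payload);
  console.log(JSON.stringify(response)); // → {}

  // Fix: await the Promise (or keep all assertions inside the .then callback)
  const payload = await assistant.utter("KEYWORD");
  console.log("OutputSpeech: " + payload.speech);
}

main();
```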
I know this library currently supports v1 and that many (if not most) actions are built on it, but you can no longer build v1 actions, and the new media response object (long-form audio player) is only available with v2.
Using the new 0.0.7 release on npm, I am getting the following output from utter ("talk to SKILL_NAME"):
{
"payload": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "<speak>Thanks for listening! You can exit by telling Google Goodbye, and you can skip to the next story by telling Google Next.: \n <audio src='http://foo.bar/baz.mp3'></audio><sub><desc>\nToday from Foo...</desc></sub></speak>"
}
}
],
"suggestions": [
{
"title": "More stories"
},
{
"title": "I'm done"
}
]
},
"userStorage": "{\"data\":{\"returning\":true}}"
}
},
"outputContexts": [
{
"name": "1518537462114/contexts/_actions_on_google",
"lifespanCount": 99,
"parameters": {
"data": "{\"nowPlayingIndex\":0,\"streak\":0,\"setlistParams\":{\"type\":\"branded\",\"platform\":\"google\",\"publisher\":\"SLDEMO\"},\"setlist\":\"FOO_BIG_OLD_JSON_BLOB_IAM_OBFUSCATING_AWAY_BUT_ITS_BIG_AND_NESTED\"}"
}
}
]
}
It's awesome that the outputContext portion is included so I can check on my user data storage, but this is what the Actions simulator gives me:
{
"conversationToken": "[\"_actions_on_google\"]",
"expectUserResponse": true,
"expectedInputs": [
{
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Thanks for listening! You can exit by telling Google Goodbye, and you can skip to the next story by telling Google Next."
}
},
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"description": "The latest from Bakersfield Californian",
"contentUrl": "https://s3.amazonaws.com/audio.spokenlayer.com/bakersfield/2018/06/5e565e2b-3a7f-4e0e-9ef0-a14cf2ecca4a/audio-5e565e2b-3a7f-4e0e-9ef0-a14cf2ecca4a-encodings.mp3",
"icon": {
"url": "https://res.cloudinary.com/spokenlayer/image/upload/d_SpokenLayer_Logo_Verticle_fb9a1b.png/t_google_large/cover_art/bakersfield.png"
}
}
]
}
}
],
"suggestions": [
{
"title": "More stories"
},
{
"title": "I'm done"
}
]
}
},
"possibleIntents": [
{
"intent": "assistant.intent.action.TEXT"
}
]
}
],
"responseMetadata": {
"status": {
"message": "Success (200)"
},
"queryMatchInfo": {
"queryMatched": true,
"intent": "8181e2e9-3d9f-428d-b220-3b6616de7151"
}
},
"userStorage": "{\"data\":{\"returning\":true}}"
}
As you can see, some of the fields are missing or nested differently. I know the mediaObject isn't supported by your package quite yet, but the missing metadata / token / debug info is important.
Please look into the issue below.
I am completely new to voice, so I may sound silly.
I tried your Virtual Google Assistant and it's awesome and so helpful.
I am facing the issue below; can you suggest how to proceed?
When my request hits the intent with permission "actions_intent_PERMISSION", the response I receive is not proper, or I don't know how to check the received speech response. Please help me.
it('should return Store Hours Intent', async () => {
    return ga.utter("store hours").then((payload) => {
        console.log("OutputSpeech: " + JSON.stringify(payload));
    });
});
After receiving the response, how can I set the permission through unit testing code.
Thanks in advance.
OutputSpeech: {"payload":{"google":{"expectUserResponse":true,"richResponse":{"items":[{"simpleResponse":{"textToSpeech":"PLACEHOLDER"}}]},"userStorage":"{\"data\":{}}","systemIntent":{"intent":"actions.intent.PERMISSION","data":{"@type":"type.googleapis.com/google.actions.v2.PermissionValueSpec","optContext":"To address you by name and know your location","permissions":["NAME","DEVICE_PRECISE_LOCATION"]}}}},"outputContexts":[{"name":"1537461222114/contexts/_actions_on_google","lifespanCount":99,"parameters":{"data":"{}"}}]}
Thanks,
Angavai S.
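To the "how do I set the permission in unit-testing code" part: at the Actions on Google conversation-webhook level, the follow-up request for actions.intent.PERMISSION carries the grant as an argument, so a test harness would need to inject something equivalent (whether VGA exposes a hook for this is a separate question). A sketch of that shape, illustrative only:

```javascript
// Shape of the permission-granted follow-up at the conversation webhook level
// (field names per the Actions on Google v2 request format).
const permissionGrantedInput = {
  intent: "actions.intent.PERMISSION",
  arguments: [
    { name: "PERMISSION", boolValue: true } // true = user granted the permission
  ]
};

console.log(permissionGrantedInput.arguments[0].boolValue); // → true
```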
When running a correct script whose webhook is tested with virtual-google-assistant:
Steps to reproduce the behavior:
Expected behavior: the test runs correctly.
Actual behavior: the following error appears:
Dialogflow IntentHandler not found for intent:
Error: Dialogflow IntentHandler not found for intent:
This happens because we set intent.name to a UUID and put the intent name in intent.displayName; using a filter to set intent.name to intent.displayName works.
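The workaround above can be sketched as a request filter (the library does export a RequestFilter type; the function applyIntentNameFilter here is illustrative, not a built-in):

```javascript
// Dialogflow V2 puts a UUID in intent.name and the human-readable name in
// intent.displayName; copying displayName over name lets handlers that key
// off intent.name find their intent.
function applyIntentNameFilter(request) {
  const intent = request.queryResult && request.queryResult.intent;
  if (intent && intent.displayName) {
    intent.name = intent.displayName;
  }
  return request;
}
```

With virtual-google-assistant this would be registered through the request-filter hook before each request is sent, assuming the hook works like Virtual Alexa's filter().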
Generate the JSON request as if DialogFlow were generating it.
Recently I noticed that the context maintained in Dialog Flow is not getting picked up. After I tried saving the context in the code, I am getting a response. Is that feature available, or am I missing something?
Hi Juan,
Thanks for all your help. I do have one more question.
Since I am a tester and only now learning unit testing, the questions may sound silly.
Does the library support conversational testing when the conversation does not consist only of intents?
describe('Location Details and change location', function () {
    let res2;
    it('should return welcome intent', async function () {
        res2 = await ga.utter('open Starbucks');
    });
    it('should return location intent', async function () {
        res2 = await ga.utter('get me nearest store');
    });
    // The below two are not intents.
    it('should return different location question', async function () {
        res2 = await ga.utter('different Location');
    });
    it('should return different location details', async function () {
        res2 = await ga.utter('Walnut Creek CA');
    });
});
Thanks,
Angavai S.
Ensure context stays as long as the instance is alive; base it on Virtual Alexa, and if it's similar enough, move it to virtual-core altogether.