bespoken / virtual-device-sdk
:mute: Interact with Alexa Voice Service (AVS) without speaking
Home Page: https://bespoken.io
Voice comparisons should be case-insensitive. Right now the skill responses are converted to lower case but the expected values are not, which results in false validation failures.
When we cannot detect a skill name in the first command, we say:
Security token lacks sufficient permissions to invoke "" skill.
Instead, if the skill name we come up with is blank, we should instruct the user to use the "open skill" syntax. For example, it should instead say:
The first input should open the skill - e.g. 'open my skill'
From: https://github.com/bespoken/virtual-device/issues/260
The stt flag should also be exposed via the Virtual Device SDK. It should be settable like the other optional flags on the virtual device:
https://github.com/bespoken/virtual-device-sdk/blob/master/src/VirtualDevice.ts#L12
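A minimal sketch of what exposing the flag could look like, assuming the same constructor-flag pattern as the existing options. The constructor shape and URL building here are illustrative, not the actual VirtualDevice.ts code:

```typescript
// Sketch only: parameter names and URL shape are assumptions based on the
// existing optional flags in VirtualDevice.ts.
class VirtualDevice {
    public baseURL: string = "https://virtual-device.bespoken.io";

    public constructor(public token: string,
                       public locale?: string,
                       public voiceID?: string,
                       public stt?: string) {}  // e.g. the STT engine to use

    // The flag would then be appended to the query string like the others:
    public messageURL(message: string): string {
        let url = this.baseURL + "/process"
            + "?message=" + encodeURIComponent(message)
            + "&user_id=" + this.token;
        if (this.stt) {
            url += "&stt=" + this.stt;
        }
        return url;
    }
}
```

When the flag is not set, the query string is unchanged, so existing callers are unaffected.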
Values specified in our homophone list should be matched case-insensitively.
For example, if someone defines a homophone like this:
{
homophones: {
"Kelvie": ["Kelly"]
}
}
We should replace either "kelly" or "Kelly" with "Kelvie" in the transcript from the virtual device. This is just a matter of setting the i flag on the regexp here:
https://github.com/bespoken/virtual-device-sdk/blob/master/src/VirtualDevice.ts#L398
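A sketch of the fix with the i flag set. The function name is illustrative; the homophone map mirrors the example above:

```typescript
// Case-insensitive homophone replacement sketch.
function applyHomophones(transcript: string,
                         homophones: {[canonical: string]: string[]}): string {
    let result = transcript;
    for (const canonical of Object.keys(homophones)) {
        for (const variant of homophones[canonical]) {
            // "i" makes the match case-insensitive; "g" replaces every
            // occurrence in the transcript, not just the first.
            result = result.replace(new RegExp("\\b" + variant + "\\b", "ig"),
                                    canonical);
        }
    }
    return result;
}
```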
SilentEcho -> VirtualDevice
SilentEchoScript -> VirtualDeviceScript
SilentEchoValidator -> VirtualDeviceValidator
Relates to https://github.com/bespoken/technical-debt/issues/2
Is your feature request related to a problem? Please describe.
Once we have an asynchronous batch process, passing a conversation id parameter allows the caller to use a specific conversation id. This is useful for running skill testing (or similar) as a synchronous process while still being able to check the status from a separate process.
Describe the solution you'd like
Add an optional conversation id parameter to the batch_process exported function.
Describe alternatives you've considered
Searching by token and timestamp is not really efficient for live polling of the process status.
Additional context
Requires that bespoken/virtual-device#198 be finished as a prerequisite.
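A hypothetical sketch of the parameter; the real batch_process signature lives in the SDK, and the names here are illustrative:

```typescript
// Illustrative only: interface and function names are assumptions.
interface IBatchMessage {
    text: string;
}

function batchProcess(messages: IBatchMessage[], conversationId?: string): string {
    // Use the caller-supplied conversation id if present, otherwise generate
    // one, so the batch can always be polled from a separate process.
    const id = conversationId || Math.random().toString(36).substring(2);
    // ... POST the messages to the virtual device with conversation_id=id ...
    return id;
}
```

Returning the id either way lets the caller poll for results whether or not it supplied its own id.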
Create prettified HTML output as part of running a SilentEchoScript.
It should be broken down by sequence, with each test within the sequence listed.
Tests that are a success should be output with a checkmark and a green font.
Tests that are a failure should be output with an "X" mark and a red font.
For the following input script:
"Open We Study Billionaires": "Welcome"
"Play": http://first.com/file.mp3
"Next": http://second.com/file.mp3
"Open We Study Billionaires": "Hi"
Output:
Overall:
4 tests, 3 succeeded, 1 failed
Time:
7/5/2017 11:02:00 UTC
Sequence 1:
Result | Input | Expected | Actual |
---|---|---|---|
✔ | "Open We Study Billionaires" | "Welcome" | "Welcome to We Study Billionaires" |
✖ | "Play" | http://first.com/file.mp3 | http://first.com/file2.mp3 |
✔ | "Next" | http://second.com/file.mp3 | http://second.com/file.mp3 |
Sequence 2:
Result | Input | Expected | Actual |
---|---|---|---|
✔ | "Open We Study Billionaires" | "Welcome" | "Welcome to We Study Billionaires" |
Note - this is a low-fidelity mockup - please feel free to suggest improvements for the real UI.
The Validator should have a method "prettifyAsHTML(result: ValidatorResult): string"
The method takes the validator result and returns the HTML string.
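A low-fidelity sketch of what prettifyAsHTML could produce, following the mockup above. The result interfaces here are assumptions, not the real SDK types:

```typescript
// Illustrative result shape, not the actual ValidatorResult definition.
interface ITestResult {
    input: string;
    expected: string;
    actual: string;
    success: boolean;
}

interface IValidatorResult {
    sequences: ITestResult[][];
}

function prettifyAsHTML(result: IValidatorResult): string {
    let html = "";
    result.sequences.forEach((tests, i) => {
        html += "<h3>Sequence " + (i + 1) + "</h3><table>";
        html += "<tr><th>Result</th><th>Input</th><th>Expected</th><th>Actual</th></tr>";
        for (const test of tests) {
            // Green checkmark for success, red X for failure, per the spec.
            const mark = test.success
                ? "<span style='color:green'>&#10004;</span>"
                : "<span style='color:red'>&#10006;</span>";
            html += "<tr><td>" + mark + "</td><td>" + test.input
                + "</td><td>" + test.expected + "</td><td>" + test.actual + "</td></tr>";
        }
        html += "</table>";
    });
    return html;
}
```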
For some reason, "no" does not get recognized for me, but "alexa no" does.
When the user enters "no", we should automatically translate it to "alexa no".
To test this, try this sequence:
"tell we study billionaires to play"
"open we study billionaires"
"no"
It does not recognize the "no" in response to the yes or no question from We Study Billionaires ("Would you like to resume?").
Saying "alexa no" does work though.
The message parser, when checking for skill access, does not look for all the skill open synonyms. They are:
Launch
Tell
Ask
Also, they are being handled in a case-sensitive way - they should not be. "Open" and "open" should both be accepted.
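A sketch of a case-insensitive check covering all the synonyms above (the function name is illustrative):

```typescript
// All recognized skill-open synonyms, compared case-insensitively.
const OPEN_SYNONYMS = ["open", "launch", "tell", "ask"];

function isSkillOpen(command: string): boolean {
    const firstWord = command.trim().split(/\s+/)[0].toLowerCase();
    return OPEN_SYNONYMS.indexOf(firstWord) !== -1;
}
```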
Expected behavior: the error should say that the trial token is expired.
Actual behavior: the error is a general undefined error.
Is your feature request related to a problem? Please describe.
When using tests for Google virtual devices, the session does not end in between calls, so consecutive tests of the same interaction will fail if the google action is not exited explicitly.
Describe the solution you'd like
The virtual device endpoints have a parameter called new_conversation (must be set to true) that will force start a new conversation with the Google Assistant SDK. We should expose that parameter as we do for message, locale, voiceId and phrases for sequential validation. That way, we can set it to true on the first interaction with the virtual device, and false on the following.
As for batch validation, we should always set it to true.
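A sketch of the sequential-validation rule described above: set new_conversation to true on the first interaction only. The IMessage shape here is an assumption:

```typescript
// Illustrative message shape; the field name mirrors the new_conversation
// endpoint parameter.
interface IMessage {
    text: string;
    newConversation?: boolean;
}

// Mark only the first message of a sequence as starting a new conversation.
function markNewConversation(messages: IMessage[]): IMessage[] {
    return messages.map((m, i) => ({...m, newConversation: i === 0}));
}
```

For batch validation, every message would instead be sent with newConversation set to true.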
Additional context
Please inform @AnkRaiza when a new release is done so that we can include this in dashboard.
The validator will take a series of tests and execute them using the SilentEcho service.
A test sequence is made up of an array of ValidatorTest objects.
ValidatorTest has the following properties:
For now, the only value for comparison is "contains".
The signature of the Validator will require the following methods:
Test output will be a well-structured array of objects, one for each test. Each test result object will have:
Additionally, there should be an overall result object (ValidatorResult) that contains:
The Validator objects are part of the definitions for the silent-echo-sdk library, and should be exported as part of the Index.ts file.
Whenever I use the node.js SDK, the audio response is empty.
Right now, before publishing, we have to manually build the TS files to ensure the latest version of the JS files is published. Adding a preversion step should avoid possible publishing errors.
Is your feature request related to a problem? Please describe.
https://github.com/bespoken/virtual-device/issues/287
Description
Add a "screenMode" property and tests for it that will set the "screenMode" parameter when calling the virtual device process or batch process endpoints.
To close out any existing session that may have been running
We should add an "addListener" that calls back with these events:
ON_MESSAGE
Data:
phrase uttered
ON_RESULT
Data:
payload from silent echo for the call
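A minimal emitter sketch for the two events above. The event names come from the list; the class and plumbing are illustrative:

```typescript
// Sketch only: the emitter plumbing is an assumption, not SDK code.
type VirtualDeviceEvent = "ON_MESSAGE" | "ON_RESULT";

class VirtualDeviceEmitter {
    private listeners: {[event: string]: Array<(data: any) => void>} = {};

    // ON_MESSAGE fires with the phrase uttered; ON_RESULT fires with the
    // payload from silent echo for the call.
    public addListener(event: VirtualDeviceEvent,
                       callback: (data: any) => void): void {
        (this.listeners[event] = this.listeners[event] || []).push(callback);
    }

    public emit(event: VirtualDeviceEvent, data: any): void {
        for (const cb of this.listeners[event] || []) {
            cb(data);
        }
    }
}
```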
Is your feature request related to a problem? Please describe.
There is no documentation on the latest added support for conversation
Describe the solution you'd like
Documentation on async mode, how it changes the batch_process method, and the new getConversationResults method.
In German, the utterance "Ja" (yes) is not correctly processed. It works well using a longer utterance like "alexa ja". To work around this, the virtual device should automatically add "alexa " in front of one-word utterances.
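A sketch of that workaround rule (the function name is illustrative):

```typescript
// Prefix one-word utterances with "alexa " so short answers like "ja"
// are processed correctly; longer utterances pass through unchanged.
function prefixShortUtterance(utterance: string): string {
    const trimmed = utterance.trim();
    return trimmed.split(/\s+/).length === 1 ? "alexa " + trimmed : utterance;
}
```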
Is your feature request related to a problem? Please describe.
I'd like to be able to control which account I'm linked to before I initiate the test.
Describe the solution you'd like
It would be great if there was an additional parameter that would let me pass in a bearer token (similar to how you can with the unit test library), that would allow me to conduct the account linking on my own through my own scripting means and then to simply pass the bearer token that I want the skill to utilize.
Describe alternatives you've considered
I'm not aware of any other way to accomplish this. In fact, I'm not even sure if it's possible.
Return the card URL in the payload
bespoken/silent-echo-sdk
-> bespoken/virtual-device-sdk
Seems we are using some >4 features. No need for this.
When a stream is returned in the streamURL field, verify that the stream is actually active.
This involves actually loading the stream and seeing that it returns data. Redirects should be followed.
If the end result is not a 200 or data being returned, it should be considered a failure.
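The pass/fail rule above can be sketched as a pure function; the network plumbing (loading the stream and following redirects) is omitted here, and the function name is illustrative:

```typescript
// Decide whether a stream check passes, given the final status code after
// all redirects have been followed and the number of bytes received.
function streamCheckPasses(finalStatusCode: number, bytesReceived: number): boolean {
    // An unresolved redirect means redirects were not fully followed: fail.
    if (finalStatusCode >= 300 && finalStatusCode < 400) {
        return false;
    }
    // Pass on a 200, or on any response that actually returned data.
    return finalStatusCode === 200 || bytesReceived > 0;
}
```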
When using messages and phrases like "what is rock & roll?", they are being sent to the virtual device as-is. This means that they are interpreted as parameters, and so we only receive "what is rock".
Steps to reproduce the behavior:
Expected behavior: the message and phrases should be correctly encoded.
Actual behavior: the message and phrases are not encoded and are sent as they are.
This ticket was born from https://github.com/bespoken/dashboard/issues/1014
The solution should be to encode message and phrases here:
virtual-device-sdk/src/VirtualDevice.ts
Line 41 in e0a70a9
virtual-device-sdk/src/VirtualDevice.ts
Line 46 in e0a70a9
Please write tests to confirm it works.
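A sketch of the fix: URL-encode the message and phrases before building the query string (the helper name is illustrative; the referenced lines in VirtualDevice.ts build the query similarly):

```typescript
// Encode message and phrases so characters like "&" and "?" are not
// interpreted as query-string delimiters.
function buildQuery(message: string, phrases: string[]): string {
    let query = "message=" + encodeURIComponent(message);
    for (const phrase of phrases) {
        query += "&phrases=" + encodeURIComponent(phrase);
    }
    return query;
}
```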
The current implementation works only when virtual device is in an https environment. The port is hardcoded to 443:
virtual-device-sdk/src/VirtualDevice.ts
Line 117 in 7d720fc
We need to change it so that it also works with custom urls that don't use HTTPS.
Steps to reproduce the behavior:
Expected behavior: the response should be returned correctly.
Actual behavior: we get an error response.
We need to remove the hardcoded 443 port on VirtualDevice.ts and probably change the library used from https to http.
Also update bst with a new version once this is done.
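A sketch of deriving the port and protocol from the configured base URL instead of hardcoding 443 (the function name is illustrative; this uses Node's global WHATWG URL parser):

```typescript
// Derive the port from the base URL: explicit port wins, otherwise fall
// back to the protocol default (443 for https, 80 for http).
function portForBaseURL(baseURL: string): number {
    const parsed = new URL(baseURL);
    if (parsed.port) {
        return parseInt(parsed.port, 10);
    }
    return parsed.protocol === "https:" ? 443 : 80;
}
```

The same parsed protocol can decide whether to use the http or https module for the request.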
The expected vs. actual response comparison during validation should be case-insensitive.
@chris-ramon Can we provide an error message if the Vendor ID is invalid?
What do we get back from SMAPI if the vendor ID is bad?
Add a new class in the SDK called DeviceLocation; its constructor should receive two parameters, lat and lng. These will correspond to the lat and lng parameters described here:
https://github.com/bespoken/virtual-device/issues/274
Use a decimal type for both parameters and add tests for it.
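A direct sketch of the requested class; TypeScript's number type covers the decimal requirement:

```typescript
// DeviceLocation holds the lat/lng pair passed to the virtual device.
class DeviceLocation {
    public constructor(public lat: number, public lng: number) {}
}
```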
Hey all. Playing with the SDK and noticed that I was unable to extract the "caption" property from the IVirtualDeviceResult type as shown in the docs.
"hi": "*"
Whitespace should be allowed before the ":" character, therefore the next script must be valid:
"hi" : "*"
The SilentEchoScript component provides an easy way to interact with the SilentEchoValidator. It will take a simple text file that describes a series of tests and execute them using the SilentEchoValidator service.
The format of a Validator Script will be:
<Input>: <ExpectedOutput>
<Input>: <ExpectedOutput>
<Input>: <ExpectedOutput>
...
An input is a phrase in quotes:
"Hi"
"Hello There"
An expected output is either a phrase or URL:
"Hi"
http://test.com/my/audio/file.mp3
A blank line represents a new sequence of tests. Tests that are part of a sequence will be run as part of a single session.
"Open We Study Billionaires": "Welcome"
"Play": http://first.com/file.mp3
"Next": http://second.com/file.mp3
"Open We Study Billionaires": "Hi"
In this test, the skill We Study Billionaires will be issued the three commands "Open", "Play" and "Next" in quick succession.
The second "Open" will be sent after a pause. It will act after the first session has concluded.
The exact duration of the pause between sessions needs to be determined - we do not have precise session-handling at this point. I would suggest starting with a value of 10 seconds.
All tests will use an implied "contains" match. If the actual output contains the expected output, it will be considered a match.
URL comparisons will be performed against the Stream URL.
All comparisons are case-insensitive (note: this is a feature of the Validator - nothing needs to be done by the Script component to implement this).
The test output will be the ValidatorResult object from the SilentEchoValidator
The signature of the SilentEchoScript will require the following methods:
Constructor with token
The token is used for accessing the Silent Echo virtual device
execute(scriptContents: string)
The contents of a test to execute
validate(scriptContents: string): (string | undefined)
Run this to ensure that the test script is in a valid format. Undefined means it is valid; if there is an error, the returned string contains the error message.
The Validator objects are part of the definitions for the silent-echo-sdk library, and should be exported as part of Index.ts file.
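The script format above (one test per line, blank lines separating sequences) can be sketched as a small parser. This is illustrative only; the interface and function names are not the actual SDK definitions:

```typescript
// Illustrative parsed-test shape.
interface IScriptTest {
    input: string;
    expected: string;
}

// Parse <Input>: <ExpectedOutput> lines; blank lines start a new sequence.
function parseScript(contents: string): IScriptTest[][] {
    const sequences: IScriptTest[][] = [];
    let current: IScriptTest[] = [];
    for (const line of contents.split("\n")) {
        if (line.trim() === "") {
            if (current.length > 0) {
                sequences.push(current);
                current = [];
            }
            continue;
        }
        // Split on the first ":" after the quoted input, so URLs in the
        // expected output (which contain ":") are left intact.
        const index = line.indexOf(":", line.indexOf("\"", 1));
        current.push({
            input: line.substring(0, index).trim(),
            expected: line.substring(index + 1).trim(),
        });
    }
    if (current.length > 0) {
        sequences.push(current);
    }
    return sequences;
}
```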
As part of the migration to CircleCI 2.0, we had to skip some tests because they were failing:
test/VirtualDeviceTest.ts
TBD
If the user specifies a line like:
"Hi": "*"
or
"Hi": ""
It should run the command without performing any test on it (it should simply show the output and automatically flag it as green).
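A sketch of that rule combined with the implied "contains" match described elsewhere in these issues (the function name is illustrative):

```typescript
// "*" or "" as the expected value means: run the command, show the output,
// and flag the test green without comparing anything.
function testPasses(expected: string, actual: string): boolean {
    if (expected === "*" || expected === "") {
        return true;
    }
    // Otherwise use the implied "contains" match, case-insensitively.
    return actual.toLowerCase().indexOf(expected.toLowerCase()) !== -1;
}
```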
Is your feature request related to a problem? Please describe.
We can send audio in base64 format inside the payload for our virtual-device. We want to make it easier by adding this inside this library.
Describe the solution you'd like
Our message and batch process should be able to receive an audio file, via two different inputs.
Since our inputs don't have to match one-to-one with what we send to the virtual device, the IMessage interface can have two optional values: audioPath and audioURL. Depending on which one is present, we can process and convert it to a payload to send to the virtual device.
For the batch process, the audio should be processed in parallel.
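A sketch of the proposed inputs; the field names audioPath/audioURL come from this issue, while the helper functions are assumptions (downloading for audioURL is omitted):

```typescript
import { readFileSync } from "fs";

// Illustrative message shape with the two proposed optional audio inputs.
interface IMessage {
    text?: string;
    audioPath?: string;  // local file, read and base64-encoded
    audioURL?: string;   // remote file, downloaded then base64-encoded
}

// Convert raw audio bytes to the base64 string the payload expects.
function encodeAudio(audio: Buffer): string {
    return audio.toString("base64");
}

function audioToBase64(message: IMessage): string | undefined {
    if (message.audioPath) {
        return encodeAudio(readFileSync(message.audioPath));
    }
    // Downloading audioURL is omitted in this sketch.
    return undefined;
}
```

For the batch case, each file would be read and encoded in parallel before the payload is assembled.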
Is your feature request related to a problem? Please describe.
https://github.com/bespoken/virtual-device/issues/285
Describe the solution you'd like
Add an optional parameter when calling the virtual device endpoint for a client. The default value should be "sdk", but it should be possible to override it.
Talk to Juan and Joel about this.
We want to move more things to our rancher environment.