wit-ios's Introduction

wit-ios

This repository is community-maintained. We gladly accept pull requests. Please see the Wit HTTP Reference for all supported endpoints.

The wit.ai iOS SDK is the easiest way to integrate wit.ai features into your iOS application.

The client lets you capture intents and entities from:

  • the microphone of the device (GET /message API only)
  • text

Supports both the /converse and the /message APIs. Note: the /converse (story-based) API has been deprecated; see our blog post for a migration plan.

Link to this library

Using CocoaPods

Add the following dependency to your Podfile:

pod 'Wit', '~> 4.2.1'

And then run the following command in your project home directory:

pod install
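
Then point the SDK at your Wit.ai app. A minimal initialization sketch (the token string is a placeholder; accessToken and sharedInstance are documented in the API section below):

#import <Wit/Wit.h> // assumption: the standard header for the 'Wit' pod

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    [Wit sharedInstance].accessToken = @"YOUR_WIT_ACCESS_TOKEN"; // placeholder token
    return YES;
}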

API

@property and static methods
Delegate to send feedback for the application
@property(nonatomic, strong) id <WitDelegate> delegate;
Access token used to contact Wit.ai
@property (strong) NSString* accessToken;
Configure the voice activity detection algorithm:
- WITVadConfigDisabled
- WITVadConfigDetectSpeechStop (default)
- WITVadConfigFull
@property WITVadConfig detectSpeechStop;
Set the maximum length of time recorded by the VAD in ms
- Set to -1 for no timeout
- Defaults to 7000
@property int vadTimeout;
Set VAD sensitivity (0-100)
- Lower values are for strong voice signals, such as a cellphone or personal mic
- Higher values are for use with a fixed-position mic or any application with voice buried in ambient noise
- Defaults to 0
@property int VadSensitivity;
Singleton instance accessor.
+ (Wit*)sharedInstance;
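
For example, a short configuration sketch using the properties above (values are illustrative):

Wit *wit = [Wit sharedInstance];
wit.detectSpeechStop = WITVadConfigDetectSpeechStop; // the default VAD mode
wit.vadTimeout = 7000;                               // in ms; -1 disables the timeout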
Understanding text
InterpretString
Sends an NSString to the wit.ai /message endpoint for interpretation. Same as sending a voice input, but with text. This uses the legacy GET /message API. If you are using stories, this is NOT for you.
- (void) interpretString: (NSString *) string customData:(id)customData;
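A one-line usage sketch (the query text is illustrative):

[[Wit sharedInstance] interpretString:@"What is the weather in Paris?" customData:nil];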
ConverseString (deprecated)
Sends an NSString to the wit.ai /converse endpoint for interpretation. Will call delegate methods for every step of your story.
- (void) converseWithString:(NSString *)string witSession: (WitSession *) session;
Recording audio

If you provide a WitSession to WitMicButton.session, then Wit-iOS-SDK will use the /converse endpoint (stories); otherwise the /message endpoint will be used.

Make sure to set Wit's speechRecognitionLocale to the same language as your Wit model. The default value is en-US (American English).

Starts a new recording session. [self.delegate witDidGraspIntent:…] will be called once completed.
- (void)start;
Same as the start method, but allows a custom object to be passed, which will be passed back as an argument of
[self.delegate witDidGraspIntent:customData:(id)customData]. This is how you should link a request to a response, if needed.
- (void)start: (id)customData;
Stops the current recording, if any, which will lead to a [self.delegate witDidGraspIntent:…] call.
- (void)stop;
Starts/stops the audio processing. Once the API response is received, the [self.delegate witDidGraspIntent:…] method will be called.
- (void)toggleCaptureVoiceIntent;
Same as toggleCaptureVoiceIntent, allowing you to pass a customData object to the [self start:(id)customData] function.
- (void)toggleCaptureVoiceIntent:(id) customData;
YES if Wit is recording.
- (BOOL)isRecording;
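A minimal sketch tying these methods together in a button handler (the handler name is hypothetical):

- (IBAction)micTapped:(id)sender {
    Wit *wit = [Wit sharedInstance];
    if ([wit isRecording]) {
        [wit stop];      // triggers witDidGraspIntent:… once the response arrives
    } else {
        [wit start:nil]; // or pass customData to correlate request and response
    }
}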
Context
Sets context from NSDictionary. Merge semantics!
See the Context documentation for more information.
- (void)setContext:(NSDictionary*)dict;
Returns the current context.
- (NSDictionary*)getContext;
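An illustrative sketch of the merge semantics (the "state" key is one example from the context documentation):

// Merged into the existing context rather than replacing it
[[Wit sharedInstance] setContext:@{@"state": @[@"order_pizza"]}];
NSDictionary *context = [[Wit sharedInstance] getContext];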
Implementing the WitDelegate protocol
@protocol WitDelegate <NSObject>



@optional

/**
 DEPRECATED: Called when your story triggers an action and includes any new entities from Wit. Update session.context with any keys required for the next step of the story and return it here; wit-ios-sdk will automatically perform the next converse request for you and call the appropriate delegate method.

 @param action The action to perform, as specified in your story.
 @param entities Any entities Wit found, as specified in your story.
 @param session The previous WitSession object. Update session.context with any context changes (these will be sent to the Wit server), optionally store any further data in session.customData (this will not be sent to the Wit server), and return this WitSession.
 @param confidence The confidence that Wit correctly guessed the user's intent, between 0.0 and 1.0
 @return The WitSession to continue. Update the session parameter and return it. Returning nil is considered an error.
 */
- (WitSession *) didReceiveAction: (NSString *) action entities: (NSDictionary *) entities witSession: (WitSession *) session confidence: (double) confidence;

/**
 DEPRECATED: Called when your story wants your app to display a message. Update session.context with any keys required for the next step of the story and return it here; wit-ios-sdk will automatically perform the next converse request for you and call the appropriate delegate method.

 @param message The message to display
 @param session The previous WitSession object. Update session.context with any context changes (these will be sent to the Wit server), optionally store any further data in session.customData (this will not be sent to the Wit server), and return this WitSession.
 @param confidence The confidence that Wit correctly guessed the user's intent, between 0.0 and 1.0
 @return The WitSession to continue. Update the session parameter and return it. Returning nil is considered an error.
 */
- (WitSession *) didReceiveMessage: (NSString *) message quickReplies: (NSArray *) quickReplies witSession: (WitSession *) session confidence: (double) confidence;

/**
 DEPRECATED: Called when your story has completed.

 @param session The WitSession passed in from your last delegate call.
 */
- (void) didStopSession: (WitSession *) session;

/**
 * Called when a Wit request is completed. This is only called for calls to interpretString (which uses the GET /message API). If you are using deprecated Wit stories (the POST /converse API), use didReceiveAction, didReceiveMessage and didStopSession instead.
 * param outcomes an NSArray of outcomes returned by the Wit API. Outcomes are ordered by confidence, highest first. Each outcome contains (at least) the following keys:
 *       intent, entities[], confidence, _text. For more information please refer to our online documentation: https://wit.ai/docs/http/20141022#get-intent-via-text-link
 *
 * param messageId the message id returned by the API
 * param customData any data attached when starting the request. See [Wit sharedInstance toggleCaptureVoiceIntent:... (id)customData] and [[Wit sharedInstance] start:... (id)customData];
 * param error Nil if no error occurred during processing
 */
- (void)witDidGraspIntent:(NSArray *)outcomes messageId:(NSString *)messageId customData:(id)customData error:(NSError *)error;

/**
 * When using the hands-free voice activity detection option (WITVadConfigFull), this callback will be called when the microphone starts to listen
 * and is waiting to detect voice activity in order to start streaming the data to the Wit API.
 * This function will not be called if [Wit sharedInstance].detectSpeechStop is not equal to WITVadConfigFull.
 */
- (void)witActivityDetectorStarted;

/**
 * Called when the streaming of the audio data to the Wit API starts.
 * The streaming to the Wit API starts right after calling one of the start methods when
 * detectSpeechStop is equal to WITVadConfigDisabled or WITVadConfigDetectSpeechStop.
 * If detectSpeechStop is equal to WITVadConfigFull, the streaming to the Wit API starts only when the SDK
 * detects voice activity.
 */
- (void)witDidStartRecording;

/**
 Called when Wit stops recording the audio input.
 */
- (void)witDidStopRecording;

/**
 Called when Wit detects speech from the audio input.
 */
- (void)witDidDetectSpeech;

/**
 Called whenever Wit receives an audio chunk. The format of the returned audio is 16-bit PCM, 16 kHz mono.
 */
- (void)witDidGetAudio:(NSData *)chunk;
/**
 Called whenever SFSpeech sends a recognition preview of the recording.
 */
- (void) witDidRecognizePreviewText: (NSString *) previewText;

- (void) witReceivedRecordingError: (NSError *) error;

@end
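A minimal delegate sketch for the /message flow, reading the outcome keys documented under witDidGraspIntent above (error handling is illustrative):

- (void)witDidGraspIntent:(NSArray *)outcomes messageId:(NSString *)messageId customData:(id)customData error:(NSError *)error {
    if (error) {
        NSLog(@"[Wit] error: %@", error.localizedDescription);
        return;
    }
    NSDictionary *best = [outcomes firstObject]; // outcomes are ordered by confidence, highest first
    NSLog(@"intent=%@ confidence=%@ text=%@", best[@"intent"], best[@"confidence"], best[@"_text"]);
}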
Notifications
// An NSNotification is sent on the default center when the power of the audio signal changes
NSNumber *newPower = [[NSNumber alloc] initWithFloat:power];
[[NSNotificationCenter defaultCenter] postNotificationName:kWitNotificationAudioPowerChanged object:newPower];        
Constants
static NSString* const kWitNotificationAudioPowerChanged = @"WITAudioPowerChanged";
static int const kWitAudioSampleRate = 16000;
static int const kWitAudioBitDepth = 16;
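
A sketch of observing that notification using the constant above (the posted object is the NSNumber shown in the snippet):

[[NSNotificationCenter defaultCenter] addObserverForName:kWitNotificationAudioPowerChanged
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    float power = [(NSNumber *)note.object floatValue];
    // e.g. drive a level meter with the new power value
}];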

License

The license for wit-ios can be found in the LICENSE file in the root directory of this source tree.

wit-ios's People

Contributors

blandinw, catacola, deet, dsposito, enceradeira, hactar, jasonhotsauce, jeroenvollenbrock, jtliao, klintan, lasryaric, martinraison, mm, oliviervaussy, rbrick, readmecritic, spencerp, stopachka


wit-ios's Issues

Why are you using AFNetworking if all you need is one method/class?

When making a framework SDK, it would seem a lot more compatible to limit or avoid external dependencies. Everything your SDK is doing can be done without AFNetworking or ReactiveCocoa. This will ease the pain for developers wanting to consume your library and limit the amount of work you will have to do to maintain the SDK.

simple way to mark requests?

I am trying to mark my messages using the "msg_id" parameter, which can be sent to Wit and which Wit will send back. (see: https://wit.ai/docs/http/20141022#get-intent-via-speech-link).

I can see that in WitUploader.m --> startRequestWithContext, the URL being formed would only contain the "context" (if it wasn't nil). Is there something I am missing? Or is there just an easier way to "mark" my messages?

Getting a lot of what looks to be JSON parsing errors

I am seeing a bunch of these errors now in the console when running my app:

error: The operation couldn’t be completed. (Cocoa error 3840.)

A little digging on StackOverflow and it looks like some type of JSON parsing error. Any ideas?

Wit recognises the wrong language

Hi,

We are facing a problem related to our language. Some time ago we were able to speak in Portuguese and receive the correct text, but now we are receiving German text that cannot be understood by our specific algorithm.

Are we doing something wrong in the current version (4.2.0)?

Fallback intent

Hi...

Is there any default fallback intent in wit.ai, like in api.ai?
What I want here: when no match is found from the entities, a fallback intent should respond.

Context with state, fall back is not working

Hi Guys,

Is there anybody who uses the "state" context parameter and has it work as described?

I'm working on a project where I have 5 intents now. I want the user to be able to change an entity in the previous intent or switch to a new topic. As I read in the docs this "state" context param is just what I need. Here is an example:

First user says:
"I’d like to have a big pepperoni pizza with tomato” (—> this fires "order_pizza" intent)

And than next comes A or B:
A ---> "And with cheese?” (—> "order_pizza" again)
B ---> "Send 6 dollars to Peter.” (—> "pay")

I copy here what I found in the documentation:
https://wit.ai/docs/console/complete-guide#advanced-topics-link
"If the query text or audio does not match these at all though, Wit.ai will fall back on normal intents, so that users are never trapped in a state (you don’t want to reinvent the IVR nightmare). Note that if an intent has a state, it won’t be activated for stateless queries (i.e. queries that don’t indicate a state in their context field)."

I have defined two states:
"all_intents" : I added this state to all the intents
"order_pizza": This added to my order_pizza intent

If I call Wit with only the order_pizza state added, Wit won't understand the pay intent at all:
https://api.wit.ai/message?context={"state":["order_pizza"]}&q=Send 6 dollars to Peter
[...]"outcomes": [
{
"_text": "Send 6 dollars to Peter",
"confidence": 0,
"intent": "order_pizza", [...]

If I add both order_pizza and all_intents, and the text is again about ordering pizza, I get the same answer as when I add only all_intents:
https://api.wit.ai/message?context={"state":["order_pizza","all_intents"]}&q=And with cheese?
[...]"outcomes": [
{
"_text": "And with cheese?",
"confidence": 0.325,
"intent": "order_pizza", [...]
https://api.wit.ai/message?context={"state":["all_intents"]}&q=And with cheese?
[...]"outcomes": [
{
"_text": "And with cheese?",
"confidence": 0.325,
"intent": "order_pizza", [...]

What am I doing wrong?
Any help much appreciated.

Best,
Krisztian

WitMicButton unregistering an observer that was never registered

In WitMicButton.m, line 302 has a removeObserver call that tries to unregister an observer that was not registered in this version (4.2.0); when the ViewController is deallocated, the app crashes.

Cannot remove an observer <WITMicButton 0x15e679ba0> for the key path "frame" from <WITRecorder 0x15e718530> because it is not registered as an observer.'

Add bitcode support

Can someone please recompile the library with bitcode support, to avoid linker errors and be ready for future compatibility with Xcode 7?

Power of two

I was looking at the VAD code and found the following code calculating samples_per_frame:

cvad_state->samples_per_frame = pow(ceil(log2(cvad_state->sample_freq/120)),2); //around 100 frames per second, but must be a power of two

The inline comment says the value must be a power of two, yet pow is called with 2 as the exponent instead of the base. If the sample rate is 44100, samples_per_frame will be 81 using the above code.

On the other hand, the following code, which just reverses the arguments to pow:

cvad_state->samples_per_frame = pow(2, ceil(log2(cvad_state->sample_freq/120)));

will set samples_per_frame to 512.

Does the VAD code work at all?
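
A standalone check of the arithmetic, as plain C (not code from the repo):

#include <math.h>
#include <stdio.h>

int main(void) {
    double freq = 44100;
    // As written: ceil(log2(44100/120)) = ceil(log2(367.5)) = 9, then 9^2 = 81 (not a power of two).
    double as_written = pow(ceil(log2(freq / 120)), 2);
    // Arguments reversed: 2^9 = 512, a power of two as the inline comment intends.
    double reversed = pow(2, ceil(log2(freq / 120)));
    printf("%.0f %.0f\n", as_written, reversed); // prints: 81 512
    return 0;
}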

Why include ReactiveCocoa if you are not using any part of it?

Static libraries that have similar dependencies to other 3rd-party libraries make it extremely difficult for us to integrate with your SDK. My app uses AFNetworking and ReactiveCocoa, but I can't compile my app when I'm using your framework due to linker errors. It makes even less sense why you're including a library that you're not taking advantage of.

Cannot invoke 'start' with no arguments

Hi all,
I'm currently trying to implement the - (void)start; method as described in the iOS API reference using Swift. I'm trying to call it inside of a Swift class method with no parameters.
The docs indicate it is optional to do - (void)start: (id)customData;

The way I'm writing it is simply Wit.start(), which should work unless I'm missing something obvious. I've properly imported the framework and delegate, and everything else Wit.ai-wise works perfectly. I can substitute the desired functionality with a WITMicButton and it will capture the audio, etc., but I can't call Wit.start().

I also get the same issue with Wit.stop().

Does anybody have any ideas on how to fix it? Let me know any more information I might need to provide. Thanks in advance!

AddressSanitizer: heap-buffer-overflow in `frames_detector_cvad_most_dominant_freq`

Do you want to request a feature, report a bug, or ask a question about wit-ios?

Bug

What is the current behavior?

When WITVad vadSpeechFrame is called, AddressSanitizer detects the following condition (irrelevant stack and local information removed for clarity):

=================================================================
==55701==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x611000022ec0 at pc 0x0001104ef2e2 bp 0x7000067f89d0 sp 0x7000067f89c8
READ of size 4 at 0x611000022ec0 thread T2
    #0 0x1104ef2e1 in frames_detector_cvad_most_dominant_freq 
    #1 0x1104ee31d in wvs_cvad_detect_talking 
    #2 0x1104eca9e in -[WITVad vadSpeechFrame:] 

0x611000022ec0 is located 0 bytes to the right of 256-byte region [0x611000022dc0,0x611000022ec0)
allocated by thread T2 here:
    #0 0x10f60d553 in wrap_malloc 
    #1 0x1104ed913 in -[WITVad get_fft:] 
    #2 0x1104eca2f in -[WITVad vadSpeechFrame:] 

SUMMARY: AddressSanitizer: heap-buffer-overflow  in frames_detector_cvad_most_dominant_freq
Shadow bytes around the buggy address:
  0x1c2200004580: fd fd fd fd fa fa fa fa fa fa fa fa fa fa fa fa
  0x1c2200004590: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x1c22000045a0: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa
  0x1c22000045b0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x1c22000045c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x1c22000045d0: 00 00 00 00 00 00 00 00[fa]fa fa fa fa fa fa fa
  0x1c22000045e0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x1c22000045f0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x1c2200004600: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x1c2200004610: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x1c2200004620: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc

The length of samples when vadSpeechFrame is called is 342, and frames_detector_cvad_most_dominant_freq has fftMags at 25907.1777 with i at 64.

If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem.

Run the following branch with AddressSanitizer enabled: spokestack/spokestack-ios#45. iPhone XR simulator running iOS 12.2.

What is the expected behavior?

No memory error during VAD audio processing

If applicable, what is the App ID where you are experiencing this issue? If you do not provide this, we cannot help.

N/A

Comment on VAD

I haven't had time to read the original paper, but based on some live tests, the implementation appears to be much better at detecting the end of voice activity than its start. In particular, it seems more sensitive to certain starting words like "OK Google" and less sensitive to words like "It".

My current opinion is that, if the implementation correctly reflects the algorithm, it's best used only to detect the end of speech, using other means like touch/press to start capturing voice. That isn't too bad for a voice command application, but it is less than ideal for VoIP.

Thoughts?

Start function in Swift throwing error

Calling Wit.sharedInstance().start() in Swift triggers the following error:

 *** Terminating app due to uncaught exception 'Invalid AVAudioSession state', reason: 'You should call setCategory and setActive, please see the documentation.'

I installed using CocoaPods... Not really sure what the problem is.
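
The exception message points at AVAudioSession configuration. A hedged Objective-C sketch of the setup it asks for, to run before calling start():

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive:YES error:&error];
if (error) {
    NSLog(@"AVAudioSession setup failed: %@", error);
}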

Voice Activity Detection not implemented in iOS 10?

I've tried implementing this SDK in an iOS 10 app, and it seems like the speech recognition is not really doing any voice activity detection any longer... is that right?

In particular, the WITSFSpeechRecordingSession class never attempts to detect the end of speech, and a number of the delegate methods are never called. The _vadEnabled property is never queried...

Am I missing something, or is this just sitting in the too hard basket at the moment?

[Wit] error: The request timed out. (code: -1001)

I'm getting this error when using the SDK. It seems to be related to the length of the recording (about 15 seconds). I've set the vadTimeout to 15000.

[Wit sharedInstance].vadTimeout = 15000;

The error is happening here
The localizedDescription is:

Got connection error: The request timed out.

Has anyone seen anything like this? I'm not sure what's going on. I've reached out to Wit support as well.

Error when enqueuing buffer from callback: -66632

Hi all,

After Wit logs "Starting.............." when the start method is called on the singleton instance accessor, I get a list of typically between 3-6 lines of the following:
2015-07-26 12:12:28.052 TheDiary[384:32483] Error when enqueuing buffer from callback: -66632
2015-07-26 12:12:28.052 TheDiary[384:32483] Error when enqueuing buffer from callback: -66632
2015-07-26 12:12:28.053 TheDiary[384:32483] Error when enqueuing buffer from callback: -66632
2015-07-26 12:12:28.053 TheDiary[384:32483] Error when enqueuing buffer from callback: -66632

All have the same timestamp and callback. My question is why I would be getting these errors and, if it's an error in the implementation on my end, how can I fix it?

The error itself is logged in the following method of WITRecorder.m:

#pragma mark - AudioQueue callbacks
static void audioQueueInputCallback(void* data,
                                    AudioQueueRef q,
                                    AudioQueueBufferRef buffer,
                                    const AudioTimeStamp *ts,
                                    UInt32 numberPacketDescriptions,
                                    const AudioStreamPacketDescription *packetDescs) {
    void * const bytes = buffer->mAudioData;
    UInt32 size        = buffer->mAudioDataByteSize;
    int err;

    if (WIT_DEBUG) {
        debug(@"Audio chunk %u/%u", (unsigned int)size, (unsigned int)buffer->mAudioDataBytesCapacity);
    }

    if (size > 0) {
        NSData* audio = [NSData dataWithBytes:bytes length:size];
        @autoreleasepool {
            WITRecorder* recorder = (__bridge WITRecorder*)data;
            [recorder.delegate recorderGotChunk:audio];
            if (recorder.vad != nil) {
                [recorder.vad gotAudioSamples:audio];
            }
        }
    }
    err = AudioQueueEnqueueBuffer(q, buffer, 0, NULL);
    if (err) {
        NSLog(@"Error when enqueuing buffer from callback: %d", err);
    }
}

Perhaps this is a clue. It definitely has to do with the audio chunk and not the recognition of the audio itself. Maybe saving the audio chunk and sending it to the console?

Thanks for your help!

How to import wit.ai in iOS

I am new to iOS and I am planning to make a voice-interactive app. For that I need wit.ai. How should I import this framework into my project?

armv7s support?

Does wit.ai support armv7s? When I put it into an IPA it does not let me support armv7s. What is the fix?

WITRecorder fails to start after the app comes back into the foreground.

I have been able to reproduce this in my own app and in the wit-ios-eval application.

If the app is opened after being in the background, the first time that Wit.sharedInstance().start() is called, the recorder fails to start when calling "AudioQueueStart" (line 98 of WITRecorder.m). There is no indication that it failed other than in the logs, and the WITMicButton shows that it is not recording audio.

Visually, you can see the red recording bar flash on the screen and go away.

An Idea

Is there a plan in the future to enable Wit's speech API to allow live recognition, so the user could see what they are saying as they say it? Similar to OK Google. Thanks!

How to access intent value in Swift

func witDidGraspIntent(_ outcomes: [Any]!, messageId: String!, customData: Any!, error e: Error!) {
  let outcome = outcomes.first
  // how do I get the value at path "outcome.entities.intent.value" ?
}

Request timed out

Hi, I am getting this issue when building for iOS.

I've managed to get things sort of working, but I'm getting an issue to do with a timeout. Is there any way around this?

[DEBUG] Firing app event: orientationChange
[DEBUG] Starting......................
[DEBUG] Firing app event: orientationChange
[DEBUG] Stopping......................
[ERROR] when enqueuing buffer from callback: -66632
[ERROR] when enqueuing buffer from callback: -66632
[ERROR] when enqueuing buffer from callback: -66632
[ERROR] when enqueuing buffer from callback: -66632
[ERROR] when enqueuing buffer from callback: -66632

[DEBUG] Could not successfully update network info during initialization.
[DEBUG] [Wit] error: The request timed out.
[DEBUG] Wit stopped recording because of a (network?) error
[INFO] witFailedToRespondCorrectly
[INFO] Clean WITRecorder
[INFO] Clean WITVad

Framework file missing armv7s?

In my application I have to limit support to armv7 only and not build for the current architecture. Does the framework you build take the available architectures/slices into account? I have followed https://github.com/jverkoey/iOS-Framework#walkthrough to create a distributable framework that includes all the architectures I would expect you to support (armv7, armv7s). arm64 is not yet available when you use ReactiveCocoa.

WitTests target is broken

Currently looking at 11 errors on build for the WitTests target, Xcode 5.0.

Tests should be updated for the current API.

Speech to Text longer than 10 seconds

myspeech.zip

We noted that there are problems with the "conversion service - speech to text" when the audio files are longer than 10 seconds.

Enclosed (myspeech.zip) you can find two simple WAV files that we used to understand the process of communicating with the Wit service and to perform some tests.

The first one has a duration of 8 seconds, and with this we have not encountered any kind of issue; below you can find the output we received from your server:
MacBook-Pro-di-***:~ ******$ cd '/Progetti Personali/Phyton Tutorial/' && '/usr/local/bin/pythonw' '/Progetti Personali/Phyton Tutorial/Recognize.py' && echo Exit status: $? && exit 1
{u'entities': {}, u'msg_id': u'1SMUC8cFg6VWTGtQ0', u'_text': u'prova prova Proviamo questa registrazione per vedere se funziona Arrivederci e grazie E quanto cazz'}

The second one instead has a duration of 18 seconds, and with this audio we always receive this error:
MacBook-Pro-di-***:~ ******$ cd '/Progetti Personali/Phyton Tutorial/' && '/usr/local/bin/pythonw' '/Progetti Personali/Phyton Tutorial/Recognize.py' && echo Exit status: $? && exit 1
{u'code': u'wit', u'error': u"Something went wrong. We've been notified."}

Could someone kindly help us?
P.S. Of course we are totally available to share other information if needed.
Many thanks.

Can't cancel voice recording

We can't cancel the voice recording and discard it. Stop isn't good for our goal, because it sends the audio for processing.

We want to cancel the operation if the user taps record and re-taps within a few seconds. The Wit "assistant" in our project is on a separate screen; if recording was tapped but the user changes her mind and presses the BACK button, we don't want to send anything.

So it's important for us.

arm64 support

Am I doing something wrong, or is arm64 not supported? I am getting these errors:

Undefined symbols for architecture arm64:
[TRACE] "_vDSP_zvabs", referenced from:
[TRACE] -[WITVad get_fft:] in libcom.firstutility.wit.a(WITVad.o)
[TRACE] "_vDSP_ctoz", referenced from:
[TRACE] -[WITVad get_fft:] in libcom.firstutility.wit.a(WITVad.o)
[TRACE] "_vDSP_fft_zrip", referenced from:
[TRACE] -[WITVad get_fft:] in libcom.firstutility.wit.a(WITVad.o)
[TRACE] "_vDSP_create_fftsetup", referenced from:
[TRACE] -[WITVad init] in libcom.firstutility.wit.a(WITVad.o)

Canceling instead of Stop()

How could one "cancel" the Wit SDK instead of using the Stop() function, which looks for an intent and makes the rest of the API calls? Our app needs to be able to stop the WITRecorder (I'm assuming) without sending a request to the Wit API.
Does this make sense? Anyone have any suggestions, or perhaps a method could be added to the SDK if needed?

Limit the use of CocoaPods or fix ReactiveCocoa

Using CocoaPods limits the number of developers who can or want to fix your library. I am in that camp, since I think CocoaPods is unnecessary in the world of Git and Git submodules. Follow GitHub's Objective-C API (OctoKit) and use submodules and a script instead.

I am really noting this because I cannot clone your repo and see the source because of this CocoaPods error:

"An error occurred while processing the pre-install hook of ReactiveCocoa/Core (1.3.1).

invalid byte sequence in US-ASCII"

Podspec Version is broken

The Podspec is listed as being 4.2.0, but still points to the commit tagged 4.1.0. It doesn't work.

Pod::Spec.new do |s|
  s.name         = "Wit"
  s.version      = "4.2.0"

...

  s.source       = { :git => "https://github.com/wit-ai/wit-ios-sdk.git", :tag => "4.1.0" }

Slow result times with speech recognition

The time to get a speech-to-text result from the backend has increased significantly. I am measuring about 6-7 seconds for a given query vs. the 1-2 second response time of the Nuance SDK. Any ideas why? Does the SDK stream the audio in real time as it's recording it? It looks like it does, so I cannot figure out why the results would be so slow...

unrecognized selector error

I have my own button instead of the Wit button. When line 1 is executed, the program crashes.

- (void)micTapped:(id)sender {
    [[Wit sharedInstance] toggleCaptureVoiceIntent:self]; // line 1
}

The complete error is:
WITVad init
[HelloWorldLayer sessionDidStart:]: unrecognized selector sent to instance 0x15f509270
Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[HelloWorldLayer sessionDidStart:]: unrecognized selector sent to instance 0x14f611450'
