picovoice / porcupine
On-device wake word detection powered by deep learning
Home Page: https://picovoice.ai/
License: Apache License 2.0
I have a problem with importing Porcupine into my Python script. I had many problems with it earlier, such as it saying the library could not be imported, but that is fixed now.
My code:
import sys
import soundfile
import os
import pyaudio

sys.path.append(r'E:/Desktop/Python/WakeWord/Porcupine/binding/python/')
from porcupine import Porcupine

library_path = 'Porcupine/lib/windows/amd64/libpv_porcupine.dll'
model_file_path = 'Porcupine/lib/common/porcupine_params.pv'
keyword_file_paths = 'Porcupine/resources/keyword_files/alexa_windows.ppn'
sensitivities = [0.5]
handle = Porcupine(library_path, model_file_path, keyword_file_paths=keyword_file_paths, sensitivities=sensitivities)

def get_next_audio_frame():
    pass

while True:
    pcm = get_next_audio_frame()
    keyword_index = handle.process(pcm)
    if keyword_index >= 0:
        # detection event logic/callback
        pass
And my error:
Traceback (most recent call last):
  File "program.py", line 15, in <module>
    handle = Porcupine(library_path, model_file_path, keyword_file_paths=keyword_file_paths, sensitivities=sensitivities)
  File "E:/Desktop/Python/WakeWord/Porcupine/binding/python\porcupine.py", line 84, in __init__
    raise ValueError("Different number of sensitivity and keyword file path parameters are provided.")
ValueError: Different number of sensitivity and keyword file path parameters are provided.
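The ValueError here comes from passing `keyword_file_paths` as a bare string: the binding compares `len()` of both arguments, and the length of a string is its character count, not one path. A minimal sketch of the mismatch and the fix (paths taken from the code above):

```python
# Passing a bare string makes len() count characters, so the binding sees
# dozens of "keyword files" but only one sensitivity and raises ValueError.
keyword_file_paths = 'Porcupine/resources/keyword_files/alexa_windows.ppn'
sensitivities = [0.5]
print(len(keyword_file_paths) == len(sensitivities))  # False

# Wrapping the path in a list restores the one-to-one pairing the
# constructor expects:
keyword_file_paths = ['Porcupine/resources/keyword_files/alexa_windows.ppn']
assert len(keyword_file_paths) == len(sensitivities)
```

With the list in place, one sensitivity pairs with one keyword file and the constructor check passes.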
Hi,
[ERROR] could not find the pronunciation for 'trello'.
Is it possible to add this word to the dictionary?
I've been trying for the past 5 hours. Any tutorial on how to import it?
The python demo keeps running until I quit it.
The demo crashed with the following error:
Traceback (most recent call last):
File "demo/python/porcupine_demo.py", line 207, in <module>
input_device_index=args.input_audio_device_index).run()
File "demo/python/porcupine_demo.py", line 104, in run
pcm = audio_stream.read(porcupine.frame_length)
File "/usr/lib/python2.7/dist-packages/pyaudio.py", line 608, in read
return pa.read_stream(self._stream, num_frames, exception_on_overflow)
IOError: [Errno -9981] Input overflowed
Started the demo on a Raspberry Pi Zero with a ReSpeaker 2-Mic HAT with the following settings:
python demo/python/porcupine_demo.py --keyword_file_paths ./resources/keyword_files/grasshopper_raspberrypi.ppn --library_path ./lib/raspberry-pi/arm11/libpv_porcupine.so --input_audio_device_index 2
OS is Raspbian Stretch Lite.
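The input-overflow IOError above usually means the read loop on the Pi Zero cannot keep up with the incoming samples. A sketch of the timing budget involved, plus the common pyaudio workaround of reading without raising on overflow (the `audio_stream` name is illustrative, mirroring the demo's variable):

```python
# At Porcupine's 16 kHz sample rate, a 512-sample frame spans 32 ms, so each
# loop iteration must finish well within that budget on the Pi Zero's CPU.
SAMPLE_RATE = 16000
FRAME_LENGTH = 512  # what porcupine.frame_length reports
frame_budget_ms = 1000.0 * FRAME_LENGTH / SAMPLE_RATE
print(frame_budget_ms)  # 32.0

# With pyaudio, the overflow exception can be suppressed so a late read
# drops samples instead of crashing the demo (occasional glitches instead
# of a traceback):
# pcm = audio_stream.read(FRAME_LENGTH, exception_on_overflow=False)
```

This trades an occasional dropped frame for robustness, which is usually acceptable for wake word detection.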
Hey, I wanted to use this in a C++ program. I can include the headers but have no idea how to link the DLL.
I've tried to look how to link it using CMake for a couple of days now with no success.
I get
C:/Users/tatan/CLionProjects/TreeVoiceAssistant/HotwordDetection.cpp:27: undefined reference to `pv_porcupine_process(pv_porcupine_object*, short const*, bool*)'
every single time.
I'm using CLion as you can probably see, with MinGW, and I can successfully include the headers, but the DLLs just won't link.
Please help
tools/optimizer/mac/x86_64/pv_porcupine_optimizer -r resources/ -w "ok google" -p raspberrypi -o .
outputs a .ppn file
[WARN] This version of optimizer cannot create keyword files for raspberrypi. Please contact [email protected].
$ tools/optimizer/mac/x86_64/pv_porcupine_optimizer -r resources/ -w "ok google" -p raspberrypi -o .
Please include the operating system and CPU architecture. When applicable, provide relevant audio data.
Train optimizer with "homie"
[ERROR] could not find the pronunciation for 'homie'
tools/optimizer/mac/x86_64/pv_porcupine_optimizer -r resources -w homie -p mac -o my_models
Please include enough details so that the issue can be reproduced independently by the resolver.
I first tried to make a wake word using your optimizer, and it worked. Then I imported it, but this time it says an object is a NoneType... I didn't know what that meant, but I googled it and found out it may be an empty variable. I don't know the cause of this problem, so I hope you guys know it. My code is this:
import os
import sys

sys.path.append(os.path.join(os.path.dirname(__file__), 'Porcupine/binding/python/'))
from porcupine import Porcupine

library_path = "Porcupine/lib/windows/amd64/libpv_porcupine.dll"
model_file_path = "Porcupine/lib/common/porcupine_params.pv"
keyword_file_paths = ['Porcupine/jarvis_windows.ppn']
sensitivities = [0.5]
handle = Porcupine(library_path, model_file_path, keyword_file_paths=keyword_file_paths, sensitivities=sensitivities)

def get_next_audio_frame():
    pass

while True:
    pcm = get_next_audio_frame()
    keyword_index = handle.process(pcm)
    if keyword_index >= 0:
        # detection event logic/callback
        pass

handle.delete()
And the error I'm getting:
Traceback (most recent call last):
File "program.py", line 19, in <module>
keyword_index = handle.process(pcm)
File "Porcupine/binding/python\porcupine.py", line 154, in process
status = self.process_func(self._handle, (c_short * len(pcm))(*pcm), byref(result))
TypeError: object of type 'NoneType' has no len()
I also attached a picture of my file directory, so you can see where the files are.
Hope you guys can fix it... Just contact me via this issue, or via my email :)
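The TypeError above is not a Porcupine bug: the stub `get_next_audio_frame` contains only `pass`, so it returns `None`, and `handle.process(None)` then fails at `len(pcm)` inside the binding. The README's loop expects you to supply real audio frames. A minimal stand-in that at least returns the right shape of data (a frame of 16-bit silence; the frame length of 512 is an assumption that should be read from `handle.frame_length` in practice):

```python
import array

FRAME_LENGTH = 512  # placeholder; read handle.frame_length in real code

def get_next_audio_frame():
    # Return a frame of silence with the layout process() expects: a
    # sequence of FRAME_LENGTH signed 16-bit samples. A real application
    # would fill this from a microphone, e.g. via pyaudio.
    return array.array('h', [0] * FRAME_LENGTH)

pcm = get_next_audio_frame()
assert pcm is not None and len(pcm) == FRAME_LENGTH
```

With a function that actually returns samples, `process()` receives something it can take the length of, and the NoneType error disappears.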
So I created a .ppn file for mac x86_64 for
"hey janet"
Even with the sensitivity set to [0.1] it picks up odd words when I say things like
"hey jant"
"hey jan Bert" (and I really make sure the Bert sound comes out strong)
I even tried setting the sensitivity to [0.00001] and it still fires on odd sounds.
I would have thought a sensitivity closer to 1 would produce lots of false positives, but even at 0.001 it false-positives on close-sounding words. "jan BERT" really has a B sound, which I would have expected a low sensitivity to reject.
I expect to see a positive result when saying the wake word when running the demo on Windows.
No result is reported.
on a windows machine run:
python demo/python/porcupine_demo.py --keyword_file_paths alexa_windows.ppn
Please include the operating system and CPU architecture. When applicable, provide relevant audio data. Windows 10, Intel Core i7
Install instructions needed.
none
I want to install it on a RPI but cannot see any instructions on how to do so
Please include the operating system and CPU architecture. When applicable, provide relevant audio data.
RPI 3 B+ running debian stretch
I downloaded the project .zip file from GitHub and tried to run demo/android/app in Android Studio.
However, Gradle project sync failed.
Please help me!
Error messages are below
Unable to resolve dependency for ':porcupinemanager@debug/compileClasspath': Could not resolve project :porcupine.
Unable to resolve dependency for ':porcupinemanager@debugAndroidTest/compileClasspath': Could not resolve project :porcupine.
Unable to resolve dependency for ':porcupinemanager@debugUnitTest/compileClasspath': Could not resolve project :porcupine.
Unable to resolve dependency for ':porcupinemanager@release/compileClasspath': Could not resolve project :porcupine.
Unable to resolve dependency for ':porcupinemanager@releaseUnitTest/compileClasspath': Could not resolve project :porcupine.
Downloaded the zip file and opened it in Android Studio.
Please include enough details so that the issue can be reproduced independently by the resolver.
Hi Alireza,
We ran into a porcupine crash, when we tried creating multiple porcupine handles for processing multiple audio streams simultaneously.
Steps to reproduce:
The C/C++ application spawns two threads and creates two porcupine handles (handle1 and handle2) for detecting keywords from two audio streams.
The first thread finishes detection for the first audio stream and frees its porcupine handle using pv_porcupine_delete(handle1);
but the second thread continues feeding audio data to the porcupine engine from the second audio stream using handle2.
After the 3rd step, the application crashes with the following back-trace:
#######
==6505== Invalid read of size 4
==6505==    at 0x40666A8: pv_sqrt (in /root/Arun/Porcupine-master/lib/linux/i386/libpv_porcupine.so)
==6505==    by 0x406439C: pv_specgram_compute (in /root/Arun/Porcupine-master/lib/linux/i386/libpv_porcupine.so)
==6505==    by 0x4063823: pv_mel_filter_bank_compute (in /root/Arun/Porcupine-master/lib/linux/i386/libpv_porcupine.so)
==6505==    by 0x40619C4: pv_porcupine_multiple_keywords_process (in /root/Arun/Porcupine-master/lib/linux/i386/libpv_porcupine.so)
==6505==    by 0x404AB38: start_thread (in /lib/libpthread-2.12.so)
==6505==    by 0x428ED6D: clone (in /lib/libc-2.12.so)
######
Please note, the above crash doesn't occur when the second thread creates its porcupine handle after the first thread finishes execution.
Could you please have a look at this issue and let us know the root cause and resolution?
Regards,
Arun
1) Is there any way to make Porcupine log more useful debugging info to LogCat on Android?
2) Does Porcupine internally detect the beginning and end of speech in the audio data, similar to how PocketSphinx or Snowboy do? And if it does, is there any way to get these states?
I'm currently integrating Porcupine into my Android app and the issue that I'm facing is that it doesn't detect anything.
Porcupine happily reports to logcat that it got initialized correctly with proper parameters:
06-19 19:08:21.378 11391-11808/ I/PORCUPINE: �[32m [INFO] model file path: /storage/emulated/0/Android/data/com.example.app/files/sync/porcupine/porcupine_params.pv�[0m
�[32m [INFO] number of keywords: 1�[0m
�[32m [INFO] keyword file path [0]: /storage/emulated/0/Android/data/com.example.app/files/sync/porcupine/models/alexa_android.ppn�[0m
�[32m [INFO] sensitivity [0]: 0.500000�[0m
Sample rate and frame length were correctly set up according to what Porcupine requires, using Porcupine#getFrameLength() and Porcupine#getSampleRate(). Porcupine does not show any errors about sample rate or frame length when I feed it audio data.
Keyword: Alexa (other available keywords)
OS: Android
PS: to better illustrate the idea from the 2nd question, here's an example of how Snowboy reports whether any speech was detected in the current audio frame (taken from here):
// Snowboy hotword detection.
int result = detector.RunDetection(audioData, audioData.length);
if (result == -2) {
    // post a higher CPU usage:
    // sendMessage(MsgEnum.MSG_VAD_NOSPEECH, null);
} else if (result == -1) {
    sendMessage(MsgEnum.MSG_ERROR, "Unknown Detection Error");
} else if (result == 0) {
    // post a higher CPU usage:
    // sendMessage(MsgEnum.MSG_VAD_SPEECH, null);
} else if (result > 0) {
    sendMessage(MsgEnum.MSG_ACTIVE, null);
    Log.i("Snowboy: ", "Hotword " + Integer.toString(result) + " detected!");
    player.start();
}
Thanks.
As of now, I want to integrate Porcupine with Java through JNI, as you know this means that the dll must be built specifically for JNI Integration (through JNI Wrappers)
JNIEXPORT void JNICALL Java_package_name_ClassName_methodName(JNIEnv* env, jobject thiz) {}
I have only found these wrappers in Android's .so libs, but because I want to implement it in plain Java, I cannot use them on a Windows PC. I am using an AMD64 arch. Is there any way I can build this on my own? If not, could you provide me the modified DLL?
Is the minimum frame length for Porcupine 512?
Using Python 3.6 for testing.
I have an audio file with a frame length of 256, and when running porcupine.process on it, it never picks up the hotword.
The audio stream I am using is:
ChunkID= b'RIFF'
TotalSize= 556
DataSize= 512
Format= b'WAVE'
SubChunk1ID= b'fmt '
SubChunk1Size= 16
AudioFormat= 1
NumChannels= 1
SampleRate= 16000
ByteRate= 32000
BlockAlign= 2
BitsPerSample= 16
SubChunk2ID= b'data'
SubChunk2Size= 512
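The header dump above confirms the mismatch: DataSize is 512 bytes of 16-bit mono samples, i.e. 256 samples per chunk, while Porcupine's process() expects frames of exactly frame_length samples. Shorter chunks have to be accumulated into full frames before being fed in. A sketch of that buffering (the frame length is hardcoded here for illustration; read it from `porcupine.frame_length` in practice):

```python
FRAME_LENGTH = 512   # porcupine.frame_length; incoming chunks are 256 samples

buffer = []

def feed(chunk):
    """Accumulate arbitrary-size chunks; yield full frames ready for process()."""
    buffer.extend(chunk)
    while len(buffer) >= FRAME_LENGTH:
        frame = buffer[:FRAME_LENGTH]
        del buffer[:FRAME_LENGTH]
        yield frame

# Two 256-sample chunks combine into one full 512-sample frame:
frames = list(feed([0] * 256))        # not enough samples yet -> no frame
frames += list(feed([0] * 256))       # second chunk completes the frame
print(len(frames), len(frames[0]))    # 1 512
```

Each full frame can then be passed to `porcupine.process(frame)`; partial chunks simply wait in the buffer until the next read completes them.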
Hi,
Can I initialise and use porcupine in a background service that listens for the wake word continuously?
A generated ppn file with the request keyword
[WARN] This version of optimizer cannot create keyword files for ios. Please contact [email protected].
tools/optimizer/mac/x86_64/pv_porcupine_optimizer -r resources/ -w "ok google" -p ios -o ~/
Running on macOS High Sierra 10.13.4
I'm trying to implement this library in Android.
Here's my code:
onCreate()
copyPorcupineConfigFiles(this);
String keywordFilePath = new File(this.getFilesDir(), "francesca.ppn").getAbsolutePath();
String modelFilePath = new File(this.getFilesDir(), "params.pv").getAbsolutePath();
try {
    manager = new PorcupineManager(modelFilePath, keywordFilePath, sensitivity, new KeywordCallback() {
        @Override
        public void run(int keyword_index) {
            Toast.makeText(MainActivity.this, "Detected!", Toast.LENGTH_LONG).show();
        }
    });
    manager.start();
} catch (PorcupineManagerException e) {
    e.printStackTrace();
}
copyPorcupineConfigFiles()
private static void copyPorcupineConfigFiles(Context context) {
    int[] resIds = {R.raw.francesca, R.raw.params};
    Resources resources = context.getResources();
    for (int resId : resIds) {
        String filename = resources.getResourceEntryName(resId);
        String fileExtension = resId == R.raw.params ? ".pv" : ".ppn";
        InputStream is = null;
        OutputStream os = null;
        try {
            is = new BufferedInputStream(resources.openRawResource(resId), 256);
            os = new BufferedOutputStream(context.openFileOutput(filename + fileExtension, Context.MODE_PRIVATE), 256);
            int r;
            while ((r = is.read()) != -1) {
                os.write(r);
            }
            os.flush();
        } catch (IOException e) {
            Toast.makeText(context, "Error!", Toast.LENGTH_SHORT).show();
        } finally {
            try {
                if (is != null) {
                    is.close();
                }
                if (os != null) {
                    os.close();
                }
            } catch (IOException e) {
                Toast.makeText(context, "Error!", Toast.LENGTH_SHORT).show();
            }
        }
    }
}
keywordFilePath is /data/user/0/com.package.name/files/francesca.ppn
modelFilePath is /data/user/0/com.package.name/files/params.pv
Both paths are correct.
When I run, I get this Porcupine error:
07-30 17:36:51.343 18194-18194/? I/PORCUPINE: [ERROR] loading parameter file failed with 'IO_ERROR'
07-30 17:36:51.345 18194-18194/? W/System.err: ai.picovoice.porcupinemanager.PorcupineManagerException: ai.picovoice.porcupine.PorcupineException: java.io.IOException: Initialization of Porcupine failed.
Installing on a Raspberry Pi Zero W with Raspbian, the demo runs for about 1-2 seconds and throws:
Traceback (most recent call last):
  File "demo/python/porcupine_demo.py", line 204, in <module>
    input_device_index=args.input_audio_device_index).run()
  File "demo/python/porcupine_demo.py", line 104, in run
    pcm = audio_stream.read(porcupine.frame_length)
  File "/usr/local/lib/python2.7/dist-packages/pyaudio.py", line 608, in read
    return pa.read_stream(self._stream, num_frames, exception_on_overflow)
IOError: [Errno -9981] Input overflowed
Maybe I have to change the frame size? If so, where do I change the parameters?
I tested using python demo/python/porcupine_demo.py --keyword_file_paths resources/keyword_files/alexa_raspberrypi.ppn --output_path ~/testporcupine.wav
It records the file for 2 seconds and stops; I can hear my voice fine in those 2 seconds.
ai.picovoice.porcupinemanager.PorcupineManagerException: ai.picovoice.porcupine.PorcupineException: java.lang.IllegalArgumentException: Initialization of Porcupine failed.
String keywordFilePath = new File(this.getFilesDir(), filename + ".ppn").getAbsolutePath();
String modelFilePath = new File(this.getFilesDir(), "params.pv").getAbsolutePath();
try {
    manager = new PorcupineManager(modelFilePath, keywordFilePath, sensitivity, new KeywordCallback() {
        @Override
        public void run(int keyword_index) {
            Toast.makeText(MainActivity.this, "Detected!", Toast.LENGTH_LONG).show();
        }
    });
    manager.start();
} catch (PorcupineManagerException e) {
    e.printStackTrace();
    Log.e("Porcupine", e.getMessage());
}
I have manually checked if keywordFilePath and modelFilePath are correct, and they are. This also happens in your demo application.
We want to install Picovoice service in Asus Tinker Board (https://en.wikipedia.org/wiki/Asus_Tinker_Board)
We have an armv7l machine.
But we've got an exception: "Cannot autodetect the binary type. Please enter the path to the shared object using --library_path command line argument."
What library do we need ?
Can we run Picovoice service on this board at all ?
Thanks
Use porcupine in C/C++ Android NDK native program.
"undefined reference" to every porcupine function.
Write a C program;
include porcupine C headers;
call API C functions;
link against libpv_porcupine.so found in android arm_v7a folder;
build with android NDK v15c.
Is there any way to detect 3 words in a row, please? Can the model be expanded a bit?
python demo/python/porcupine_demo.py --keyword_file_paths resources/keyword_files/alexa_linux.ppn
Demo runs.
Error:
Traceback (most recent call last):
File "demo/python/porcupine_demo.py", line 210, in <module>
input_device_index=args.input_audio_device_index).run()
File "demo/python/porcupine_demo.py", line 95, in run
sensitivities=[self._sensitivity] * num_keywords)
File "demo/python/../../binding/python/porcupine.py", line 69, in __init__
library = cdll.LoadLibrary(library_path)
File "/usr/local/conda3/lib/python3.6/ctypes/__init__.py", line 426, in LoadLibrary
return self._dlltype(name)
File "/usr/local/conda3/lib/python3.6/ctypes/__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: demo/python/../../lib/linux/x86_64/libpv_porcupine.so: failed to map segment from shared object
Environment:
ChromeOS on Acer CXI2
Linux localhost 3.14.0 #1 SMP PREEMPT Fri Jun 22 17:20:26 PDT 2018 x86_64 Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz GenuineIntel GNU/Linux
Steps to reproduce:
Commented out soundfile dependency because libsndfile is not installed in the ChromeOS linux.
Start terminal with ChromeOS (Ctrl + Alt + T)
Clone Porcupine repo.
Installed portaudio, pyaudio.
Run python: python demo/python/porcupine_demo.py --keyword_file_paths resources/keyword_files/alexa_linux.ppn
I know ChromeOS Linux is not the typical use case. I'm looking for any guidance on where this error could stem from, since "failed to map segment from shared object" is not specific enough for me.
Hi, I am using Porcupine on Linux (Ubuntu 64-bit) to make a wake-up word "dayo", but I could not do that. I am getting an error message like: [ERROR] could not find the pronunciation for 'dayo'. If this is not a typo please contact [email protected].
Can you please add the word to Porcupine so it can be used as a wake-up keyword, or suggest a procedure to create a custom wake-up word like the one I used above (dayo)?
The 'dayo' is pronounced as: https://www.howtopronounce.com/dayo/
Hi,
Trying to build my custom wake word for mac.
I am running the following command :
./pv_porcupine_optimizer -r resources -w google -p mac -o ~/
./pv_porcupine_optimizer -r resources -w yellow -p mac -o ~/
./pv_porcupine_optimizer -r resources -w 'action item' -p mac -o ~/
./pv_porcupine_optimizer -r resources -w 'take note' -p mac -o ~/
getting the following:
[ERROR] Could not find a pronunciation for 'google'. If this is not a typo please contact [email protected].
./pv_porcupine_optimizer -r resources -w google -p mac -o ~/
Hello!
I tried to use the optimizer with paired keywords like "ok" + the name of a French brand, but without success. Typically, "OK " as a wake word.
Is there any way to extend the pronunciation dictionary? For example, by providing audio samples of French users saying this brand name?
Best,
Denis
Can you add the keyword "lifetouch", thank you in advance. Great software.
Hi, and thanks for your great product
My problem is that when I try to run
tools/optimizer/mac/x86_64/pv_porcupine_optimizer -r resources/ -p mac -o . -w "hey rimon"
I will get the following error:
[ERROR] could not find the pronunciation for 'hey rimon'
When I read the docs, it was mentioned that only English words are supported.
I wanted to know: is there any way I can add the pronunciation to the dictionary, so I will be able to get my custom keyword wake-up model?
Thanks
When embedded in an iOS project, it should be compilable both for deploying to a device and to the simulator.
It will only compile if the target is an actual device.
Compiling the demo project for the simulator fails because the binary was built only for the device architecture.
I think it's probably worth providing a universal binary for iOS; otherwise adding this library leaves the project no longer compilable for the simulator, and the barrier to entry for adopting this library will be really high.
"Below is a quick demonstration of how to construct an instance of it to detect multiple keywords concurrently"
I'm a beginner at code, so I'm not sure how this works. Do I edit the Python binding code? And when I call it from my command prompt, do I still need the --keyword_file_paths argument?
I was looking at the wakeword-benchmark utility and I was wondering what data was used to train the alexa_<PLATFORM>.ppn models.
You might consider reducing the repository size to speed up downloads.
Hello,
In Python, we can do multiple wake word detection as documented below:
library_path = ... # Path to Porcupine's C library available under lib/${SYSTEM}/${MACHINE}/
model_file_path = ... # It is available at lib/common/porcupine_params.pv
keyword_file_paths = ['path/to/keyword/1', 'path/to/keyword/2', ...]
sensitivities = [0.5, 0.4, ...]
handle = Porcupine(library_path, model_file_path, keyword_file_paths=keyword_file_paths, sensitivities=sensitivities)
How do we do the same on Android? Can someone help?
I tried to run the demo command like this:
python demo/python/porcupine_demo.py --keyword_file_paths resources/keyword_files/blueberry_windows.ppn
But I got this error:
Traceback (most recent call last):
File "demo/python/porcupine_demo.py", line 204, in <module>
input_device_index=args.input_audio_device_index).run()
File "demo/python/porcupine_demo.py", line 92, in run
sensitivities=[self._sensitivity] * num_keywords)
File "demo/python\../../binding/python\porcupine.py", line 69, in __init__
library = cdll.LoadLibrary(library_path)
File "C:\Python\lib\ctypes\__init__.py", line 426, in LoadLibrary
return self._dlltype(name)
File "C:\Python\lib\ctypes\__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
What can I do to fix this problem?
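WinError 126 from ctypes means Windows could not resolve the DLL path (or one of its dependent DLLs) from the current working directory. Before digging deeper, it's worth checking that the relative path actually resolves; a sketch (the path below mirrors the repo's lib layout and is an assumption about your setup):

```python
import os

# The demo builds its library path relative to the script location, so
# running it from anywhere other than the repository root can break
# resolution of the .dll.
library_path = os.path.join('lib', 'windows', 'amd64', 'libpv_porcupine.dll')
print(os.path.abspath(library_path))
print(os.path.exists(library_path))  # False here means the path is wrong

# If the file does exist, the usual remaining cause of error 126 is a
# missing dependent DLL (e.g. the Visual C++ runtime), which ctypes
# reports with the same message.
```

Running the demo from the repository root, so the script's relative paths line up, resolves the most common case.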
Archive without failing.
When we Archive the build (iOS demo), it fails with the following error:
ld: bitcode bundle could not be generated because '/Users/userXXXX/Code/vivaa/native/Scribe/ios/Porcupine/watchos/libpv_porcupine.a(pv_porcupine.o)' was built without full bitcode. All object files and libraries for bitcode must be generated from Xcode Archive or Install build for architecture armv7k
Please include the operating system and CPU architecture. When applicable, provide relevant audio data.
Happens with iOS and watchOS demos.
Hello,
First of all, thank you for this engine, it's awesome!
I've tried to create a small wake-up sentence using the optimizer command-line tool, like the famous "ok google", but it seems not to be supported.
Command:
tools/optimizer/mac/x86_64/pv_porcupine_optimizer -r resources/ -w "ok test" -p mac -o ~/
Output:
[ERROR] Could not find a pronunciation for 'ok test'. If this is not a typo please contact [email protected].
In my real use case the trigger command will be "Ok + 2 small words".
Is this the expected behavior of Porcupine? Does that mean Porcupine is designed exclusively for single-word detection? If so, is there any workaround?
Or did I miss something in the optimizer tool?
Thank you.
Is there a way to get the detected audio in the callback?
It would be really nice to have the callback include the audio frames so e.g. cloud-based wake word verification for the Alexa Voice Service can be used.
I'm running Visual Studio 2017 on a Windows 10 machine.
I've tried this with both 32- and 64-bit projects.
The result I get is similar in each case:
4>..\..\porcupine\lib\libpv_porcupine.dll : fatal error LNK1107: invalid or corrupt file: cannot read at 0x430
(64-bit)
1>..\..\porcupine\lib\libpv_porcupine.dll : fatal error LNK1107: invalid or corrupt file: cannot read at 0x3F8
(32-bit)
I also tried previous committed versions with no luck.
I have Ubuntu 16.04, Intel Core i3 7th Gen.
As a result of running the service, I got an error.
Error log:
Traceback (most recent call last):
  File "demo/python/porcupine_demo.py", line 207, in <module>
    input_device_index=args.input_audio_device_index).run()
  File "demo/python/porcupine_demo.py", line 92, in run
    sensitivities=[self._sensitivity] * num_keywords)
  File "demo/python/../../binding/python/porcupine.py", line 114, in __init__
    byref(self._handle))
  File "/home/scale/.local/lib/python2.7/site-packages/enum.py", line 199, in __init__
    raise EnumBadKeyError(key)
enum.EnumBadKeyError: Enumeration keys must be strings: 0
Can I customize a Chinese wakeup keyword?
Please include the operating system and CPU architecture. When applicable, provide relevant audio data.
Hi, I'm making a prototype and tried to make my keyword as below,
but it does not work; I get a pronunciation error.
Can you suggest a way to resolve this problem, or add the keyword "ariot" (not "a riot")?
Please advise.
Thanks.
C:\Users\1\Porcupine-master\Porcupine-master>tools\optimizer\windows\i686\pv_porcupine_optimizer.exe -r resources\ -w "ariot" -p windows -o .
[ERROR] could not find the pronunciation for 'ariot'. If this is not a typo please contact [email protected].
I'm trying to integrate Porcupine into an xcode project as per Porcupine/demo/ios/PorcupineDemo.
PorcupineDemo runs perfectly, so I copied the necessary components and settings of the demo to my project, but alas, it does not compile.
Ld [~]/Library/Developer/Xcode/DerivedData/Porcupine_Test-bauzordzwumanjdyzvkejvhihxcl/Build/Products/Debug-iphonesimulator/Porcupine\ Test.app/Porcupine\ Test normal x86_64
cd "[~]/Library/Autosave Information/Porcupine Test"
export IPHONEOS_DEPLOYMENT_TARGET=11.3
export PATH="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -arch x86_64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator11.3.sdk -L[~]/Library/Developer/Xcode/DerivedData/Porcupine_Test-bauzordzwumanjdyzvkejvhihxcl/Build/Products/Debug-iphonesimulator -L[~]/Library/Autosave\ Information/Porcupine\ Test/lib/ios -F[~]/Library/Developer/Xcode/DerivedData/Porcupine_Test-bauzordzwumanjdyzvkejvhihxcl/Build/Products/Debug-iphonesimulator -filelist[~]/Library/Developer/Xcode/DerivedData/Porcupine_Test-bauzordzwumanjdyzvkejvhihxcl/Build/Intermediates.noindex/Porcupine\ Test.build/Debug-iphonesimulator/Porcupine\ Test.build/Objects-normal/x86_64/Porcupine\ Test.LinkFileList -Xlinker -rpath -Xlinker @executable_path/Frameworks -mios-simulator-version-min=11.3 -dead_strip -Xlinker -object_path_lto -Xlinker [~]/Library/Developer/Xcode/DerivedData/Porcupine_Test-bauzordzwumanjdyzvkejvhihxcl/Build/Intermediates.noindex/Porcupine\ Test.build/Debug-iphonesimulator/Porcupine\ Test.build/Objects-normal/x86_64/Porcupine\ Test_lto.o -Xlinker -export_dynamic -Xlinker -no_deduplicate -Xlinker -objc_abi_version -Xlinker 2 -fobjc-link-runtime -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/iphonesimulator -Xlinker -add_ast_path -Xlinker [~]/Library/Developer/Xcode/DerivedData/Porcupine_Test-bauzordzwumanjdyzvkejvhihxcl/Build/Intermediates.noindex/Porcupine\ Test.build/Debug-iphonesimulator/Porcupine\ Test.build/Objects-normal/x86_64/Porcupine_Test.swiftmodule -Xlinker -sectcreate -Xlinker __TEXT -Xlinker __entitlements -Xlinker [~]/Library/Developer/Xcode/DerivedData/Porcupine_Test-bauzordzwumanjdyzvkejvhihxcl/Build/Intermediates.noindex/Porcupine\ Test.build/Debug-iphonesimulator/Porcupine\ Test.build/Porcupine\ Test.app-Simulated.xcent -lpv_porcupine -Xlinker -dependency_info -Xlinker 
[~]/Library/Developer/Xcode/DerivedData/Porcupine_Test-bauzordzwumanjdyzvkejvhihxcl/Build/Intermediates.noindex/Porcupine\ Test.build/Debug-iphonesimulator/Porcupine\ Test.build/Objects-normal/x86_64/Porcupine\ Test_dependency_info.dat -o [~]/Library/Developer/Xcode/DerivedData/Porcupine_Test-bauzordzwumanjdyzvkejvhihxcl/Build/Products/Debug-iphonesimulator/Porcupine\ Test.app/Porcupine\ Test
error: Invalid bitcode signature
clang: error: linker command failed with exit code 1 (use -v to see invocation)
I) create new xcode project named 'Porcupine Test'
II) copy
Porcupine/include to ${PROJECT_DIR}/include
Porcupine/lib to ${PROJECT_DIR}/lib
Porcupine/binding/ios/PorcupineManager to ${PROJECT_DIR}/Porcupine Test/PorcupineManager
Porcupine/demo/ios/PorcupineDemo/module.modulemap to ${PROJECT_DIR}/Porcupine Test/Porcupine/module.modulemap
a) edit module.modulemap relative paths to '../../include'
III) set Build Settings
Search Paths -> Library Search Paths to ${PROJECT_DIR}/lib/ios
Swift Compiler - Search Paths -> Import Paths to ${PROJECT_DIR}/Porcupine\ Test/Porcupine
IV) add ${PROJECT_DIR}/lib/ios/libpv_porcupine.a to Build Phases -> Link Binary With Libraries
V) build project
xcode 9.3 (9E145)
macos 10.13.4
1.6 GHz Intel Core i5
I tried to compile the iOS demo, but I am getting errors while compiling.
Error is:
Undefined symbols for architecture x86_64:
"_pv_porcupine_delete", referenced from:
PorcupineDemo.PorcupineManager.stop() -> () in PorcupineManager.o
"_pv_sample_rate", referenced from:
PorcupineDemo.PorcupineManager.start() throws -> () in PorcupineManager.o
"_pv_porcupine_frame_length", referenced from:
PorcupineDemo.PorcupineManager.start() throws -> () in PorcupineManager.o
"_pv_porcupine_multiple_keywords_init", referenced from:
PorcupineDemo.PorcupineManager.start() throws -> () in PorcupineManager.o
"_pv_porcupine_multiple_keywords_process", referenced from:
closure #1 (Swift.UnsafeMutableRawPointer?, Swift.OpaquePointer, Swift.UnsafeMutablePointer<__C.AudioQueueBuffer>, Swift.UnsafePointer<__C.AudioTimeStamp>, Swift.UInt32, Swift.UnsafePointer<__C.AudioStreamPacketDescription>?) -> () in variable initialization expression of PorcupineDemo.PorcupineManager.(audioCallback in _BEC5063E4C49B2A2811E8F4D93649EFF) : @convention(c) (Swift.UnsafeMutableRawPointer?, Swift.OpaquePointer, Swift.UnsafeMutablePointer<__C.AudioQueueBuffer>, Swift.UnsafePointer<__C.AudioTimeStamp>, Swift.UInt32, Swift.UnsafePointer<__C.AudioStreamPacketDescription>?) -> () in PorcupineManager.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Can you please help me to compile the demo build
So we can't build models for the Raspberry Pi? We can only buy one for the RPi?
Hello
Thank you for the great work!
I have the following problem. When I run the following command:
tools\optimizer\windows\i686\pv_porcupine_optimizer -r resources/ -w cerner -p windows -o .
I get the following error:
[ERROR] Could not find the pronunciation for 'cerner'.
Is there a method to add this word to your vocabulary using the optimizer tool?
Thanks
Emins
I tried to integrate it into Python but I got this error:
Traceback (most recent call last):
File "SpeechRecognition.py", line 14, in <module>
handle = Porcupine(library_path, model_file_path, keyword_file_paths=keyword_file_paths, sensitivities=sensitivities)
NameError: name 'Porcupine' is not defined
And this is my code:
# WAKE WORD
library_path = ['WakeWord/lib/${SYSTEM}/${MACHINE}/']
model_file_path = ['WakeWord/lib/common/porcupine_params.pv']
keyword_file_paths = ['WakeWord/resources/keyword_files/alexa_windows.ppn']
sensitivities = [0.5, 0.4]
handle = Porcupine(library_path, model_file_path, keyword_file_paths=keyword_file_paths, sensitivities=sensitivities)
# WAKE WORD END
def get_next_audio_frame():
    pass

while True:
    pcm = get_next_audio_frame()
    keyword_index = handle.process(pcm)
    if keyword_index >= 0:
        # detection event logic/callback
        pass
(Just the default from github)
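The NameError means the Porcupine class was never imported before being used; the snippet also keeps the README's `${SYSTEM}/${MACHINE}` placeholders, wraps the single library and model paths in lists, and gives two sensitivities for one keyword file. A sketch of the setup that has to precede the constructor call (directory names follow the snippet above and may need adjusting to your layout):

```python
import os
import sys

# 1) Put the Python binding on the module search path, then import the class.
binding_dir = os.path.join('WakeWord', 'binding', 'python')
sys.path.append(binding_dir)
# from porcupine import Porcupine   # resolves once binding_dir is on sys.path

# 2) library_path and model_file_path must be plain strings (not lists), with
#    the ${SYSTEM}/${MACHINE} placeholders replaced by the actual platform:
library_path = os.path.join('WakeWord', 'lib', 'windows', 'amd64', 'libpv_porcupine.dll')
model_file_path = os.path.join('WakeWord', 'lib', 'common', 'porcupine_params.pv')

# 3) keyword_file_paths stays a list, and sensitivities must match its length
#    (one sensitivity per keyword file, not [0.5, 0.4] for a single keyword):
keyword_file_paths = [os.path.join('WakeWord', 'resources', 'keyword_files', 'alexa_windows.ppn')]
sensitivities = [0.5]
assert len(keyword_file_paths) == len(sensitivities)
```

With the import in place and the argument types corrected, the constructor call from the snippet should no longer raise NameError.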
I tried creating a new wake word using the following command:
tools/optimizer/mac/x86_64/pv_porcupine_optimizer -r resources -w “hello world” -p mac -o ~/Users/mahmoud/Desktop
When I use one word, for example hello, it gives the pronunciation error (I tried many other words); when I use two or more, I get
[ERROR] invalid arguments
The test file is working fine, but I can't add any other words.
Any ideas?
OS: Mac OS high sierra (10.13.4)