
diy-alexa's People

Contributors

cgreening, ramainen, rbegamer, wiltonlazary


diy-alexa's Issues

Running into issues

Heya.

I tried to test-deploy the repo to my ESP32 (Lolin D32), but I ran into several issues:

  • There was no .ino file, so I renamed main.cpp to src.ino, as Visual Studio Code (with the Arduino plugin) and the Arduino IDE both complained about the lack of a .ino file. Is this OK?
  • There were a lot of include errors, and despite adding includePaths to c_cpp_properties.json it still didn't work, so I copied the header files into the src folder. Is this OK as well?
  • After doing so, I no longer got include errors but instead got this:
src:100:33: error: expected type-specifier before 'I2SMicSampler'

 I2SSampler *i2s_sampler = new I2SMicSampler(i2s_mic_pins, false);

I can't seem to figure out what the cause of this is. I'm pretty sure I wasn't supposed to change those file names and locations, but I couldn't get rid of the errors by any other means. Any tips?

_mar_sounds_ Directory

Can you write a few lines of explanation about the _mar_sounds_ directory?

If I chose the custom word "dinosaur", could I just put a number of wavs like "dikobrauz", "dundellion", "dinoland", "bulbasaur" and so on into this folder?
Must those files be exactly 1 second at 16 kHz, 32-bit?
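For reference, here is a minimal sketch (Python, with a hypothetical folder name) that checks whether a folder of clips matches the format of the Google speech commands set, on the assumption that the training notebook expects 1-second, 16 kHz, mono, 16-bit PCM wavs:

import wave
from pathlib import Path

# check every clip in a (hypothetical) custom wake word folder
for path in sorted(Path("model/speech_data/dinosaur").glob("*.wav")):
    with wave.open(str(path)) as wav:
        rate = wav.getframerate()          # expected: 16000
        channels = wav.getnchannels()      # expected: 1 (mono)
        width = wav.getsampwidth()         # expected: 2 bytes = 16-bit
        seconds = wav.getnframes() / rate  # expected: roughly 1.0
        print(f"{path.name}: {rate} Hz, {channels} ch, {width * 8}-bit, {seconds:.2f} s")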

Circuit Sketch Diagram Required

Greetings,

My project is somehow not working properly, and I think it's because of a circuit setup that I may have done incorrectly.
Could you please provide a circuit sketch so that the connections become clear?

Thanks.

PlatformIO (error)

[screenshot]

I installed Visual Studio Code and downloaded the PlatformIO extension, but its home page didn't open; I waited nearly 30 minutes for it to open. What do you think the reason is?

Can you reuse pins for audio input and output?

Hi, first I want to thank you for the series of tutorial videos on YouTube.
If I configure the input as the left channel and the output as the right channel, I wonder if I can share the clock/word-select wires for both audio input and output? Something like below:

static const i2s_pin_config_t pin_config = {
    .bck_io_num = 4,
    .ws_io_num = 5,
    .data_out_num = 18,
    .data_in_num = 17,
};

Or, configure the pins separately, but use the same pins for bck_io_num and ws_io_num:

// input
static const i2s_pin_config_t pin_config_0 = {
    .bck_io_num = 4,
    .ws_io_num = 5,
    .data_out_num = 18,
    .data_in_num = I2S_PIN_NO_CHANGE
};
i2s_set_pin(i2s_num_0, &pin_config_0);

// output
static const i2s_pin_config_t pin_config_1 = {
    .bck_io_num = 4,
    .ws_io_num = 5,
    .data_out_num = I2S_PIN_NO_CHANGE,
    .data_in_num = 17
};
i2s_set_pin(i2s_num_1, &pin_config_1);

frankie "whyengineer"'s fork of ARM's CMSIS for the ESP32

I have no idea, but as soon as I saw it I thought it was interesting, as https://github.com/UT2UH/ML-KWS-for-ESP32 is just an implementation of https://github.com/ARM-software/ML-KWS-for-MCU

Feel free to close off my 'issues', as they are not really issues; I just wondered whether things like the beamforming stereo mic and using ESP32s as a distributed array might be ideas of interest.

I thought I would post another one. It might be outdated now, as it targets the old 1.15 version of TensorFlow, but look at the accuracy of the models on the validation set, their memory requirements, and the operations per inference in the table in the repo above.

The CRNN and DS-CNN architectures are really interesting and might be far better than a plain CNN if frankie "whyengineer"'s fork of ARM's CMSIS for the ESP32 works, as that is a collection of Arm boffins' fast maths for which we don't have native libs to make the above examples work.

Maybe it's not the fast maths and just the driver pack, but you will know with a faster glance than I would, or know of substitute maths libs that do something similar?

Custom Wake Words Recognition

testing.zip
Hello, I tried to make my custom wake word using my own recorded voice, but it is not working.
The end result of generating the training data shows empty results for the word "testing".
[screenshot]

  • I managed to get it running with any of the words from the Google command set audio files, but had no success when I tried to record my own words.

I tried recording 10 samples of the word "testing" in Audacity and exporting the results at 256 kb/s: 16.0 kHz, 16-bit, 1 channel, PCM (little-endian / signed).

The "testing" folder was created inside model\speech_data, and all of the exported wave files were placed under the "diy-alexa-master\model\speech_data\testing" folder.

In Generate Training Data.ipynb, I changed the source as follows:

1. Added "testing" to the words array.

#list of folders we want to process in the speech_data folder
from tensorflow.python.ops import gen_audio_ops as audio_ops
words = [
'backward',
'bed',
'bird',
'cat',
'dog',
'down',
'eight',
'five',
'follow',
'forward',
'four',
'go',
'happy',
'house',
'learn',
'left',
'marvin',
'nine',
'no',
'off',
'on',
'one',
'right',
'seven',
'sheila',
'six',
'stop',
'testing',
'three',
'tree',
'two',
'up',
'visual',
'wow',
'yes',
'zero',
'_background',
]

2. Replaced the word "marvin" with the word "testing" in the following code:

# process all the words and all the files
for word in tqdm(words, desc="Processing words"):
    if '_' not in word:
        # add more examples of marvin to balance our training set
        # repeat = 70 if word == 'marvin' else 1
        repeat = 70 if word == 'testing' else 1
        process_word(word, repeat=repeat)

print(len(train), len(test), len(validate))

3. Lastly, I added the following code at the end for testing:

word_index = words.index("testing")

X_testing = np.array(X_train)[np.array(Y_train) == word_index]
Y_testing = np.array(Y_train)[np.array(Y_train) == word_index]
plot_images2(X_testing[:20], IMG_WIDTH, IMG_HEIGHT)
print(Y_testing[:20])

Additional image for reference:

[screenshot]
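One diagnostic worth running here is a quick check of the recorded clips themselves; this is a sketch based on my assumption that the notebook silently drops clips that are not close to 16000 samples long or that are too quiet (the loudness check is illustrative):

import wave
from pathlib import Path

import numpy as np

for path in sorted(Path("model/speech_data/testing").glob("*.wav")):
    with wave.open(str(path)) as wav:
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
    # length in samples and average loudness of the clip
    rms = np.sqrt(np.mean((samples / 32768.0) ** 2))
    print(f"{path.name}: {len(samples)} samples, rms={rms:.4f}")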

MFCC improves accuracy by several % over a spectrogram

https://github.com/StuartIanNaylor/simple_audio_tensorflow

simple_audio.py is the mini command set and much quicker just to play with
simple_audio.py is the full command set

Both the above are spectrograms

simple_audio_mfcc_frame_length1024_frame_step512.py is just MFCC hacked into the same script.
You get a decent accuracy improvement from MFCC alone over a spectrogram.

simple_audio_prune.py just checks each wav against the model and deletes it if it scores under a threshold (start at 0.1 and work up, as the model will change on each run as the worst samples are removed).
I think I will post a CSV or JSON of the complete pruned full command set, as it may take some time :)
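The pruning loop boils down to something like this sketch (the helper names and the feature function are illustrative, not the actual script):

import numpy as np

def prune(model, files, labels, make_features, threshold=0.1):
    # keep only the wavs the current model scores above the threshold
    kept = []
    for path, label in zip(files, labels):
        features = make_features(path)  # spectrogram or MFCC for one wav
        probs = model.predict(features[np.newaxis, ...], verbose=0)[0]
        if probs[label] >= threshold:
            kept.append(path)
    return kept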

Code functions

[screenshot]

In my previous issue you told me to understand the code, so I am opening this issue to get some idea of the functions in the code. OK, can you tell me why you use the tfmicro folder and the src folder in this code?

The Google command set has a lot of bad samples

The Google command set has approximately 10% badly cut, trimmed and padded words. There are two versions of the command set; the specific one I used was v2.0, but I presume both are the same in terms of bad samples.
I was playing with https://github.com/linto-ai/linto-desktoptools-hmg, which allows you to test your trained data and play the failures.
I was shocked by how many bad audio files are in the command set and how much that can affect accuracy.
I was using v2.0, as said, and used the word "visualise" as it has 3 syllables; obviously "marvin" has 2, but more is better.
With HMG I played back the false positives and negatives, and practically all of them were junk.
So I deleted them, reran many times, and ended up deleting about 10% of "visualise" and a lot of random junk files.
After I did this, my recognition accuracy improved massively and the false negatives/positives dropped really low.

The Linto HMG is again just TensorFlow, but the GUI is really good for capturing those false positives/negatives and listening to see whether each one is likely a bad sample.

"Hey Marvin" would have been far better; as said, the more phonemes and the more unique, the better.
With Deepspeech or Kaldi you can output a transcript of word occurrences in a sample, and with sox, at a guess, you could grab a "hey" from somewhere and tack it onto "marvin" with a bit of code.
Apart from the Google command set I don't know of another word dataset, as they all seem to be ASR sentence datasets, but with the code above you could again extract words after running transcript output from Deepspeech or Kaldi.
https://github.com/jim-schwoebel/voice_datasets

I am not sure adding large quantities of words in a much bigger dataset actually increases accuracy for the work entailed in making sure what you feed it is good.
I really do suggest you give HMG or some other tool a go and delete the dross out of the Google command set, as I think you will be surprised how much effect bad samples can have on results.
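For the "tack 'hey' onto 'marvin'" idea, the splicing itself is only a few lines; a sketch with numpy, assuming both clips are 16 kHz mono 16-bit PCM (the file names are hypothetical):

import wave

import numpy as np

def read_wav(path):
    with wave.open(path) as w:
        return np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

combined = np.concatenate([read_wav("hey.wav"), read_wav("marvin.wav")])

with wave.open("hey_marvin.wav", "wb") as out:
    out.setnchannels(1)       # mono
    out.setsampwidth(2)       # 16-bit PCM
    out.setframerate(16000)   # 16 kHz, matching the command set
    out.writeframes(combined.tobytes())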

InvalidArgumentError: unknown file type: speech_data\backward\0165e0e8_nohash_0.wav [Op:IO>AudioReadableInit]

Hi there,

Firstly, what a great project and thank you for all the information you have provided!

I've been implementing my own version of the firmware, but am trying to use your Jupyter notebook for preprocessing the dataset. I downloaded the same dataset, extracted the files using the same command, and ran your notebook 'Generate Training Data.ipynb', but I get the error:

InvalidArgumentError: unknown file type: speech_data\backward\0165e0e8_nohash_0.wav [Op:IO>AudioReadableInit]

Is there anything you could recommend to solve this, or any further information I can provide?

Thanks a lot!
Edward
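One thing worth checking (an assumption on my part: the backslashes suggest Windows, and OS-specific path separators handed to the tensorflow-io audio reader are a common cause of this error) is whether the file decodes with TensorFlow's built-in wav decoder and a forward-slash path:

import tensorflow as tf

path = "speech_data/backward/0165e0e8_nohash_0.wav"
audio, sample_rate = tf.audio.decode_wav(tf.io.read_file(path))
print(audio.shape, int(sample_rate))  # a valid clip prints something like (16000, 1) 16000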

Different I2S connection on the M5Stack Atom Echo... any chance to adapt this?

Hi,

I am currently trying to port your "diy-alexa" to an Atom Echo from M5Stack; the Atom Echo because it is very small.
The I2S connection diagram of the Atom is simpler, and I wonder if it fits your diy-alexa (I2S_MIC_LEFT_RIGHT_CLOCK is missing!).
[screenshot]

What I changed was the pin mapping in config.h, like below... but it doesn't work.

// Which channel is the I2S microphone on? I2S_CHANNEL_FMT_ONLY_LEFT or I2S_CHANNEL_FMT_ONLY_RIGHT
#define I2S_MIC_CHANNEL I2S_CHANNEL_FMT_ONLY_LEFT

#define I2S_MIC_SERIAL_CLOCK GPIO_NUM_33
#define I2S_MIC_SERIAL_DATA GPIO_NUM_23

// Analog Microphone Settings - ADC1_CHANNEL_7 is GPIO35
#define ADC_MIC_CHANNEL ADC1_CHANNEL_7

// speaker settings
#define I2S_SPEAKER_SERIAL_CLOCK GPIO_NUM_19
#define I2S_SPEAKER_LEFT_RIGHT_CLOCK GPIO_NUM_33
#define I2S_SPEAKER_SERIAL_DATA GPIO_NUM_22
...

Any chance of getting "diy-alexa" running on the Atom Echo?

Thanks in advance

Steve

DIY Alexa is not responding

[screenshot]

I uploaded the code correctly, but there is no response when I say "Marvin". Please help me; I can't understand what the reason for this is.

DIY Alexa not working

[screenshot]

I entered my wifi router's SSID and password correctly, but it doesn't connect to wifi. When the ESP32 tries to connect, the wifi router's LEDs blink and the serial monitor says the connection failed. Please help me; I am waiting for your reply, friend.

Requirements install problem

Hi! I'm getting this when I run: python3 -m pip install -r requirements.txt

Building wheels for collected packages: pyaudio, jupyter-nbextensions-configurator, jupyter-latex-envs
Building wheel for pyaudio (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /home/neverhags/Development/diy-alexa/model/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-unsihq35/pyaudio/setup.py'"'"'; file='"'"'/tmp/pip-install-unsihq35/pyaudio/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-l9scct5u
cwd: /tmp/pip-install-unsihq35/pyaudio/
Complete output (16 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
copying src/pyaudio.py -> build/lib.linux-x86_64-3.8
running build_ext
building '_portaudio' extension
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/src
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/neverhags/Development/diy-alexa/model/venv/include -I/usr/include/python3.8 -c src/_portaudiomodule.c -o build/temp.linux-x86_64-3.8/src/_portaudiomodule.o
src/_portaudiomodule.c:29:10: fatal error: portaudio.h: No such file or directory
29 | #include "portaudio.h"
| ^~~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

ERROR: Failed building wheel for pyaudio

Do you have any idea what I can do? And thanks a lot, nice work with this voice control! I want to use it to turn my PC on and off.

Problem related to the code

[screenshot]

In the middle of this picture, the code has a function named "get_files" highlighted in purple colour. Do you use this function to import the background audio files into the Jupyter notebook?
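This is not the notebook's actual implementation, but a helper with that name typically just lists the wav files in one word's folder; a hypothetical reconstruction:

from pathlib import Path

def get_files(word):
    # list the wav files for one word (or the background folder)
    return sorted(str(p) for p in Path("speech_data", word).glob("*.wav"))

background_files = get_files("_background")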

Export tflite model to C++

image

How do I export the tflite model to C++? Please tell me; I can't understand. Can you explain a little bit, please?
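For context, here is a sketch of the usual route from a trained Keras model to something C++ can compile (the model path is illustrative):

import tensorflow as tf

model = tf.keras.models.load_model("fully_trained.model")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)
# then turn the flatbuffer into a C array that the firmware compiles in, e.g.:
#   xxd -i converted_model.tflite > model_data.cc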

I can't get sound output

I built the project, but I can't get any voice output. It takes the command, such as turning on the light, but it doesn't produce any sound through the speaker via the I2S amp.
[screenshot]

Hi Chris

I got an ESP32 Audio Kit, as it's just a WROVER with a codec built in: https://www.banggood.com/ESP32-Aduio-Kit-WiFi-bluetooth-Module-ESP32-Serial-to-WiFi-Audio-Development-Board-with-ESP32-A1S-p-1449256.html

£13 not too pricey...

I got the ADF working with what they call https://github.com/Ai-Thinker-Open/ESP32-A1S-AudioKit; it's just a case of downloading the toolchain, setting the ADF path to this, and the IDF path to the IDF contained within.

I have run a few of the examples and the complete ADF seems to work, even if the onboard mics seem extremely insensitive.

I just wondered if you had done the same and tinkered with the ADF, and maybe grasped how to set the input volume or the ALC; I seem to have it running, but I'm damned if I can tell the difference :)

Have you given them a go, along with the ADF? From about £9 they are at an interesting price point.

Hermes protocol ?

Hi, it would be fantastic to implement the Hermes protocol in order to use this with Rhasspy.

Code functions

What programming language did you use to make the audio input folder code? Python or C++?

Input function

[screenshot]

I am using an I2S microphone; do I need to remove the analog microphone ADC code from the firmware folder? And what programming language did you use in this audio input file?

ProcessI2SData() ?

Hi, a question: what is the processI2SData function for?

void I2SMicSampler::processI2SData(uint8_t *i2sData, size_t bytesRead)
{
    // reinterpret the raw I2S bytes as 32-bit samples
    int32_t *samples = (int32_t *)i2sData;
    for (int i = 0; i < bytesRead / 4; i++)
    {
        // scale each 32-bit sample down and push it into the sample buffer
        addSample(samples[i] >> 11);
    }
}

Directional microphone

Again, not an issue; I am just wondering how much load an ESP32 can take. I2S is 2-channel, so I was wondering if you could do something similar to this digitally.

https://invensense.tdk.com/wp-content/uploads/2015/02/Low-Noise-Directional-Studio-Microphone-Reference-Design1.pdf

Beamforming
Beamforming involves processing the output of multiple microphones (or in this case, multiple mic arrays) to create a directional
pickup pattern. For recording and live sound applications it is important that the microphone only picks up sound from one
direction, such as from the singer or instrument, and attenuates the sound that is off the main axis. Beamforming is implemented in
this design using analog delays, an equalization filter, and a summing amplifier.
A two-element array is set up by placing two microphone boards a distance, d, apart. A cardioid pattern
(Figure 4) is achieved by delaying the signal from one array board by the amount of time it takes sound to travel between the two
boards, and subtracting this delayed signal from the signal from the first microphone array board. With this type of spatial response,
the microphone rejects sounds from the sides and rear, while picking up sounds incident to the front of the microphone.

We don't have the mic clusters, but the stereo pair could be the two-element array, with the delay of the mic distance and the subtraction done digitally?
It's just the delay part of https://hackaday.io/project/162628-audio-delay-and-vox-using-esp32 minus the vox.

Does the ESP32 lack the memory to buffer a stereo I2S input for the initial short delay (the mic distance at the speed of sound) before it enters the KW ring buffer?

It's poor man's beamforming, but for many, the improvement of a directional mic over omnidirectional pickup of everything is a big plus for far field.
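As a toy, offline illustration of the delay-and-subtract cardioid described above (the mic distance and sample rate are assumptions):

import numpy as np

def cardioid(left, right, mic_distance=0.05, sample_rate=16000, c=343.0):
    # delay one channel by the time sound takes to travel between the mics
    delay = int(round(mic_distance / c * sample_rate))
    delayed = np.concatenate([np.zeros(delay), right[:len(right) - delay]])
    # subtracting the delayed rear signal attenuates sound from behind
    return left - delayed

out = cardioid(np.random.randn(16000), np.random.randn(16000))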

Own key word

Hey,

Really nice work!

Maybe a stupid question: how can I train my own key word? I am still a newbie to this.

Greetings

General ESP32 question (SPIFFS)

So I don't think this is actually an issue with your code; it's more of a general ESP32 question that I'm hoping you can give some insight on.

I've got the project running and working, except I'm having trouble playing the .wav files. In one of the projects I was running before, I was getting a 'SPIFFS failed to mount' error; when I run this project I don't get that error, but I do get some errors when trying to load the wave files:

ERROR: bit depth 16379 is not supported please use 16 bit signed integer
ERROR: sample rate 200 is not supported please us 16KHz
fmt_chunk_size=0, audio_format=0, num_channels=0, sample_rate=200, sample_alignment=7984, bit_depth=16379, data_bytes=1073469932

I get the exact same messages for every wav file that the app attempts to open. I've called SPIFFS.format(), but that doesn't do much to help. I do get a value back when I check the total size (~1.3MB). I thought the ESP32 has 4MB of SPI flash, and I don't see this SPIFFS size in the SPIFFS config anywhere, so I haven't verified whether this is the correct value yet. I've tried running the project on two different dev boards so far, but both behave the same way.

I just tried out the ESP32 data uploader for the Arduino IDE, was able to upload one of the joke files, and the app successfully played the file. It seems like it's only able to play the file once, though. Maybe it's a PlatformIO issue?

Anyway, if you have any insight, I would love to hear it. Thanks.

Guru Meditation Error

Hi,
I can wake it up when I call 'marvin', and it makes the 'ting' sound, but it no longer responds to subsequent commands and gives a Guru Meditation Error, as shown in the pic. Any suggestions, please?

[screenshot]

Generate Training Data

When I make a new wake word using "Generate Training Data", what is the connection between the code and "Generate Training Data" in this project?

FreeRTOS

I'm trying to build the code but I'm missing something with FreeRTOS.

How do I install it into PlatformIO?

With pio lib install... something?

Thanks

Command

Does this command work in the Windows cmd prompt: "xxd -i converted_model.tflite > model_data.cc"?
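xxd is a Unix tool, so it is not available in a stock Windows cmd prompt. A small Python stand-in that writes an equivalent C array (a sketch; the array name mirrors what xxd -i would generate from that file name):

data = open("converted_model.tflite", "rb").read()
with open("model_data.cc", "w") as out:
    out.write("unsigned char converted_model_tflite[] = {\n")
    out.write(", ".join(f"0x{b:02x}" for b in data))
    out.write("\n};\n")
    out.write(f"unsigned int converted_model_tflite_len = {len(data)};\n")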

HTTP response status is 200 but content is empty

Hi Chris, thank you so much for this open source project. You might have heard my voice if you checked your wit.ai app for this project.

I set up my hardware and flashed this code, and it works well for the "Marvin" wake-up and the following "tell me a joke" / "what's the life" commands.

The issue is:
But when I replace the URL and access_key with my own settings (which work well with curl and my locally recorded .wav samples), I don't get the expected JSON content. I added this debugging line in getResults(), but the returned entities, intents and text are all empty:

if (status == 200)
{
    char temp[1024];
    int read_cnt = m_wifi_client->readBytes(temp, 1024);
    Serial.printf("Http str is: %s\n", temp);
}

Do you possibly have any clue? I am really confused, since the only difference is the replacement with my wit.ai settings. In my app I did receive the recorded and uploaded voice sample, but it seems that the HTTP response got something wrong.

Thank you so much for any guidance for debugging.

Best regards,
Xu
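One way to narrow this down (a sketch, not from the repo): send the same wav to wit.ai from a PC and compare with what the firmware sends. If this returns the expected JSON, the access key and app are fine and the problem is on the firmware side, for example reading the body before it has fully arrived:

import requests

WIT_ACCESS_KEY = "YOUR_ACCESS_KEY"  # placeholder
with open("sample.wav", "rb") as f:
    response = requests.post(
        "https://api.wit.ai/speech",
        headers={
            "Authorization": f"Bearer {WIT_ACCESS_KEY}",
            "Content-Type": "audio/wav",
        },
        data=f,
    )
print(response.status_code)
print(response.text)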

Limitations to .WAV file?

Hi,
Are there any limitations on the .WAV files in this project? I tried it with the voice generator and it works fine, but when I tried with music the speaker didn't respond at all. Any suggestions?
Best.
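If the firmware expects 16-bit, 16 kHz, mono PCM (that is what its error messages elsewhere in these issues ask for), an arbitrary music file would need converting first; a sketch with pydub, which needs ffmpeg installed (file names are illustrative):

from pydub import AudioSegment

song = AudioSegment.from_file("music.mp3")  # any input format ffmpeg knows
song = song.set_frame_rate(16000).set_channels(1).set_sample_width(2)
song.export("music_16k.wav", format="wav")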

Outputs "Average detection time 95ms", but the system isn't working

I have tried uploading the sketch; on a successful upload, this is what happens:

--- Quit: Ctrl+C | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
Starting up
Total heap: 312308
Free heap: 236128
E (2513) SPIFFS: mount failed, -10025
[E][SPIFFS.cpp:89] begin(): Mounting SPIFFS failed! Error: -1
[E][vfs_api.cpp:22] open(): File system is not mounted
ERROR: bit depth 16379 is not supported please use 16 bit signed integer
ERROR: sample rate 200 is not supported please us 16KHz
fmt_chunk_size=0, audio_format=0, num_channels=0, sample_rate=200, sample_alignment=7984, bit_depth=16379, data_bytes=1073462756
[E][vfs_api.cpp:22] open(): File system is not mounted
(the "File system is not mounted" and unsupported-format errors above repeat for every wav file the app tries to open)
Loading model
12 bytes lost due to alignment. To avoid this loss, please make sure the tensor_arena is 16 bytes aligned.
Used bytes 22604

Created Neral Net
m_pooled_energy_size=43
Created audio processor
Starting i2s
Average detection time 95ms
Average detection time 95ms - REPEATS.

Here is my hardware:
1 x INMP441 MEMS Omnidirectional Microphone Module High Precision/SNR Low Power I2C Interface Supports ESP32

1 x ESP32 Development Board WiFi+Bluetooth

1 x I2S Audio Breakout - MAX98357A (Sparkfun USA)

build error

hi,
I got a compilation error; it showed:

lib\tfmicro/tensorflow/lite/kernels/internal/max.h:29:10: error: 'fmax' is not a member of 'std'
lib\tfmicro/tensorflow/lite/kernels/internal/min.h:29:10: error: 'fmin' is not a member of 'std'

Dear Mr. atomic14, I look forward to your teaching

Hi, dear Mr. atomic14,
Based on your suggestions, I learned to report issues on GitHub, thank you!!!
Now I have successfully imported your project into PlatformIO, but an error occurred when I compiled the project (lib\tfmicro/tensorflow/lite/kernels/internal/min.h:29:10: error: 'fmin' is not a member of 'std'). The error screenshot is as follows:
[screenshot]
In addition, my steps are also marked in the screenshots. I look forward to your guidance, thank you!

IDF port advice?

Hi atomic, many thanks for the firmware.

Apologies for opening an issue, as it's not an issue with your library.

A few months back I ported your firmware to IDF. While it works, I found that when calling invoke() the response time was very slow (I think roughly 2.5 times the original).

I measured the time taken for code execution, and the slow response came down to the call into the TensorFlow function.

I was wondering if you had any insight into why this might be? Originally I thought it might be an issue with the C linkage; however, I ran my code through Arduino too as .cpp.

If not, no worries; I was planning to wait for the C implementation of TensorFlow Micro 👍. Cheers,

