googlecreativelab / teachablemachine-community

1.4K stars · 58 watchers · 629 forks · 50.48 MB

Example code snippets and machine learning code for Teachable Machine

Home Page: https://g.co/teachablemachine

License: Apache License 2.0

Languages: JavaScript 7.04%, TypeScript 49.79%, HTML 4.05%, Dockerfile 3.24%, Processing 1.96%, C++ 16.28%, C 2.90%, Python 10.99%, Shell 3.74%

teachablemachine-community's Introduction

Teachable Machine Community

What is Teachable Machine?

Teachable Machine is a web-based tool that makes creating machine learning models fast, easy, and accessible for everyone. You can try it here.

Who is it for?

Educators, artists, students, innovators, makers of all kinds – really, anyone who has an idea they want to explore. No prerequisite machine learning knowledge required.

How does it work?

You train a computer to recognize your images, sounds, and poses without writing any machine learning code. Then, use your model in your own projects, sites, apps, and more.

What is this repository for?

This repository contains two components of Teachable Machine:

  1. A libraries section that contains all of the machine learning code used in Teachable Machine. Under the hood we use TensorFlow.js, a library for machine learning in JavaScript, to train and run the models you make in your web browser. The libraries section also contains the API for the image, audio, and pose helper libraries that make it easier to use the models exported by Teachable Machine in your own projects (a usage sketch follows this list).

  2. A snippets section that contains markdown snippets displayed inside the export panel in Teachable Machine. These snippets contain code and instructions on how to use the exported models from Teachable Machine in languages like JavaScript, Java, and Python.
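
For orientation, here is a minimal sketch of what the image helper library looks like in use (the model ID and element ID below are hypothetical; tmImage comes from the @teachablemachine/image package or its hosted script bundle):

const modelBase = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/'; // hypothetical ID

async function run() {
  // Load the trained model plus its class metadata.
  const model = await tmImage.load(modelBase + 'model.json', modelBase + 'metadata.json');
  // Classify any image, canvas, or video element on the page.
  const predictions = await model.predict(document.getElementById('my-image'));
  console.log(predictions); // e.g. [{ className: 'Class 1', probability: 0.98 }, ...]
}

run();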

How can I send feedback or get in contact with you?

You have a few options:

Community Contributions and Projects

Disclaimer

This is an experiment, not an official Google product. We’ll do our best to support and maintain this experiment but your mileage may vary.

We encourage open sourcing projects as a way of learning from each other. Please respect our and other creators’ rights, including copyright and trademark rights when present, when sharing these works and creating derivative work. If you want more info on Google's policy, you can find that here.

teachablemachine-community's People

Contributors

alexanderchen, alikarpuzoglu, bomanimc, danielwilczak101, halfdanj, hapticdata, irealva, joeyklee, khanhlvg, leeyunjai82, mikakruschel, mqcmd196, mrxdst, natowi, scottamain, shiffman, snehitvaddi, sryu1, tailorware, toddinlb, zelacerda

teachablemachine-community's Issues

Uploading zip file with audio files

Hi,

Is there any plan or way to upload our own dataset containing mp3 files? It would be perfect if we were able to upload pre-recorded audio files to Teachable Machine.

As far as I understand, currently the only way is to record the data through Teachable Machine itself.

Different inference results

Hi, I was testing Teachable Machine in the browser and got pretty good results for a personal project. However, when downloading the generated model to Keras and using the code snippet that you provide, the inference results are totally different on the same test set. Could it be because the Keras snippet resizes the images differently from the Teachable Machine application in the browser? Any ideas on how to fix it?
Many thanks for your help!

iOS compatibility

Hi,
when I import my model and generate a shareable link,
and then try to use it on my iPhone (both on Safari and Chrome), I get an error:

(screenshot attached: IMG_3802)

I have checked the camera permissions and it should be working...

Thanks!

Accuracy of image doesn't match

This is the accuracy in Teachable Machine:
(screenshot)

This is the accuracy of my tfjs code (using the exact same model):
(screenshot)

The accuracy isn't the same as in Teachable Machine!

This is my code:

<img id="pic_a" src="#">

const URL = "https://teachablemachine.withgoogle.com/models/L_4YPRGX/";

let model, labelContainer, maxPredictions;

// Load the model and metadata, then run a prediction on the image.
init().then(predict);

async function init() {
  const modelURL = URL + "model.json";
  const metadataURL = URL + "metadata.json";
  model = await tmImage.load(modelURL, metadataURL);
  maxPredictions = model.getTotalClasses();
}

var prop_total_a = 0; // (unused)

async function predict() {
  // Second argument is the "flipped" flag.
  const prediction = await model.predict(document.getElementById("pic_a"), false);
  console.log(prediction);
}

Thanks in advance!

Using the libraries in non-browser NodeJS

(node:3904) UnhandledPromiseRejectionWarning: ReferenceError: FileReader is not defined
    at D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-core\dist\io\browser_files.js:160:42
    at new Promise (<anonymous>)
    at BrowserFiles.<anonymous> (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-core\dist\io\browser_files.js:159:39)
    at step (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-core\dist\io\browser_files.js:48:23)
    at Object.next (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-core\dist\io\browser_files.js:29:53)
    at D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-core\dist\io\browser_files.js:23:71
    at new Promise (<anonymous>)
    at __awaiter (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-core\dist\io\browser_files.js:19:12)
    at BrowserFiles.load (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-core\dist\io\browser_files.js:153:16)
    at D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:279:50
    at step (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:54:23)
    at Object.next (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:35:53)
    at D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:29:71
    at new Promise (<anonymous>)
    at __awaiter (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:25:12)
    at loadLayersModelFromIOHandler (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:267:12)
    at Object.<anonymous> (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:251:35)
    at step (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:54:23)
    at Object.next (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:35:53)
    at D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:29:71
    at new Promise (<anonymous>)
    at __awaiter (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:25:12)
    at Object.loadLayersModelInternal (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\models.js:230:12)
    at Object.loadLayersModel (D:\Code\JS\SAHBot\node_modules\@tensorflow\tfjs-layers\dist\exports.js:224:21)
    at Object.<anonymous> (D:\Code\JS\SAHBot\node_modules\@teachablemachine\image\dist\custom-mobilenet.js:378:49)
    at step (D:\Code\JS\SAHBot\node_modules\@teachablemachine\image\dist\custom-mobilenet.js:49:23)
    at Object.next (D:\Code\JS\SAHBot\node_modules\@teachablemachine\image\dist\custom-mobilenet.js:30:53)
    at D:\Code\JS\SAHBot\node_modules\@teachablemachine\image\dist\custom-mobilenet.js:24:71
(node:3904) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:3904) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

my code:

const tf = require("@tensorflow/tfjs");
const tfn = require("@tensorflow/tfjs-node");
const tmImage = require('@teachablemachine/image');

tmImage.loadFromFiles('model.json', 'weights.bin', 'metadata.json').then(model => {
 ...
});
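
For what it's worth, one possible workaround (an assumption on my part, not an official recommendation): loadFromFiles expects browser File objects and relies on FileReader, which doesn't exist in Node. With @tensorflow/tfjs-node you can instead load the exported layers model from disk via its file:// IO handler. A sketch, assuming the unzipped model lives in ./model and test.jpg is a hypothetical input image:

// Hedged sketch: load the exported model directly with tfjs-node instead of
// tmImage.loadFromFiles (which needs the browser's FileReader).
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

async function main() {
  // model.json references weights.bin relative to its own location.
  const model = await tf.loadLayersModel('file://./model/model.json');

  // Approximate the library's preprocessing: 224x224 input, normalized to [-1, 1].
  const image = tf.node.decodeImage(fs.readFileSync('test.jpg'), 3);
  const input = tf.image.resizeBilinear(image, [224, 224])
    .toFloat().div(127.5).sub(1).expandDims(0);

  const scores = await model.predict(input).data();
  console.log(scores); // raw per-class probabilities; class names live in metadata.json
}

main().catch(console.error);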

Tensorflow Savedmodel Export doesn't work

When trying to download the model as a "SavedModel", after converting it in the cloud, the browser tries to redirect to a 404 page, and the downloaded zip contains only labels.txt and an empty "model.savedmodel" folder, not the actual model. The Keras export, on the other hand, now seems to work, but it also tries to redirect to the 404.

How to join 2 trained model data?

Hi! I was wondering how I can combine the data from two trained models. E.g., my friend wants me to help with training an AI to recognize some objects. I know we could take and share photos, but is there any other clever way? Maybe merging the JSON data?

Uncaught (in promise) ReferenceError: require is not defined

After generating a model on https://teachablemachine.withgoogle.com/train/audio, I receive the following error when trying to run it.

Uncaught (in promise) ReferenceError: require is not defined
    at speech-commands.min.js:17
    at speech-commands.min.js:17
    at Object.next (speech-commands.min.js:17)
    at speech-commands.min.js:17
    at new Promise (<anonymous>)
    at i (speech-commands.min.js:17)
    at speech-commands.min.js:17
    at e.<anonymous> (speech-commands.min.js:17)
    at speech-commands.min.js:17
    at Object.next (speech-commands.min.js:17)

Any idea?

Export Class + Download Samples to images + Pascal VOC XML?

I already have my own setup locally to generate training models. Would it be possible to use Teachable Machine not for the training function but just for the webcam/samples (especially the “Import images from Google Drive”) function to export images with the option to save the Class labels in Pascal VOC XML format?

offline demo

First of all thanks, and congratulations to the team of GoogleCreativeLabs !
A few questions regarding a classroom demo:

  1. I would like to demo Teachable Machine offline. I plan to use a few Raspberry Pis in a classroom without Wi-Fi. I would like to use and tweak the existing online demo, e.g. teachablemachine.withgoogle.com/train (HTML/JS). Is the content/source available? If not, is there a similar project/framework/demo publicly available?

  2. I plan to use a WebSocket Pi camera. It works from localhost but does not work from a local LAN IP. Is this a security directive (in the Google web code)?

Thanks for reading.

Renaming class CSS minor problem

When I typed 'left', I thought I hadn't typed the 'l', but it was there, just hard to see:
(screenshot)
When I type ' left' it looks fine:
(screenshot)

Chrome Version 78.0.3904.97 (Official Build) (64-bit)

Teach new folks how to find AI principles and Inclusive ML resources

Congrats on the launch! 👍

It's amazing to see a link to related K12 curriculum, and a link to inclusive ML under the section about concerns with saving models. I wonder if the primary audience is...

Educators, artists, students, innovators, makers of all kinds – really, anyone who has an idea they want to explore. No prerequisite machine learning knowledge required.

...whether this tool for designing ML systems could also introduce people to other awesome Google work like AI Principles, Responsible AI Practices, and Inclusive ML? It'd be awesome to find ways where tools that help people make their first ML models also give them guardrails and help them work towards principles like:

  • "Be built and tested for safety"
  • "Check the system for unfair biases"
  • "Understand the limitations of your dataset and model"

So I'm wondering what's the minimum this tool could provide to help folks who are new to ML in discovering all the amazing work around evaluation, fairness, interpretability, and ML design (Fairness Indicators: Thinking about Fairness Evaluation and People+AI Guidebook to cite two others). The folks at ML for Kids (tool, code) have done a bunch of awesome work including lessons about this in their tools, so that's maybe another place to find ideas on this.

Hopefully Teachable Machine 2.0 invites a whole new set of people into designing ML, and gets them asking these kinds of questions about how well the models they've created work, and how they can understand, evaluate, debug, and improve them. 😄

Keras .h5 model not supported by Deeplearning4j

I don't know if this is a bug in Deeplearning4j or whether the model is exported incorrectly.

If I try to import it with KerasModelImport.importKerasSequentialModelAndWeights(...), it throws "org.deeplearning4j.nn.modelimport.keras.exceptions.UnsupportedKerasConfigurationException: Unsupported keras layer type Sequential."

The Deep Learning world is new to me, so currently I can't say more about it.

Error when starting the training

I made a sound project and after gathering examples I get following error:

index.bundle.js:2 Uncaught (in promise) Error: numFrames (43) exceeds the minimum numFrames (14) among the examples of the Dataset.
    at Object.S [as assert] (index.bundle.js:2)
    at t.getData (index.bundle.js:104)
    at e.collectTransferDataAsTfDataset (index.bundle.js:104)
    at e.<anonymous> (index.bundle.js:104)
    at index.bundle.js:104
    at Object.next (index.bundle.js:104)
    at index.bundle.js:104
    at new Promise (<anonymous>)
    at s (index.bundle.js:104)
    at e.trainOnDataset (index.bundle.js:104)

Do I have to consider something when recording audio?

How can I use the pretrained audio model in Python?

Thanks for making this awesome project!
But how can I use the pretrained audio model in Python?
I want to use Keras or TensorFlow in Python to train my own model via transfer learning on the pretrained model you mention, Speech-Commands.
Could you please give some instructions to help me do this?

Thanks a lot!

Installing OpenCV Error

Every time I try to install OpenCV with the given command pip3 install Pillow opencv-python opencv-contrib-python
I get the error:

Collecting Pillow
Using cached https://files.pythonhosted.org/packages/5b/bb/cdc8086db1f15d0664dd22a62c69613cdc00f1dd430b5b19df1bea83f2a3/Pillow-6.2.1.tar.gz
Collecting opencv-python
Could not find a version that satisfies the requirement opencv-python (from versions: )
No matching distribution found for opencv-python

I use the Google Coral Dev Board.

Hopefully somebody can tell me how I can fix this problem.

How to open .h5 file in Python?

I tried the Teachable Machine image classification model. I downloaded the (.h5) file of the image model I built using "Export Model > Tensorflow > Keras". I want to inspect the model in the PyCharm IDE. How do I open the (.h5) model in Python? Please advise.

Data augmentation performed

What sorts of image data augmentation, if any, are performed on the uploaded data set? I would like to perform some in advance, but won't if it would be unnecessary effort.

How can I open the rear camera by default

How can I open the rear camera by default instead of the front camera?
Or how can I select the front or rear camera when required?

The only option I found in the code is flip:
const flip = true; // whether to flip the webcam
webcam = new tmImage.Webcam(200, 200, flip); // width, height, flip

Thanks
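
Not an official answer, but webcam.setup() accepts MediaTrackConstraints (see the libraries/image API), so passing a facingMode constraint may work. A sketch, assuming a device that exposes a rear camera:

// Hedged sketch: request the rear-facing camera via MediaTrackConstraints.
webcam = new tmImage.Webcam(200, 200, false); // width, height, flip (no mirroring for a rear camera)
await webcam.setup({ facingMode: "environment" }); // "user" would request the front camera
await webcam.play();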

Something went wrong

To whom it may concern,

I tried training but couldn't export.

  • Converts your model to a keras.h5 model. Note the conversion happens in the cloud, but your training data is not being uploaded, only your trained model.

Because of the message above, I also tried the steps below, but no luck yet.

  • started a new project
  • recorded samples for the classes
  • trained and downloaded the project (which comes with the images)
  • started a new project
  • uploaded the images
  • trained and tried exporting

Just in case I also attached the message of browser,

Screenshot from 2020-03-05 07-59-34

The error message says:

My working environment is:

  • Ubuntu 19.10 on x86_64
  • Chrome Version 80.0.3987.132 (Official Build) (64-bit)

Hope this helps somehow.
(if there is a known solution, please let me know...)

Keras: unrecognized keyword arguments

When running the example code from the Keras section, the following error occurs:

Traceback (most recent call last):
  File "/home/pi/Documents/converted_keras/TMexample.py", line 7, in <module>
    model = tensorflow.keras.models.load_model('/home/pi/Documents/converted_keras/keras_model.h5')
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py", line 137, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 162, in load_model_from_hdf5
    custom_objects=custom_objects)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/model_config.py", line 55, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/serialization.py", line 97, in deserialize
    printable_module_name='layer')
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 191, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/sequential.py", line 358, in from_config
    custom_objects=custom_objects)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/serialization.py", line 97, in deserialize
    printable_module_name='layer')
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 191, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/sequential.py", line 358, in from_config
    custom_objects=custom_objects)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/serialization.py", line 97, in deserialize
    printable_module_name='layer')
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 191, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 1130, in from_config
    process_layer(layer_data)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 1114, in process_layer
    layer = deserialize_layer(layer_data, custom_objects=custom_objects)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/serialization.py", line 97, in deserialize
    printable_module_name='layer')
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 193, in deserialize_keras_object
    return cls.from_config(cls_config)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 456, in from_config
    return cls(**config)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/input_layer.py", line 81, in __init__
    raise ValueError('Unrecognized keyword arguments:', kwargs.keys())
ValueError: ('Unrecognized keyword arguments:', dict_keys(['ragged']))

Any guidance on how to get the model running?

Teachable Machine V2: Exporting doesn't work

Not sure if this is the right place to report this.

Failed to load resource: the server responded with a status of 404 ()
image:1 Access to fetch at 'https://converter-release-pigdgyswcq-uc.a.run.app/convert/image/keras' from origin 'https://teachablemachine.withgoogle.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
index.bundle.js:1413 TypeError: Failed to fetch

https://converter-release-pigdgyswcq-uc.a.run.app/convert/image/keras returns a 404.

Browser is Chromium 78.0.3904.87 64-bit.

Model Cards: Add kid-friendly cards for models used, and for models created

This project is an awesome introduction to ML design, hopefully for many people! 👍

It would be amazing if it could model awesome practices in ML design, and adding model cards seems like one great step:

In order to clarify the intended use cases of machine learning models and minimize their usage in contexts they are not well-suited for, we recommend that released models be accompanied by documentation detailing their performance characteristics.

It would be amazing to do this in an accessible and kid-friendly way for the models used in the training process, even if to start it's as simple as:

For image classification, Teachable Machine uses a model created with the MobileNet v2 architecture trained on the ImageNet dataset.

Further, it would be amazing if the tool prompted users to add a model card to the ML models it helps them create. Especially for first-time folks, a one-line prompt to "Describe the training data so people can decide where it's safe to use this model", with a link to more info, could be a powerful opportunity for teaching folks who are new to ML. So when people share, this could be included along with other metadata:

Created with Teachable Machine, trained with transfer learning using 46 images on 11/7/19. The pre-trained model underneath used the MobileNet v2 architecture trained on the ImageNet dataset. Note from designer: I made this from a few images on the webcam in class today.

I'm happy to brainstorm further or sketch something or collaborate however else if folks are open to it. Thanks!

Problem with Tensorflow.js Model Promise

Hello, I have a problem. I'm trying to use image classification with an image input file, but the model always predicts the same thing. It's not related to the model, because I tested that.

(screenshots attached)

The recognition ability of the Audio model

Excuse me, I have read that the pretrained audio model used in this project is Speech-Commands, and that it uses over 105,000 WAVE audio files of people saying thirty different words.

So this base model should be good at recognizing many different words, and its learned low-level features should be associated with speech only.

But my question is: why does a transfer-learning model trained on very different audio samples (table claps, water sounds, whistles, and other non-speech sounds) also, magically, perform very well?

Grouping multiple images as single class

I was just wondering if it would be possible to use multiple images as a single class.
For example:
Image1, Image2, Image3 (Class 1)
Image4, Image5, Image6 (Class 1)

Image7, Image8, Image9 (Class 2)
Image9, Image10, Image11 (Class 2)

Thank you.

TypeError: __init__() got an unexpected keyword argument 'ragged'

TypeError Traceback (most recent call last)
in <module>()
----> 1 model=load_model('keras_model.h5')

~\Anaconda3\lib\site-packages\keras\engine\saving.py in load_model(filepath, custom_objects, compile)
417 f = h5dict(filepath, 'r')
418 try:
--> 419 model = _deserialize_model(f, custom_objects, compile)
420 finally:
421 if opened_new_file:

~\Anaconda3\lib\site-packages\keras\engine\saving.py in _deserialize_model(f, custom_objects, compile)
223 raise ValueError('No model found in config.')
224 model_config = json.loads(model_config.decode('utf-8'))
--> 225 model = model_from_config(model_config, custom_objects=custom_objects)
226 model_weights_group = f['model_weights']
227

~\Anaconda3\lib\site-packages\keras\engine\saving.py in model_from_config(config, custom_objects)
456 'Sequential.from_config(config)?')
457 from ..layers import deserialize
--> 458 return deserialize(config, custom_objects=custom_objects)
459
460

~\Anaconda3\lib\site-packages\keras\layers\__init__.py in deserialize(config, custom_objects)
53 module_objects=globs,
54 custom_objects=custom_objects,
---> 55 printable_module_name='layer')

~\Anaconda3\lib\site-packages\keras\utils\generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
143 config['config'],
144 custom_objects=dict(list(_GLOBAL_CUSTOM_OBJECTS.items()) +
--> 145 list(custom_objects.items())))
146 with CustomObjectScope(custom_objects):
147 return cls.from_config(config['config'])

~\Anaconda3\lib\site-packages\keras\engine\sequential.py in from_config(cls, config, custom_objects)
298 for conf in layer_configs:
299 layer = layer_module.deserialize(conf,
--> 300 custom_objects=custom_objects)
301 model.add(layer)
302 if not model.inputs and build_input_shape:

~\Anaconda3\lib\site-packages\keras\layers\__init__.py in deserialize(config, custom_objects)
53 module_objects=globs,
54 custom_objects=custom_objects,
---> 55 printable_module_name='layer')

~\Anaconda3\lib\site-packages\keras\utils\generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
143 config['config'],
144 custom_objects=dict(list(_GLOBAL_CUSTOM_OBJECTS.items()) +
--> 145 list(custom_objects.items())))
146 with CustomObjectScope(custom_objects):
147 return cls.from_config(config['config'])

~\Anaconda3\lib\site-packages\keras\engine\sequential.py in from_config(cls, config, custom_objects)
298 for conf in layer_configs:
299 layer = layer_module.deserialize(conf,
--> 300 custom_objects=custom_objects)
301 model.add(layer)
302 if not model.inputs and build_input_shape:

~\Anaconda3\lib\site-packages\keras\layers\__init__.py in deserialize(config, custom_objects)
53 module_objects=globs,
54 custom_objects=custom_objects,
---> 55 printable_module_name='layer')

~\Anaconda3\lib\site-packages\keras\utils\generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
143 config['config'],
144 custom_objects=dict(list(_GLOBAL_CUSTOM_OBJECTS.items()) +
--> 145 list(custom_objects.items())))
146 with CustomObjectScope(custom_objects):
147 return cls.from_config(config['config'])

~\Anaconda3\lib\site-packages\keras\engine\network.py in from_config(cls, config, custom_objects)
1020 # First, we create all layers and enqueue nodes to be processed
1021 for layer_data in config['layers']:
-> 1022 process_layer(layer_data)
1023 # Then we process nodes in order of layer depth.
1024 # Nodes that cannot yet be processed (if the inbound node

~\Anaconda3\lib\site-packages\keras\engine\network.py in process_layer(layer_data)
1006
1007 layer = deserialize_layer(layer_data,
-> 1008 custom_objects=custom_objects)
1009 created_layers[layer_name] = layer
1010

~\Anaconda3\lib\site-packages\keras\layers\__init__.py in deserialize(config, custom_objects)
53 module_objects=globs,
54 custom_objects=custom_objects,
---> 55 printable_module_name='layer')

~\Anaconda3\lib\site-packages\keras\utils\generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
145 list(custom_objects.items())))
146 with CustomObjectScope(custom_objects):
--> 147 return cls.from_config(config['config'])
148 else:
149 # Then cls may be a function returning a class.

~\Anaconda3\lib\site-packages\keras\engine\base_layer.py in from_config(cls, config)
1107 A layer instance.
1108 """
-> 1109 return cls(**config)
1110
1111 def count_params(self):

~\Anaconda3\lib\site-packages\keras\legacy\interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your ' + object_name + ' call to the ' +
90 'Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper

TypeError: __init__() got an unexpected keyword argument 'ragged'

Importing Keras Model with Python

I exported an image model using the Keras format (.h5 file), but cannot load it. In Python I'm doing the following.

from keras.models import load_model
model = load_model('keras_model.h5')

But I get the following error:
TypeError: __init__() got an unexpected keyword argument 'ragged'

Is this a problem with the export format or is there a different way to load it?

Clarify transformations for image models at inference time

Hello!

I think it might be helpful to clarify in the docs the transformations that images go through, and maybe provide a public method that encapsulates them. Here's my current understanding.

1. Types

You can pass a few different types:

export type ClassifierInputSource = HTMLImageElement | HTMLCanvasElement | HTMLVideoElement | ImageBitmap;

2. cropTo

This image data is copied into a new canvas, cropped with cropTo. This sizes to 224x224, and uses a strategy like "cover", resizing the image to be at least 224x224 and then cropping from the center.

3. capture

The call to capture grabs the pixels from the image, and then crops that tensor with cropTensor. This crop enforces that the image is square, but here it doesn't do anything, since the image itself has already been cropped to be square in cropTo. Finally it normalizes the values in RGB space to [-1,1] here.

4. Transparency

It also seems like fully transparent pixels might be translated to rgb(0,0,0) as well. That happened in one example image I tried, but I didn't look further.

Is that capturing it? These scaling, cropping and color changes seem like they would be important for callers (or users) to be aware of.
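
To make this concrete, here is a rough sketch of my understanding of those steps (my reconstruction, not the library's actual code):

// Rough reconstruction of the transformations described above.
function preprocessSketch(source) {
  return tf.tidy(() => {
    // 1-2. Grab pixels and center-crop to a square ("cover" strategy), then size to 224x224.
    let pixels = tf.browser.fromPixels(source); // [height, width, 3]
    const [h, w] = pixels.shape;
    const size = Math.min(h, w);
    const top = Math.floor((h - size) / 2);
    const left = Math.floor((w - size) / 2);
    pixels = pixels.slice([top, left, 0], [size, size, 3]);
    pixels = tf.image.resizeBilinear(pixels, [224, 224]);
    // 3. Normalize RGB values from [0, 255] to [-1, 1].
    return pixels.toFloat().div(127.5).sub(1);
  });
}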

Exposing as a function

I think ideally this library would also expose any pre-processing for callers to use, so that tools built on it can apply the same pre-processing. Otherwise, if you made a tool that visualized the images the model predicts on, you might naively render the input image (which isn't actually what the TM model sees), or compare the TM model to other models without using the same pre-processing step. Concretely, one suggestion would be to expose something like:

model.preprocess(image: ClassifierInputSource)

Returns a Tensor representing the image, after applying any transformations that the model applies to an input (eg, scaling, cropping or normalizing). The specifics of the particular transformations are part of the internals of the library and subject to breaking changes, but this public method will be stable.

Args:

  • image: an image, canvas, or video element to make a classification on

Usage:

const img = new Image();
img.src = '...'; // some image that is larger than 224x224px and not square and has some transparency
img.onload = async () => {
  const tensor = await model.preprocess(img);
  const canvas = document.createElement('canvas');
  await tf.browser.toPixels(tensor, canvas);
  document.body.appendChild(canvas);
}
document.body.appendChild(img);

original: (screenshot)

pre-processed: (screenshot)

(note also the color in the background, which I'm assuming was introduced by translating from [0,255] => [-1,1] => [0, 255], but I didn't look further)


Thanks for sharing this awesome work 😄

Audio model export to Tensorflow, TF Lite, and EdgeTPU?

First of all, thanks for making Teachable Machine. It's an awesome educational tool, making ML accessible to coders and non-coders alike.

I am trying to train an audio model on Teachable Machine and use it on a Coral USB Accelerator for keyphrase detection. Unfortunately, audio models seem to export to TensorFlow.js only, unlike image models, which can be exported to TensorFlow.js, TensorFlow, TF Lite, and EdgeTPU.

It would be perfect if the same set of choices could be added to the audio model, so that models trained on Teachable Machine can readily be used on a Coral USB Accelerator.

Thank you for the consideration.

Exported Keras Model classifying incorrectly

The downloaded Keras model classifies images differently than the online version. Probably the image preprocessing in the provided Python snippet is different from the one used on the website.

I built a two-class classifier for 3-channel vibration data (represented as RGB images) that works perfectly in the online version of Teachable Machine. Impressive work, by the way!
But if I export the model to Keras and try the same predictions in Google Colab, it always predicts the same class.
Link to Teachable Machine:
https://teachablemachine.withgoogle.com/models/dhRmTcji/

Link to Colab:
https://colab.research.google.com/drive/12LGL5LDGNdoESE7L7tyWYW-YakzNOOhF

The Colab Notebook also contains links to download training and test set.

This is most likely a preprocessing issue.

It would be great if you could have a look.
Thank you in advance!

Saved audio model incorrectly triggers 'Error: Unsupported URL scheme in metadata URL'

The following occurs with localhost and also at https://ecraft2learn.github.io/ai/teachablemachine/sound/

./my_model/metadata.json is served over https:// but isn't recognized as such.

(index):57 Uncaught (in promise) Error: Unsupported URL scheme in metadata URL: ./my_model/metadata.json. Supported schemes are: http://, https://, and (node.js-only) file://
    at speech-commands.min.js:17
    at speech-commands.min.js:17
    at Object.next (speech-commands.min.js:17)
    at speech-commands.min.js:17
    at new Promise (<anonymous>)
    at i (speech-commands.min.js:17)
    at speech-commands.min.js:17
    at e.<anonymous> (speech-commands.min.js:17)
    at speech-commands.min.js:17
    at Object.next (speech-commands.min.js:17)
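
One possible workaround (an assumption based on the error's list of supported schemes, not a confirmed fix): build absolute http(s) URLs rather than passing a relative path, e.g.:

// Hedged sketch: turn the relative model path into absolute URLs so the scheme check passes.
const base = new URL('./my_model/', window.location.href).href; // e.g. https://example.github.io/.../my_model/
const recognizer = speechCommands.create(
  'BROWSER_FFT',
  undefined,
  base + 'model.json',    // checkpoint URL
  base + 'metadata.json'  // metadata URL
);
await recognizer.ensureModelLoaded(); // inside an async function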

Deviating results (online / offline) - picture classification - Python

Hi,

I'm having problems using the trained model locally (Python). I'm getting extremely different results between local execution and browser usage. I have adopted the suggested TensorFlow/Keras code one-to-one.
When I train the model and upload my image in the browser, I get a result of 90 / 10:

(screenshot)

However, as soon as I pass the image through the model locally (via the proposed code), I get a distribution of only 35 / 65. The result should be class 1, so the browser application performs much better than the local version.

I have simulated this behavior in the following colab:

https://colab.research.google.com/drive/18gDSEv66XV4DXyahCFUEmaWn4zOnf2K8

Both TF 1.15.0 and 2.1 were tested - I'm getting the same results.

Does anybody know where the error might be?

Many thanks in advance (:

Tensorflow download

I trained a model, and every time I try to download the Keras model, it tries for about a second but then stops and never downloads; the Tensorflow.js model, however, downloads almost instantly.

Update snippet to reflect zip file name?

I made an image classifier model and downloaded it to use the model in TensorFlow.js. The zip file that it downloaded was called tm-my-image-model.zip, but the code snippet was expecting the model to be unzipped and put in a folder called my_model.

Would it make sense for these to match, to remove a potential source of friction for folks who try to download their model, unzip it, and run the snippet as is? Concretely, the zip filename could be my_model if that's what the snippet suggests, or the snippet could be updated to look in the tm-my-image-model path. That might help folks new to programming who are using the first model they create.

ValueError: ('Unrecognized keyword arguments:', dict_keys(['ragged']))

The model doesn't load; the error occurs at
model = tensorflow.keras.models.load_model('keras_model.h5')

File "/home/imran/AlphaSquad/FaceRecognition/keras.py", line 9, in
model = tensorflow.keras.models.load_model('converted_keras/keras_model.h5')
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py", line 137, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 162, in load_model_from_hdf5
custom_objects=custom_objects)
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/model_config.py", line 55, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 90, in deserialize
printable_module_name='layer')
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
list(custom_objects.items())))
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/sequential.py", line 353, in from_config
custom_objects=custom_objects)
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 90, in deserialize
printable_module_name='layer')
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
list(custom_objects.items())))
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/sequential.py", line 353, in from_config
custom_objects=custom_objects)
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 90, in deserialize
printable_module_name='layer')
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
list(custom_objects.items())))
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1123, in from_config
process_layer(layer_data)
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1107, in process_layer
layer = deserialize_layer(layer_data, custom_objects=custom_objects)
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 90, in deserialize
printable_module_name='layer')
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 194, in deserialize_keras_object
return cls.from_config(cls_config)
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 451, in from_config
return cls(**config)
File "/home/imran/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/input_layer.py", line 80, in init
raise ValueError('Unrecognized keyword arguments:', kwargs.keys())
ValueError: ('Unrecognized keyword arguments:', dict_keys(['ragged']))

p5.js (ml5) snippet

Hello! I am working on an example for Teachable Machine and p5.js. I made some adjustments to the current snippet in the beta; my suggested code is at the bottom. I've added some comments and removed the need for DOM elements, opting to render the label text to the canvas itself.

(screenshot)

One small issue is that the video is not mirrored, I am thinking about how to handle this.
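
One common p5.js approach (just a sketch of one option, not necessarily what the final snippet should do) is to flip the canvas horizontally while drawing the video:

// Inside draw(): mirror the video by flipping the x-axis around the canvas width.
push();
translate(width, 0);
scale(-1, 1);
image(video, 0, 0);
pop();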

Also, I think rather than copy/paste it might make sense for the interface to provide a link to a p5.js web editor template where the user just has to copy in the model URL. At a minimum, something should note that the code will only run with the p5 and ml5 libraries, e.g.:

<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.9.0/p5.js"></script>
<script src="https://unpkg.com/[email protected]/dist/ml5.min.js"></script>

Would you like me to create the template as part of the ml5 web editor account / examples?

(I should note 0.4.1 does not yet exist, but will include a bug fix that makes the example run, see: ml5js/ml5-library#650)

// Classifier Variable
let classifier;
// Model URL
let imageModel = 'https://storage.googleapis.com/tm-models/WfgKPytY/model.json';

// Video
let video;

// To store the classification
let label = "";

// Load the model first
function preload() {
  classifier = ml5.imageClassifier(imageModel);
}

function setup() {
  createCanvas(320, 260);
  // Create the video
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();

  // Start classifying
  classifyVideo();
}

function draw() {
  background(0);
  // Draw the video
  image(video, 0, 0);

  // Draw the label
  fill(255);
  textSize(16);
  textAlign(CENTER);
  text(label, width / 2, height - 4);
}

// Get a prediction for the current video frame
function classifyVideo() {
  classifier.classify(video, gotResult);
}

// When we get a result
function gotResult(error, results) {
  // If there is an error
  if (error) {
    console.error(error);
    return;
  }
  // The results are in an array ordered by confidence.
  console.log(results[0]);
  label = results[0].label;
  // Classify again!
  classifyVideo();
}

cc @joeyklee

Performance troubles with image classification

I just opened the website and tried to load some images to classify. The problem is that I need ~10k images for each class, but the browser can only upload about 6k, and only for one class; even then my PC becomes too slow.
What would you advise me to do in this case? What about replacing the image thumbnails with a progress bar, if that would help?

webcam.setup options or constraints

Greetings,

First of all I would just like to say how thankful I am to your team. You've done amazing work. I used to train with Python... this just took Python out of the equation and brought me straight to my favorite JavaScript.

Now, for my question:

I used my own webcam script before, and I used media constraints to direct it to use the back-facing camera of a smartphone, with the following:

        let constraints = {
            audio: false,
            video: {
                facingMode: "environment"
            }
        }

so I looked up the API at the following:

https://github.com/googlecreativelab/teachablemachine-community/tree/master/libraries/image

and found this section:

webcam.setup(
	options: MediaTrackConstraints = {}
)

Now when I plug my constraints into the arguments for webcam.setup(), this is what happens:

  1. On Firefox for Android, it asks me if I want to use the front- or rear-facing camera.
  2. On Chrome for Android, I don't get to choose and I'm forced to use the front-facing camera.

Is there any way to get the MediaTrackConstraints settings to work? Or to set up the script to use the rear-facing camera by default and only use the front-facing one when there's no other camera option? Or maybe I can set up a modal or a switch to determine which camera to use.
On Firefox, it's also very dark if I use the rear camera. Are these device issues, or can they be fixed with a script or a bit of code I'm missing? Thanks again.

UPDATE:
I checked the latest at MDN

https://developer.mozilla.org/en-US/docs/Web/API/Media_Streams_API/Constraints

It seems the format has now changed to this:

let constraints = {
    facingMode: "environment"
}

It seems facingMode is no longer nested under video. Everything works now. Thanks again for this amazing project.

Teachable Machine Audio Model - .wav, .mp3 as input?

Dear,

First of all, I appreciate this project - amazing that someone can transform complex calculations into a simple tool like this. Good work.

Is there a way to import my own sounds, in .wav or .mp3 format?

Thank you,

Martin.

iPhone issue

On my iPhone, the video freezes, but on Android it works properly.
Any suggestions for iPhone?

Thank you.
