microsoft / ailab

Experience, Learn and Code the latest breakthrough innovations with Microsoft AI

Home Page: https://www.ailab.microsoft.com/experiments/

License: MIT License

C# 61.24% CSS 2.93% HTML 8.66% JavaScript 11.30% Shell 0.08% PowerShell 0.06% Python 6.49% Dockerfile 0.06% Batchfile 0.11% TypeScript 7.39% CMake 0.06% C++ 0.52% SCSS 0.98% Ink 0.11% ASP.NET 0.01%
computer-vision custom-vision ocr ai azure-functions algorithms luis language-learning translation bing-search

ailab's Introduction

Microsoft AI Lab

What is AI Lab?

AI Lab helps our large, fast-growing community of developers get started on AI. You can experience, learn and code the latest and greatest innovations from Microsoft AI here. AI Lab currently houses eight projects that showcase the latest in custom vision, attnGAN, Visual Studio tools for AI, Cognitive Search, Machine Reading Comprehension and more. Each lab gives you access to the experimentation playground, source code on GitHub, a crisp developer-friendly video, and insights into the underlying developer/organizational challenge and solution.

AI Lab is developed in partnership with Microsoft’s AI School and the Microsoft Research (MSR) AI organization.

Microsoft AI Lab Projects

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g. label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

License

Licensed under the MIT License.

ailab's People

Contributors

ajanyan, bishnoimahendra22, bolasoma, chiijlaw, danielcaceresm, dark60man, dtranmobil, emepetres, ericmcmc, esterdenicolas, fpelaez, gsegares, imagentleman, jacano, macastejon, mattkohl, microsoft-github-policy-service[bot], microsoftopensource, msftgits, paulstubbs, pianobin, pran4ajith, rohan23chhabra, sakshamio, simpman4, stanecobalt, suji04, tarasha, wolframtheta, yashbhutoria

ailab's Issues

Sketch2Code sample model

Hi,

Would it be possible to update the README for Sketch2Code and explain how to train the sample model using the dataset provided? Is there a fast way of linking the tags in dataset.json to the images in our own Custom Vision project, or do we have to manually tag them?

Thanks
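A small helper that inverts the dataset mapping into per-tag image lists is usually the first step before any bulk-upload call. This is a minimal sketch that assumes dataset.json maps filenames to tag lists — a hypothetical schema, so check it against the actual file:

```python
from collections import defaultdict

def group_images_by_tag(dataset):
    """Invert a {filename: [tags]} mapping into {tag: [filenames]},
    the shape most bulk-tagging/upload APIs expect."""
    by_tag = defaultdict(list)
    for filename, tags in dataset.items():
        for tag in tags:
            by_tag[tag].append(filename)
    return dict(by_tag)

# Inline stand-in for dataset.json (hypothetical schema):
sample = {"sketch1.png": ["button", "textbox"], "sketch2.png": ["button"]}
print(group_images_by_tag(sample))
```

From there, each per-tag list could be fed to whatever batch-upload mechanism your Custom Vision project provides, rather than tagging each image by hand.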

image.Handle is missing?

I hope there is still someone answering questions for this project.

I was trying to build this project, got an error in DecodedKinectCaptureFrame.cs file.

VirtualStage\VirtualStage\Speaker.Recorder\Speaker.Recorder\Kinect\DecodedKinectCaptureFrame.cs(63,63,63,69): error CS1061: 'Image' does not contain a definition for 'Handle' and no accessible extension method 'Handle' accepting a first argument of type 'Image' could be found (are you missing a using directive or an assembly reference?)

There is no definition for Handle on the Image type. Any ideas how to solve this?

Thanks

Kinect V1?

Is it possible with Kinect v1? The Kinect v1 SDK actually supports 4 Kinects per machine, which is pretty good, and affordable too! Kinect SDK 1.8 supports green screening as well.

Can I modify this code for Kinect SDK 1.8? Will it work? By the way, I am a beginner at coding 🙃

Any guidelines available for preparing sketch?

(attached: testing-testers and hand-sketch images)

Hi There -- thanks for putting this up... Amazing concept.

I tried uploading the attached sketch (both hand-drawn & from MS Paint), but the results were a little different. Is there a guideline for preparing the sketch?

Also, is this project in a state where it can be used in a real-world program, or is it at research-paper level?

Appreciate the response.

How to download XAML code ?

Hello,
Thank you for this awesome feature.
How do I change the generated code language from HTML to XAML?

Thanks.

Web scraping

Is there any workaround for the web scraper script failing to download files in Pix2Story?

How to add a “start over” or “go back” button to my Azure Bot

So I have a bot that is solely buttons. You start off with, for example, choice buttons A, B, C and D. You click button A and it prompts you with more options: A1, A2, A3 and A4, and so on. I want to add “start over” and “go back” buttons to these, but I’m not sure how. Any ideas?
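One framework-agnostic approach is to keep a stack of visited menus in the bot's conversation state: "go back" pops one level, "start over" clears down to the root. The class and menu names below are hypothetical, not Bot Framework API — a sketch of the bookkeeping only:

```python
class MenuNavigator:
    """Tracks button-menu history so 'go back' pops one level and
    'start over' returns to the root menu. Store an instance of this
    (or just its stack) in the bot's per-conversation state."""

    def __init__(self, root):
        self.stack = [root]          # stack[0] is always the root menu

    def select(self, menu):
        """User clicked a choice button; descend into its submenu."""
        self.stack.append(menu)

    def go_back(self):
        """One level up, never past the root."""
        if len(self.stack) > 1:
            self.stack.pop()
        return self.stack[-1]

    def start_over(self):
        """Reset to the root menu."""
        del self.stack[1:]
        return self.stack[0]

nav = MenuNavigator("A/B/C/D")
nav.select("A1-A4")
print(nav.go_back())  # back at the root menu
```

On each turn the bot would render the menu at the top of the stack and add "go back"/"start over" buttons whose handlers call the corresponding methods.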

Training question

Quick question: who is responsible for training the model to detect/recognize patterns? Where is the "hey... this is not a textbox but a dropdown" section?

Is that simply performed by Azure Cognitive Services or is it done anywhere in the code?

Pix2Story: Model won't build - type mismatch when building optimizers.

On a new clone of your repo, I can't get the model to train. There's a type mismatch in the updates when building the optimizer.

Running on conda on macOS (using CPU). I didn't mess with any files, just added the .txt file. I tried updating n_words and changing various things in the config file, but no luck.

Any help would be much appreciated. Thanks, Tom

Error message:

Building optimizers...
Traceback (most recent call last):
File "/anaconda3/envs/storytelling/lib/python3.5/site-packages/theano/compile/pfunc.py", line 193, in rebuild_collect_shared
allow_convert=False)
File "/anaconda3/envs/storytelling/lib/python3.5/site-packages/theano/tensor/type.py", line 234, in filter_variable
self=self))
TypeError: Cannot convert Type TensorType(float64, matrix) (of Variable Elemwise{add,no_inplace}.0) into Type TensorType(float32, matrix). You can try to manually convert Elemwise{add,no_inplace}.0 into a TensorType(float32, matrix).

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "training.py", line 6, in <module>
EncTrainer.train()
File "/Users/tom/Documents/development/ailab/Pix2Story/source/training/train_encoder.py", line 40, in train
trainer(self.text, self.training_options)
File "/Users/tom/Documents/development/ailab/Pix2Story/source/skipthoughts_vectors/training/train.py", line 128, in trainer
f_grad_shared, f_update = eval(optimizer)(lr, tparams, grads, inps, cost)
File "/Users/tom/Documents/development/ailab/Pix2Story/source/skipthoughts_vectors/encdec_functs/optim.py", line 40, in adam
f_update = theano.function([lr], [], updates=updates, on_unused_input='ignore', profile=False)
File "/anaconda3/envs/storytelling/lib/python3.5/site-packages/theano/compile/function.py", line 317, in function
output_keys=output_keys)
File "/anaconda3/envs/storytelling/lib/python3.5/site-packages/theano/compile/pfunc.py", line 449, in pfunc
no_default_updates=no_default_updates)
File "/anaconda3/envs/storytelling/lib/python3.5/site-packages/theano/compile/pfunc.py", line 208, in rebuild_collect_shared
raise TypeError(err_msg, err_sug)
TypeError: ('An update must have the same type as the original shared variable (shared_var=<TensorType(float32, matrix)>, shared_var.type=TensorType(float32, matrix), update_val=Elemwise{add,no_inplace}.0, update_val.type=TensorType(float64, matrix)).', 'If the difference is related to the broadcast pattern, you can call the tensor.unbroadcast(var, axis_to_unbroadcast[, ...]) function to remove broadcastable dimensions.')
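This class of Theano error usually means one term of the update expression was silently promoted to float64 (for example, a Python-float learning rate multiplying float32 gradients), so the update no longer matches the float32 shared variable. A minimal NumPy sketch of the mismatch and the cast-back fix; the Theano-side equivalent would be casting each update to theano.config.floatX (or running with THEANO_FLAGS=floatX=float32), but that mapping is an assumption about this repo's optim.py, not something verified here:

```python
import numpy as np

floatX = "float32"  # mirrors theano.config.floatX

shared_value = np.zeros((2, 2), dtype=floatX)   # the shared variable
grad = np.ones((2, 2), dtype=np.float64)        # accidentally float64
lr = 0.01                                       # Python float == float64

# The naive update silently promotes everything to float64,
# which is exactly the type mismatch in the traceback:
bad = shared_value + lr * grad
assert bad.dtype == np.float64

# Casting the finished update back to floatX restores the match:
good = (shared_value + lr * grad).astype(floatX)
assert good.dtype == np.float32
```

In other words, either force every constant/intermediate in the adam updates to floatX, or set floatX=float32 globally so float64 never enters the graph.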

Can't create a data source

Using Postman version 7.2.0, I imported the collection & environment files, but I can't find the "Create Data Source" request in the collection. Under Browse → Collections → Azure Search there is no Create Data Source.

Web.Config

Hello, where can I get my keys to fill this out:

<add key="webpages:Version" value="3.0.0.0" />
<add key="webpages:Enabled" value="false" />
<add key="ClientValidationEnabled" value="true" />
<add key="UnobtrusiveJavaScriptEnabled" value="true" />
<add key="ObjectDetectionTrainingKey" value="<your_training_key>" />
<add key="ObjectDetectionPredictionKey" value="<your_prediction_key>" />
<add key="ObjectDetectionProjectName" value="<Lida>" />
<add key="ObjectDetectionIterationName" value="<your_iteration>" />
<add key="HandwrittenTextSubscriptionKey" value="<your_key>" />
<add key="HandwrittenTextApiEndpoint" value="<your_endpoint>" />
<add key="AzureWebJobsStorage" value="<your_endpoint>" />
<add key="Sketch2CodeAppFunctionEndPoint" value="<your_endpoint>" />
<add key="Probability" value="30" />
<add key="storageUrl" value="<your_storage_url>" />
<add key="ComputerVisionDelay" value="120" />

timestampfile.txt missing for VirtualStage

It fails when reconstructing the video: it is looking for a timestampfile.txt file which is not there. What am I missing? I assume this file should be created/generated somewhere? I am running with the --no_kinect_mask option.

Traceback (most recent call last):
File ".\bg_matting.py", line 104, in <module>
reconstruct_all_video(original_videos, args.output_dir, output_suffix, outputs)
File "C:\ailab-master\VirtualStage\BackgroundMatting\reconstruct.py", line 7, in reconstruct_all_video
reconstruct_video(video, output_dir, suffix, outputs_list)
File "C:\ailab-master\VirtualStage\BackgroundMatting\reconstruct.py", line 22, in reconstruct_video
video, out_path + suffix, os.path.basename(video) + suffix, o,
File "C:\ailab-master\VirtualStage\BackgroundMatting\reconstruct.py", line 60, in write_output_timestamp_file
ts_in = open(os.path.join(input, "timestampfile.txt"), "rt")
FileNotFoundError: [Errno 2] No such file or directory: 'C:\test\testvideo\timestampfile.txt'
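One way to make the reconstruct step tolerant is to guard the open and, when no timestampfile.txt exists, synthesize evenly spaced timestamps from an assumed frame rate. This is a hedged sketch only — the fallback values are a guess, not what the capture pipeline would actually have written, and the one-float-per-line file format is assumed from the traceback:

```python
import os

def read_timestamps(input_dir, fps=30.0, n_frames=None):
    """Read timestampfile.txt (assumed one timestamp per line) if it
    exists; otherwise synthesize evenly spaced timestamps from an
    assumed frame rate. The synthesized values are a stand-in, not
    the real capture timestamps."""
    path = os.path.join(input_dir, "timestampfile.txt")
    if os.path.exists(path):
        with open(path, "rt") as ts_in:
            return [float(line) for line in ts_in if line.strip()]
    if n_frames is None:
        # No file and no frame count: nothing sensible to fall back on.
        raise FileNotFoundError(path)
    return [i / fps for i in range(n_frames)]
```

A guard like this at least makes the failure explicit (or recoverable) instead of crashing deep inside reconstruct.py; the real fix is presumably to find which step of the capture pipeline should have emitted the file.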

Getting an error every time I try to upload the test data via dotnet

Unhandled Exception: Microsoft.Rest.HttpOperationException: Operation returned an invalid status code 'Unauthorized'
at Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.TrainingApi.GetDomainsWithHttpMessagesAsync(Dictionary`2 customHeaders, CancellationToken cancellationToken)
at Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.TrainingApiExtensions.GetDomainsAsync(ITrainingApi operations, CancellationToken cancellationToken)
at Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.TrainingApiExtensions.GetDomains(ITrainingApi operations)
at Import.Program.Main(String[] args) in ~/AISchoolTutorials-master/sketch2code/Import/Program.cs:line 29

I keep getting this error when I try to upload the JSON data. My key is correct, although in step 2 of the tutorial I couldn't complete steps 10, 11 and 12, as there is no Notepad in the quick settings tab.

Any help is greatly appreciated. Thanks!

VirtualStage unusable matte results

Hi, I was finally able to get Virtual Stage to run, but I'm getting mattes that are completely unusable. The setup: I'm standing in front of a green screen, wearing a blue-jean shirt and pants. It has successfully eliminated the entire background and seems to be keeping skin tones alright, but there are big holes in my shirt and pants. Any idea what's going on?

See attached pictures (vlcsnap-2020-10-12-13h36m37s588, vlcsnap-2020-10-12-13h37m32s334).

Text to Speech does not work on the introduction message.

Text-to-speech does not work on the introduction message; it only activates when we press the microphone, and hence starts working from the prompt messages onward. I want to build a bot that talks from the very first introduction message. How do I do this?

CustomVision client failed to initialize. (<ObjectDetectionProjectName> Not Found.)

This issue occurred when trying to send a POST request to the Azure Function detection API.

I believe the root cause is that the Custom Vision Object Detection project type currently only allows a Limited Trial, hence it is unable to create the Training and Prediction services under the user's own Azure subscription that link back to this project type.

Error: (-215: Assertion failed) !_src.empty() in function 'cv::cvtColor'

Hi

I'm getting an error when processing the pictures. The error screen reads as in attached file 1.png, and the photos being processed while the error occurs are in attached file 2.png.

I saved this project under the path below:
C:\Users\edz\Desktop\ailab-master\VirtualStage\BackgroundMatting

Would you be able to help solve this issue?

Thanks.
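This OpenCV assertion almost always means cv2.imread returned None (bad path, unreadable file, unsupported format) and that None was passed straight into cvtColor, which then reports an "empty source". A small guard makes the real cause visible at the point of failure:

```python
def ensure_loaded(img, path):
    """cv2.imread returns None instead of raising when a path is wrong
    or the file is unreadable; cv2.cvtColor then fails later with
    '!_src.empty()'. Guard every read so the real cause is reported."""
    if img is None:
        raise FileNotFoundError(f"could not read image: {path}")
    return img

# Typical use around OpenCV reads:
#   img = ensure_loaded(cv2.imread(path), path)
#   gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
try:
    ensure_loaded(None, "missing.png")
except FileNotFoundError as e:
    print(e)  # -> could not read image: missing.png
```

With a deep path like C:\Users\edz\Desktop\ailab-master\..., it is also worth checking for path-length or non-ASCII-character issues on Windows, since either can make imread fail silently.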

1

2

🚨 Potential OS Command Injection (CWE-78)

👋 Hello, @tarasha, @macastejon, @gsegares - a potential medium severity OS Command Injection (CWE-78) vulnerability in your repository has been disclosed to us.

Next Steps

1️⃣ Visit https://huntr.dev/bounties/1-other-microsoft/ailab for more advisory information.

2️⃣ Sign-up to validate or speak to the researcher for more assistance.

3️⃣ Propose a patch or outsource it to our community - whoever fixes it gets paid.

✏️ NOTE: If we don't hear from you in 14 days, we will proactively source a fix for this vulnerability (and open a PR) to ensure community safety.


Confused or need more help?

  • Join us on our Discord and a member of our team will be happy to help! 🤗

  • Speak to a member of our team: @JamieSlome


This issue was automatically generated by huntr.dev - a bug bounty board for securing open source code.

Error when uploading image

I get an error message in console when I try to upload an image. The message is:

Failed to load resource: the server responded with a status of 500 (Internal Server Error)

Realtime support

Hi there,
Is this project suitable for real-time video-frame streaming? Say I want the matted output sent to Unity or another DCC package.

Please replace hardcoded URLs

In Step3.cshtml and Step5.cshtml there are several places that use hardcoded URLs to your blob storage. When someone tries to get the code up and running for themselves, this results in the app not showing the selected images or their predicted output.

Virtual Stage-->BackgroundMatting - module not found 'proportional_threshold' in bg_matting.py line 9

Hi guys, I am getting an error when I run /BackgroundMatting/run.ps1. It says the module imported on line 9 of bg_matting.py can't be found. Is anyone else getting this error?

(base) PS C:\virtualstage\BackgroundMatting> ./run
Traceback (most recent call last):
File ".\bg_matting.py", line 9, in <module>
from proportional_threshold import proportional_split, proportional_merge
ModuleNotFoundError: No module named 'proportional_threshold'

Responsive problem

I think you should improve the mobile version: on PC almost everything is good, but on mobile there are many sections that render badly on small devices.

Text-to-speech module

In the "Deploy to Azure from Visual Studio" part of the "Adding Speech support (Text-to-Speech and Speech-to-Text)" section, the instructions say:
"Open the appsettings.json file.
Replace the values of MicrosoftAppId and MicrosoftAppPassword with the values you got from Azure."
However, the appsettings.json file doesn't have any such properties.
Also, are we supposed to delete the properties still there for the settings we deleted in step 5 (LuisAPIHostName, LuisAPIKey, LuisAppId)?
I added 2 new properties in appsettings.json for MicrosoftAppId and MicrosoftAppPassword, but it doesn't seem to work...

File missing from BackgroundMatting folder in VirtualStage?

bg_matting.py references proportional_threshold, but it doesn't seem to be part of the project. Based on the naming conventions in the project, it seems like a proportional_threshold.py file is missing? Or is it imported from somewhere else?

-------------- from bg_matting.py:
from proportional_threshold import proportional_split, proportional_merge

File ".\bg_matting.py", line 9, in <module>
from proportional_threshold import proportional_split, proportional_merge
ModuleNotFoundError: No module named 'proportional_threshold'
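Until the file is restored upstream, a placeholder proportional_threshold.py next to bg_matting.py can at least unblock the import. These identity stubs are hypothetical: they satisfy the import and round-trip data unchanged, but do not reproduce whatever splitting/merging logic the original module implemented, so matting results may differ:

```python
# proportional_threshold.py -- hypothetical stand-in stub.
# The real module is absent from the repo; these placeholders only
# unblock `from proportional_threshold import ...` in bg_matting.py.

def proportional_split(items, *args, **kwargs):
    """Placeholder: return the whole sequence as a single chunk
    instead of splitting it proportionally."""
    return [items]

def proportional_merge(chunks, *args, **kwargs):
    """Placeholder: concatenate chunks back into one flat list."""
    merged = []
    for chunk in chunks:
        merged.extend(chunk)
    return merged
```

The stubs are written so that merge(split(x)) == x, which keeps the surrounding pipeline consistent even though no real thresholding happens.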

Change speech recognition language to de-de

I am not managing to get the chatbot to recognize my speech input as German. I managed to change the text-to-speech to German, but when I talk into the mic, the bot thinks I'm speaking English. I saw that I can possibly change it via
config.SpeechRecognitionLanguage = "de-de";
but honestly, I have no clue where to put it.
Am I right that I have to change something in SpeechModule.js?

I solved it by changing
o = i.SpeechResultFormat.Simple, s = e.locale || "en-US"
to
o = i.SpeechResultFormat.Simple, s = "de-DE"
in wwwroot/lib/CognitiveServices.js
