Robin Raj SB's Projects
500 AI, Machine Learning, Deep Learning, Computer Vision, and NLP projects with code
Experience, Learn and Code the latest breakthrough innovations with Microsoft AI
Azure OpenAI (demos, documentation, accelerators).
This repo contains sample code for the Azure Search and Cognitive Services used to provide insights and analysis around the JFK Files.
This project is a proof of concept for augmenting the treatment of brain tumors, built for a friend's 🙎♀️ final-year project.
CodeGen is an open-source model for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex.
The coronavirus dataset
This script performs topic modelling on the latest academic pre-prints on the coronavirus to look for unusual patterns. It's purely an experiment, and I have written an article summarising the findings I thought were interesting.
Facial recognition model to predict if a person is wearing a mask or not.
A minimal web app developed with Flask
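A minimal Flask app along these lines can fit in a few lines (the route and message here are illustrative, not taken from the repo):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # Return a small JSON payload from the root route.
    return jsonify(message="Hello, Flask!")

if __name__ == "__main__":
    # Enable the debug reloader for local development only.
    app.run(debug=True)
```

Running the file starts a development server on http://127.0.0.1:5000/ by default.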
Flask-Video-Stabilization: provides a reasonably easy and flexible way to stabilize (deshake) even strongly jiggled video clips.
A Flutter app using TensorFlow to detect objects in images.
Hands-On Computer Vision with TensorFlow 2, published by Packt
:mag: Haystack is an open source NLP framework that leverages Transformer models. It enables developers to implement production-ready neural search, question answering, semantic document search and summarization for a wide range of applications.
A simple icon for an Android application.
An icon for a photography company.
A product icon for an app.
Launch machine learning models into production using Flask, Docker, etc.
A Material Design music player, designed using Google's Material Design framework.
Material Design Components in HTML/CSS/JS
NAO Robot ChatBot using DialogFlow API, and Performing Actions for Response
NAO_ChatBot_DialogFlow API
NAO robot: double-tap to perform an action.
Publication-ready NN-architecture schematics.
Code for sending email using Node.js.
Objectron is a dataset of short, object-centric video clips. In addition, the videos also contain AR session metadata including camera poses, sparse point clouds, and planes. In each video, the camera moves around and above the object and captures it from different views. Each object is annotated with a 3D bounding box, which describes the object's position, orientation, and dimensions. The dataset contains about 15K annotated video clips and 4M annotated images in the following categories: bikes, books, bottles, cameras, cereal boxes, chairs, cups, laptops, and shoes.
Contains Jupyter Notebooks of stuff I am working on.
Source code of a mini project for my fifth-semester B.Tech.
🪐 End-to-end NLP workflows from prototype to production