arturbeg Goto Github PK
Name: Artur Begyan
Type: User
Company: Reya Labs
Bio: CTO @ Reya Labs
Twitter: arturbegyan
Location: London, United Kingdom
Blog: reya.network
https://angular.io/guide/forms
Sample app pairing an Angular 2 frontend with a Django API
A curated list of awesome Python frameworks, libraries, software and resources
Speech bubble for Flutter
A simple casino built with Solidity, Truffle, and React
Code for Stanford CS224u
Seed project for Angular, Django, and JWT
A simple example of a Django REST app + Angular2
A real-time Reddit clone built using only Django (web-socket functionality implemented via Django Channels). Archive repository migrated from Bitbucket
JWT-based authentication in an Angular 4 application using django-rest-framework, for CRUD operations
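The JWT scheme the repo above relies on can be sketched with the standard library alone. This is not code from the repository and not django-rest-framework's API; it is a minimal illustration of what an HS256 JWT backend does under the hood (sign a header/payload pair, then verify the signature on every request), with all names hypothetical.

```python
import base64
import hashlib
import hmac
import json


def _b64url(data):
    # base64url without padding, as JWTs use
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def jwt_encode(payload, secret):
    """Sign a payload with HS256 and return header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"


def jwt_verify(token, secret):
    """Return the claims if the signature checks out, else None."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # wrong secret or tampered token: reject
    padded = body + "=" * (-len(body) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))


token = jwt_encode({"user_id": 42}, "s3cret")
claims = jwt_verify(token, "s3cret")
```

In the actual app, the Angular client would store the token and send it in an `Authorization` header, and the server-side verification would be handled by a DRF authentication class rather than hand-rolled code.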
Experimental tools (R) for Big Data econometrics, nowcasting, and early estimates
Scaling Transformer architectures has been critical for pushing the frontiers of Language Modelling (LM), a problem central to Natural Language Processing (NLP) and Language Understanding. Although there is a direct positive relationship between the Transformer capacity and its LM performance, there are practical limitations which make training massive models impossible. These limitations come in the form of computation and memory costs which cannot be solely addressed by training on parallel devices. In this thesis, we investigate two approaches which can make Transformers more computationally and memory efficient. First, we introduce the Mixture-of-Experts (MoE) Transformer which can scale its capacity at a sub-linear computational cost. Second, we present a novel content-based sparse attention mechanism called Hierarchical Self Attention (HSA). We demonstrate that the MoE Transformer is capable of achieving lower test perplexity values than a vanilla Transformer model with higher computational demands. Language Modelling experiments, involving a Transformer which uses HSA in place of conventional attention, revealed that HSA can speed up attention computation by up to 330% at a negligible cost in model performance.
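The sub-linear scaling of the MoE Transformer comes from routing each token to one expert, so adding experts grows capacity without growing per-token compute. The following is an illustrative NumPy sketch of a top-1 MoE layer, not the thesis's implementation; dimensions and the linear experts are simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)


class MoELayer:
    """Top-1 Mixture-of-Experts: a gating network picks one expert per token,
    so per-token compute is constant as the number of experts grows."""

    def __init__(self, d_model, n_experts):
        self.gate = rng.normal(0, 0.02, (d_model, n_experts))               # router weights
        self.experts = rng.normal(0, 0.02, (n_experts, d_model, d_model))   # one (linear) expert each

    def __call__(self, x):
        # x: (tokens, d_model)
        logits = x @ self.gate                                  # (tokens, n_experts)
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)               # softmax over experts
        choice = probs.argmax(axis=1)                           # top-1 routing decision
        out = np.empty_like(x)
        for e in range(self.experts.shape[0]):
            mask = choice == e
            if mask.any():
                # only the tokens routed to expert e pass through it,
                # scaled by the gate probability
                out[mask] = (x[mask] @ self.experts[e]) * probs[mask, e][:, None]
        return out


layer = MoELayer(d_model=16, n_experts=4)
y = layer(rng.normal(size=(8, 16)))
```

A production MoE additionally needs a load-balancing loss so the router does not collapse onto a few experts, and capacity limits per expert for parallel training; those are omitted here.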
Pytorch library for fast transformer implementations
A chat app built on Flutter with firebase authentication and image sharing capability.
Facebook authentication demo in Flutter
A working Instagram clone written in Flutter using Firebase / Firestore
Building a WhatsApp Clone in Flutter.
Fractal is an ML-powered network of interconnected public chats that allows branching of chats into more focused “sub-chats”, thereby overcoming the problem of rapid conversation subject dilution and low engagement. Fractal aims to allow unacquainted individuals to spontaneously find and discuss niche topics of common interest in real-time.
This work investigates the forecasting relationship between a Google Trends indicator and real private consumption expenditure in the US. The indicator is constructed by applying Kernel Principal Component Analysis to consumption-related Google Trends search categories. Its predictive performance is evaluated against two conventional survey-based indicators: the Conference Board Consumer Confidence Index and the University of Michigan Consumer Sentiment Index. The findings suggest that in both in-sample and out-of-sample nowcasting estimations the Google indicator outperforms the survey-based predictors. The results also show that the predictive performance of survey-augmented models is no different from that of a baseline autoregressive model that includes macroeconomic variables as controls. These results point to the considerable potential of Google Trends data as a forecasting tool for private consumption.
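The indicator construction above compresses several correlated search-category series into a single component via Kernel PCA. Below is a self-contained NumPy sketch of RBF-kernel PCA on mock data; it is not the thesis's pipeline (the data, the `gamma` value, and the series names are invented for illustration).

```python
import numpy as np


def kernel_pca(X, n_components=1, gamma=0.1):
    """RBF-kernel PCA: project rows of X onto leading principal
    components in feature space."""
    # pairwise squared Euclidean distances between observations
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                         # RBF kernel matrix
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one      # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                 # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]   # take the largest
    alphas = vecs[:, order] / np.sqrt(np.maximum(vals[order], 1e-12))
    return Kc @ alphas                              # (n_samples, n_components)


rng = np.random.default_rng(0)
T = 60                                              # e.g. 60 monthly observations
base = np.cumsum(rng.normal(size=T))                # shared latent "consumption interest"
series = base[:, None] + 0.3 * rng.normal(size=(T, 5))  # 5 correlated mock search categories
indicator = kernel_pca(series, n_components=1)      # one summary indicator series
```

The resulting one-column `indicator` would then enter a nowcasting regression alongside the autoregressive and macroeconomic terms described in the abstract.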
The HN reader app developed live on The Boring Flutter Development Show
Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)