A simple tool to convert screenshots, mockups, and Figma designs into clean, functional code using AI. Now supporting Claude 3.5 Sonnet and GPT-4o!
Youtube.Clone.mp4
Supported stacks:
- HTML + Tailwind
- HTML + CSS
- React + Tailwind
- Vue + Tailwind
- Bootstrap
- Ionic + Tailwind
- SVG
Supported AI models:
- Claude 3.5 Sonnet - best model!
- GPT-4o - also recommended!
- GPT-4 Turbo (Apr 2024)
- GPT-4 Vision (Nov 2023)
- Claude 3 Sonnet
- DALL-E 3 for image generation
See the Examples section below for more demos.
We also just added experimental support for taking a video/screen recording of a website in action and turning that into a functional prototype.
Follow me on Twitter for updates.
Try it live on the hosted version (paid).
The app has a React/Vite frontend and a FastAPI backend.
Keys needed:
- OpenAI API key with access to GPT-4
- Anthropic key (optional) - only if you want to use Claude 3.5 Sonnet or the experimental video support.
Run the backend (I use Poetry for package management - pip install poetry if you don't have it):
cd backend
echo "OPENAI_API_KEY=sk-your-key" > .env
poetry install
poetry shell
poetry run uvicorn main:app --reload --port 7001
If you want to use Anthropic, add ANTHROPIC_API_KEY to backend/.env. You can also set up the keys using the settings dialog on the front-end (click the gear icon after loading the frontend).
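Put together, a complete backend/.env with both keys might look like this (all values below are placeholders):

```shell
# backend/.env -- placeholder values, substitute your own keys
OPENAI_API_KEY=sk-your-key
# Optional: only needed for Claude models and experimental video support
ANTHROPIC_API_KEY=sk-ant-your-key
```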
Run the frontend:
cd frontend
yarn
yarn dev
Open http://localhost:5173 to use the app.
If you prefer to run the backend on a different port, update VITE_WS_BACKEND_URL in frontend/.env.local
For debugging purposes, if you don't want to waste GPT-4 Vision credits, you can run the backend in mock mode (which streams a pre-recorded response):
MOCK=true poetry run uvicorn main:app --reload --port 7001
If you have Docker installed on your system, in the root directory, run:
echo "OPENAI_API_KEY=sk-your-key" > .env
docker-compose up -d --build
The app will be up and running at http://localhost:5173. Note that you can't develop the application with this setup as the file changes won't trigger a rebuild.
- I'm running into an error when setting up the backend. How can I fix it? Try this. If that still doesn't work, open an issue.
- How do I get an OpenAI API key? See https://github.com/abi/screenshot-to-code/blob/main/Troubleshooting.md
- How can I configure an OpenAI proxy? If you're not able to access the OpenAI API directly (due to e.g. country restrictions), you can try a VPN, or you can point the OpenAI base URL at a proxy: set OPENAI_BASE_URL in backend/.env or directly in the UI in the settings dialog. Make sure the URL has "v1" in the path, so it looks like this: https://xxx.xxxxx.xxx/v1
- How can I update the backend host that my front-end connects to? Configure VITE_HTTP_BACKEND_URL and VITE_WS_BACKEND_URL in frontend/.env.local. For example, set VITE_HTTP_BACKEND_URL=http://124.10.20.1:7001
- Seeing UTF-8 errors when running the backend? On Windows, open the .env file with Notepad++, then go to Encoding and select UTF-8.
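These errors typically come from a byte-order mark that Notepad prepends when saving "UTF-8". A small sketch of how Python's built-in utf-8-sig codec tolerates the BOM (the key value is a placeholder):

```python
# Notepad on Windows often writes a BOM (b'\xef\xbb\xbf') at the start of
# "UTF-8" files; plain utf-8 decoding keeps it as an invisible character.
raw = b"\xef\xbb\xbfOPENAI_API_KEY=sk-your-key"

# The leftover \ufeff breaks naive KEY=VALUE parsing:
assert raw.decode("utf-8").startswith("\ufeff")

# The utf-8-sig codec strips the BOM if present and is a no-op otherwise:
assert raw.decode("utf-8-sig") == "OPENAI_API_KEY=sk-your-key"
```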
- How can I provide feedback? For feedback, feature requests and bug reports, open an issue or ping me on Twitter.
NYTimes
(Side-by-side original and replica screenshots.)
Instagram page (with non-Taylor-Swift pics)
instagram.taylor.swift.take.1.mp4
Hacker News, but it gets the colors wrong at first, so we nudge it
hacker.news.with.edits.mp4
Try it here (paid). Or see Getting Started for local install instructions to use with your own API keys.
screenshot-to-code's Issues
Cannot install
Trying to follow your instructions, I went into the backend folder and tried poetry install.
❯ echo "OPENAI_API_KEY=sk-aaa...xxx" > .env
❯ poetry install
Creating virtualenv backend-dR2C48Xf-py3.11 in /home/antouank/.cache/pypoetry/virtualenvs
Installing dependencies from lock file
Package operations: 20 installs, 0 updates, 0 removals
• Installing certifi (2023.7.22)
• Installing h11 (0.14.0)
• Installing idna (3.4)
• Installing sniffio (1.3.0)
• Installing anyio (3.7.1)
• Installing httpcore (1.0.2)
• Installing typing-extensions (4.8.0)
• Installing click (8.1.7)
• Installing distro (1.8.0)
• Installing httpx (0.25.1)
• Installing pydantic (1.10.13)
• Installing soupsieve (2.5)
• Installing starlette (0.27.0)
• Installing tqdm (4.66.1)
• Installing beautifulsoup4 (4.12.2)
• Installing fastapi (0.95.2)
• Installing openai (1.2.4)
• Installing python-dotenv (1.0.0)
• Installing uvicorn (0.24.0.post1)
• Installing websockets (12.0)
Installing the current project: backend (0.1.0)
The current project could not be installed: [Errno 2] No such file or directory: '/home/antouank/_REPOS_/screenshot-to-code/backend/README.md'
If you do not want to install the current project use --no-root
I added a README, but it made no real difference. I also tried running it from the root folder, and again it throws an error.
❯ touch README.md
❯ poetry install
Installing dependencies from lock file
No dependencies to install or update
Installing the current project: backend (0.1.0)
The current project could not be installed: No file/folder found for package backend
If you do not want to install the current project use --no-root
❯ cd ..
❯ poetry install
Poetry could not find a pyproject.toml file in /home/antouank/_REPOS_/screenshot-to-code or its parents
What am I missing?
(What is "poetry" anyway?)
Remove Nine-dash line image from repository description
WebSocket error code after uploading the screenshot.
I don't have an OpenAI API key. Is there any way I can try it out?
WebSocket error code
websocket closed 1006
Screenshottocode
Support Azure endpoints
What's happening here?
Auto-improve the generation by sending a snapshot of the current generation
Ask GPT to find the differences between the original reference image and the generated HTML page, and then improve by accounting for the differences.
Error: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
Hello, thank you for your contribution. I am having the above problem; can you help me?
File "<frozen codecs>", line 322, in decode UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
I have to keep using a VPN, and I need help with this problem.
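For reference, a 0xff first byte usually means the file (often backend/.env) was saved as UTF-16 with a little-endian byte-order mark, e.g. by Notepad's "Unicode" option. A small Python sketch illustrating the failure and the fix (the key value is a placeholder):

```python
import codecs

text = "OPENAI_API_KEY=sk-your-key"
# UTF-16 LE with a BOM: the very first byte is 0xff.
data = codecs.BOM_UTF16_LE + text.encode("utf-16-le")
assert data[0] == 0xFF  # the "invalid start byte" from the traceback

# Decoding as UTF-8 reproduces the error; decoding as UTF-16 succeeds,
# so re-saving the file as UTF-8 fixes the backend:
try:
    data.decode("utf-8")
except UnicodeDecodeError as e:
    print(e)
print(data.decode("utf-16"))
```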
Rate limit reached for gpt-4-vision-preview in organization xxxxx
With only a single request, I get this error:
screenshot-to-code-backend-1 | INFO: ('172.20.0.1', 49694) - "WebSocket /generate-code" [accepted]
screenshot-to-code-backend-1 | INFO: connection open
screenshot-to-code-backend-1 | ERROR: Exception in ASGI application
screenshot-to-code-backend-1 | Traceback (most recent call last):
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 247, in run_asgi
screenshot-to-code-backend-1 | result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
screenshot-to-code-backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
screenshot-to-code-backend-1 | return await self.app(scope, receive, send)
screenshot-to-code-backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/fastapi/applications.py", line 276, in __call__
screenshot-to-code-backend-1 | await super().__call__(scope, receive, send)
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/applications.py", line 122, in __call__
screenshot-to-code-backend-1 | await self.middleware_stack(scope, receive, send)
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 149, in __call__
screenshot-to-code-backend-1 | await self.app(scope, receive, send)
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/cors.py", line 75, in __call__
screenshot-to-code-backend-1 | await self.app(scope, receive, send)
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
screenshot-to-code-backend-1 | raise exc
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
screenshot-to-code-backend-1 | await self.app(scope, receive, sender)
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
screenshot-to-code-backend-1 | raise e
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
screenshot-to-code-backend-1 | await self.app(scope, receive, send)
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 718, in __call__
screenshot-to-code-backend-1 | await route.handle(scope, receive, send)
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 341, in handle
screenshot-to-code-backend-1 | await self.app(scope, receive, send)
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 82, in app
screenshot-to-code-backend-1 | await func(session)
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/fastapi/routing.py", line 289, in app
screenshot-to-code-backend-1 | await dependant.call(**values)
screenshot-to-code-backend-1 | File "/app/main.py", line 117, in stream_code
screenshot-to-code-backend-1 | completion = await stream_openai_response(
screenshot-to-code-backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1 | File "/app/llm.py", line 23, in stream_openai_response
screenshot-to-code-backend-1 | completion = await client.chat.completions.create(**params)
screenshot-to-code-backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 1191, in create
screenshot-to-code-backend-1 | return await self._post(
screenshot-to-code-backend-1 | ^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1474, in post
screenshot-to-code-backend-1 | return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
screenshot-to-code-backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1275, in request
screenshot-to-code-backend-1 | return await self._request(
screenshot-to-code-backend-1 | ^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1306, in _request
screenshot-to-code-backend-1 | return await self._retry_request(
screenshot-to-code-backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1356, in _retry_request
screenshot-to-code-backend-1 | return await self._request(
screenshot-to-code-backend-1 | ^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1306, in _request
screenshot-to-code-backend-1 | return await self._retry_request(
screenshot-to-code-backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1356, in _retry_request
screenshot-to-code-backend-1 | return await self._request(
screenshot-to-code-backend-1 | ^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1318, in _request
screenshot-to-code-backend-1 | raise self._make_status_error_from_response(err.response) from None
screenshot-to-code-backend-1 | openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4-vision-preview in organization xxxxxxxx on tokens per min (TPM): Limit 10000, Used 6701, Requested 4497. Please try again in 7.188s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}
screenshot-to-code-backend-1 | INFO: connection closed
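The 429 here means the organization's tokens-per-minute cap (10,000 TPM in this log) was exhausted; the durable fix is a higher limit, but a generic retry-with-exponential-backoff wrapper can ride out short windows like the 7.188s in the message. This is a sketch only; RateLimitError below is a local stand-in, not the openai class:

```python
import asyncio
import random

class RateLimitError(Exception):
    """Stand-in for openai.RateLimitError (placeholder, not the real class)."""

async def with_backoff(fn, retries=5, base_delay=1.0):
    """Retry an async call on rate limits, doubling the wait each attempt."""
    for attempt in range(retries):
        try:
            return await fn()
        except RateLimitError:
            if attempt == retries - 1:
                raise
            # Exponential backoff with jitter to avoid synchronized retries
            await asyncio.sleep(base_delay * (2**attempt + random.random()))
```

In main.py's stream_code, the streaming call could then be wrapped, e.g. `completion = await with_backoff(lambda: stream_openai_response(...))`.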
Use a better HTML code formatter
When images are replaced, we run the code through BeautifulSoup, which changes the code formatting to something much less nice than what GPT outputs, and it doesn't format CSS and JS embedded within the HTML. Find and use a different HTML code formatter in image_generation.py.
I tried pytidylib and it was no good: https://countergram.github.io/pytidylib
Or could just do it on the client-side (good libraries abound).
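Another way to sidestep the problem entirely: replace only the placeholder image URLs with a targeted regex instead of round-tripping the whole document through BeautifulSoup, leaving GPT's formatting byte-for-byte intact. A rough sketch (the double-quoted src attributes and placehold.co placeholders are assumptions):

```python
import re

# GPT's output, including inline CSS whose formatting we want to keep.
html = """<div class="card">
  <img src="https://placehold.co/300x200" alt="hero">
  <style>.card { padding: 1rem; }</style>
</div>"""

# Map each placeholder URL to its generated image; everything else in the
# document, including embedded <style>/<script>, is left untouched.
generated = {"https://placehold.co/300x200": "https://example.com/gen1.png"}

def swap_src(match):
    url = match.group(2)
    return match.group(1) + generated.get(url, url) + match.group(3)

out = re.sub(r'(<img[^>]*\bsrc=")([^"]*)(")', swap_src, html)
print(out)
```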
Support uploading pdfs
Hi, would that be possible? Thanks
Add reset button in error state and stop button when coding
Currently, when we finish uploading an image, there is no way to upload a new image unless users open a new window. Consider adding an escape hatch, or embedding an upload-new-image feature after users upload an image.
pre-hosted version throwing 'Connection closed 1006 ' error
Connecting to backend @ wss://backend-screenshot-to-code.onrender.com/generate-code
index-6038cf45.js:225 Connection closed 1006
index-6038cf45.js:225 WebSocket error code CloseEvent
(anonymous) @ index-6038cf45.js:225
API key is correct and has access and credits
Install instructions get stuck at 'poetry install' step
It gets stuck on this step: Installing the current project: backend (0.1.0)
PS C:\screenshot-to-code-main\backend> poetry install
Creating virtualenv backend-0vQsLdXv-py3.12 in C:\Users\milky\AppData\Local\pypoetry\Cache\virtualenvs
Installing dependencies from lock file
Package operations: 21 installs, 0 updates, 0 removals
• Installing certifi (2023.7.22)
• Installing h11 (0.14.0)
• Installing idna (3.4)
• Installing sniffio (1.3.0)
• Installing anyio (3.7.1)
• Installing colorama (0.4.6)
• Installing httpcore (1.0.2)
• Installing typing-extensions (4.8.0)
• Installing click (8.1.7)
• Installing distro (1.8.0)
• Installing httpx (0.25.1)
• Installing pydantic (1.10.13)
• Installing soupsieve (2.5)
• Installing starlette (0.27.0)
• Installing tqdm (4.66.1)
• Installing beautifulsoup4 (4.12.2)
• Installing fastapi (0.95.2)
• Installing openai (1.2.4)
• Installing python-dotenv (1.0.0)
• Installing uvicorn (0.24.0.post1)
• Installing websockets (12.0)
Installing the current project: backend (0.1.0)
The current project could not be installed: No file/folder found for package backend
If you do not want to install the current project use --no-root
Support OPENAI_API_BASE for proxy URLs
How do I add OPENAI_API_BASE to use other proxy keys?
Facing an issue in the frontend
I keep getting: "Error generating code. Check the Developer Console for details. Feel free to open a Github issue"
What could be the problem? Can you please tell me?
My key has always worked normally, but here it says there is no organization.
Error generating code: failed at backend server stating "The model `gpt-4-vision-preview` does not exist"
Session quota exceeded. Please upgrade your plan. Disabling.
Plans for a hosted/SaaS version?
Hi,
I was wondering if there are plans to make a hosted version/SaaS, as I currently do not have access to the GPT-4 Vision product :)
Add a Dockerfile
so it's easier for people to get this up and running without running a bunch of different commands.
Deployed on a server; errors when going through the reverse-proxied backend
Deployed on Google Cloud, accessed through an Nginx reverse proxy. Front-end access is normal, but an error is reported when generating code.
error message:
Connecting to backend @ ws://127.0.0.1:7001/generate-code
generateCode.ts:24 WebSocket connection to 'ws://127.0.0.1:7001/generate-code' failed:
generateCode @generateCode.ts:24
generateCode.ts:55 WebSocket error Event
(anonymous) @generateCode.ts:55
generateCode.ts:45 Connection closed 1006
generateCode.ts:47 WebSocket error code CloseEvent
My nginx reverse proxy configuration is as follows:
server {
listen 80;
server_name my.com;
location / {
proxy_pass http://localhost:5173;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /generate-code {
proxy_pass http://127.0.0.1:7001;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
add_header Access-Control-Allow-Origin *;
add_header Access-Control-Allow-Headers *;
}
}
Please tell me how to solve this. It is a very good open source project. Thank you!
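Note the client log above shows the browser trying to reach ws://127.0.0.1:7001 directly, which only works on the server itself; behind a reverse proxy, the frontend must be pointed at the public host so the WebSocket goes through nginx. A hedged example of frontend/.env.local for the config above (my.com is the placeholder server_name):

```shell
# frontend/.env.local -- hypothetical values matching the nginx config above
VITE_HTTP_BACKEND_URL=http://my.com
VITE_WS_BACKEND_URL=ws://my.com
```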
Investigate open source models
Serverless support
Please support serverless services like Vercel.
Access Denied: Model Not Found
Support OpenRouter
Feature Request: Javascript Generation
It would be amazing if the model could also generate JavaScript for some interactions.
Add color thief
Add a button to convert code to React and Vite
I think it'll work better to convert code to React or Vite using GPT-4 Turbo rather than generating the code in React or Vite in the first place.
But both approaches are worth experimenting with.
Please add ENV OPEN_AI_BASE support to the docker build
subj
add a mobile preview as well
What a wonderful project. I gave it a try and was amazed. I wanted a mobile view also, so I changed Preview.tsx to return 2 iframes like so:
return (
<div className="flex">
<iframe
title="Preview"
srcDoc={throttledCode}
className="border-[4px] border-black rounded-[20px] shadow-lg
transform scale-[0.8] origin-top-left w-[400px] h-[832px]"
></iframe>
<iframe
title="Preview"
srcDoc={throttledCode}
className="border-[4px] border-black rounded-[20px] shadow-lg
transform scale-[0.8] origin-top-left w-[1280px] h-[832px]"
></iframe>
</div>
);
}
Is a third-party proxy for OpenAI supported?
I would like to ask whether a third-party proxy for OpenAI is supported. If yes, please explain how to use it. Thanks.
Persist settings config in local storage
Is there a good library to do this with minimal code changes?
How do I change the openai api url?
failed to run frontend code on Mac
➜ frontend git:(main) ✗ node -v
v18.18.2
➜ frontend git:(main) ✗ yarn -v
1.22.19
➜ frontend git:(main) ✗ rm -rf node_modules
➜ frontend git:(main) ✗ yarn
yarn install v1.22.19
warning ../../../../package.json: No license field
[1/5] Validating package.json...
[2/5] Resolving packages...
[3/5] Fetching packages...
warning [email protected]: The engine "vscode" appears to be invalid.
[4/5] Linking dependencies...
warning "react-hot-toast > [email protected]" has unmet peer dependency "csstype@^3.0.10".
warning " > [email protected]" has unmet peer dependency "@codemirror/language@^6.0.0".
warning " > [email protected]" has unmet peer dependency "@codemirror/state@^6.0.0".
warning " > [email protected]" has unmet peer dependency "@codemirror/view@^6.0.0".
[5/5] Building fresh packages...
Done in 1.66s.
➜ frontend git:(main) ✗ yarn dev
yarn run v1.22.19
warning ../../../../package.json: No license field
$ vite
error when starting dev server:
Error: listen EADDRNOTAVAIL: address not available 198.18.0.217:5173
at Server.setupListenHandle [as _listen2] (node:net:1800:21)
at listenInCluster (node:net:1865:12)
at GetAddrInfoReqWrap.doListen [as callback] (node:net:2014:7)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:110:8)
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
➜ frontend git:(main) ✗ uname -a
Darwin macbook-pro 23.2.0 Darwin Kernel Version 23.2.0: Thu Nov 9 06:29:23 PST 2023; root:xnu-10002.60.71.505.1~3/RELEASE_ARM64_T6000 arm64
Error generating code. Check the Developer Console for details. Feel free to open a Github issue
Heeeelp! Can not run
Heeeelp! I ran it locally, but the website shows "Error generating code. Check the Developer Console for details. Feel free to open a Github issue." The project can run on both the front and back ends, but when the back end runs, a message appears: "The current project could not be installed: No file/folder found for package backend. If you do not want to install the current project use --no-root". I do not know whether that is relevant.
Can't run frontend because node version is incompatible
I run the project on Windows, and built an Anaconda env with Python 3.11. Every essential requirement was installed and the backend runs successfully; however, when I try to run the frontend and run the yarn command, this comes up:
yarn install v1.22.19
[1/5] Validating package.json...
error [email protected]: The engine "node" is incompatible with this module. Expected version ">=14.18.0". Got "6.10.3"
error Found incompatible module.
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
When I try to pip-upgrade node to version==14.18.0, it comes back with:
ERROR: Could not find a version that satisfies the requirement node==14.18.0 (from versions: 0.9, 0.9.1, 0.9.2, 0.9.3, 0.9.4, 0.9.5, 0.9.6, 0.9.7, 0.9.8, 0.9.9, 0.9.10, 0.9.11, 0.9.12, 0.9.13, 0.9.14, 0.9.15, 0.9.16, 0.9.17, 0.9.18, 0.9.18.1, 0.9.19, 0.9.20, 0.9.21, 0.9.22, 0.9.23, 0.9.24, 0.9.25, 0.9.26, 0.9.27, 0.9.28, 1.0, 1.1, 1.2, 1.2.1)
ERROR: No matching distribution found for node==14.18.0
How do I solve this issue? Please help.
Backend fails
Once I start generating I'm getting this on the backend side:
openai.NotFoundError: Error code: 404 - {'error': {'message': 'The model gpt-4-vision-preview
does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
I'm a plus user but still see this.
Make the preview windows draggable on both ends to get to different screen sizes
WebSocket error code
"Your demonstration website has the same error, please take a look."
Allow the user to edit the code in preview
And show live updates in the preview when the code is edited.
Allow user to open the preview website in a new window
Support other language code
Can you support uploading phone-sized images to generate mobile code, such as Kotlin or Swift?
Question!
Is it possible to deploy this in Google Cloud?
I tried to run this in Google Compute Engine, but it seems it can't connect to the websocket; it runs but can't generate code.