
Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)

Home Page: https://screenshottocode.com

License: MIT License


screenshot-to-code

A simple tool to convert screenshots, mockups and Figma designs into clean, functional code using AI.

[Demo video: Youtube.Clone.mp4]

Supported stacks:

  • HTML + Tailwind
  • React + Tailwind
  • Vue + Tailwind
  • Bootstrap
  • Ionic + Tailwind
  • SVG

Supported AI models:

  • GPT-4 Turbo (Apr 2024) - Best model
  • GPT-4 Vision (Nov 2023) - Good model that's better than GPT-4 Turbo on some inputs
  • Claude 3 Sonnet - Faster, and on par or better than GPT-4 vision for many inputs
  • DALL-E 3 for image generation

See the Examples section below for more demos.

We also just added experimental support for taking a video/screen recording of a website in action and turning that into a functional prototype.


Learn more about video here.

Follow me on Twitter for updates.

🚀 Try It Out With No Install

Try it live on the hosted version (paid).

🛠 Getting Started

The app has a React/Vite frontend and a FastAPI backend. You will need an OpenAI API key with access to the GPT-4 Vision API, or an Anthropic key if you want to use Claude Sonnet or the experimental video support.

Run the backend (I use Poetry for package management - pip install poetry if you don't have it):

cd backend
echo "OPENAI_API_KEY=sk-your-key" > .env
poetry install
poetry shell
poetry run uvicorn main:app --reload --port 7001

If you want to use Anthropic, add the ANTHROPIC_API_KEY to backend/.env with your API key from Anthropic.
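For reference, a backend/.env configured for both providers might look like this (both values are placeholders — substitute your own keys):

```
# backend/.env — placeholder values
OPENAI_API_KEY=sk-your-key
ANTHROPIC_API_KEY=your-anthropic-key
```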

Run the frontend:

cd frontend
yarn
yarn dev

Open http://localhost:5173 to use the app.

If you prefer to run the backend on a different port, update VITE_WS_BACKEND_URL in frontend/.env.local.

For debugging purposes, if you don't want to waste GPT-4 Vision credits, you can run the backend in mock mode (which streams a pre-recorded response):

MOCK=true poetry run uvicorn main:app --reload --port 7001
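The `MOCK=true` prefix is POSIX shell syntax; on Windows the rough equivalent (assuming PowerShell) would be to set the variable for the session first:

```powershell
# PowerShell: set the env var for the session, then start the server
$env:MOCK = "true"
poetry run uvicorn main:app --reload --port 7001
```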

Docker

If you have Docker installed on your system, in the root directory, run:

echo "OPENAI_API_KEY=sk-your-key" > .env
docker-compose up -d --build

The app will be up and running at http://localhost:5173. Note that you can't develop the application with this setup as the file changes won't trigger a rebuild.
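For orientation, a two-service compose file for this kind of frontend/backend setup typically looks something like the following. This is only a sketch — service names, build paths, and ports here are assumptions, not the repo's actual docker-compose.yml:

```yaml
services:
  backend:
    build: ./backend
    env_file: .env        # supplies OPENAI_API_KEY to the container
    ports:
      - "7001:7001"
  frontend:
    build: ./frontend
    ports:
      - "5173:5173"
    depends_on:
      - backend
```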

🙋‍♂️ FAQs

  • I'm running into an error when setting up the backend. How can I fix it? Try this. If that still doesn't work, open an issue.
  • How do I get an OpenAI API key? See https://github.com/abi/screenshot-to-code/blob/main/Troubleshooting.md
  • How can I configure an OpenAI proxy? - If you can't access the OpenAI API directly (e.g. due to country restrictions), you can try a VPN, or you can point the app at a proxy by configuring the OpenAI base URL: set OPENAI_BASE_URL in backend/.env, or directly in the UI's settings dialog. Make sure the URL includes "v1" in the path, so it looks like this: https://xxx.xxxxx.xxx/v1
  • How can I update the backend host that my frontend connects to? - Configure VITE_HTTP_BACKEND_URL and VITE_WS_BACKEND_URL in frontend/.env.local. For example, set VITE_HTTP_BACKEND_URL=http://124.10.20.1:7001
  • Seeing UTF-8 errors when running the backend? - On Windows, open the .env file with Notepad++, then go to Encoding and select UTF-8.
  • How can I provide feedback? For feedback, feature requests and bug reports, open an issue or ping me on Twitter.
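The proxy and backend-host answers above can be sketched as concrete env entries (all hostnames and addresses below are hypothetical):

```
# backend/.env — route OpenAI traffic through a proxy (note the /v1 suffix)
OPENAI_BASE_URL=https://my-proxy.example.com/v1

# frontend/.env.local — point the frontend at a remote backend
VITE_HTTP_BACKEND_URL=http://124.10.20.1:7001
VITE_WS_BACKEND_URL=ws://124.10.20.1:7001
```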

📚 Examples

NYTimes

Original vs. replica (screenshots taken 2023-11-20).

Instagram page (with non-Taylor-Swift pics)

[Demo video: instagram.taylor.swift.take.1.mp4]

Hacker News, but it gets the colors wrong at first, so we nudge it

[Demo video: hacker.news.with.edits.mp4]

🌍 Hosted Version

🆕 Try it here (paid). Or see Getting Started for local install instructions to use with your own API keys.


screenshot-to-code's People

Contributors

abi, alexlloyd0, clean99, dialmedu, eltociear, guspan-tanadi, hakeemmidan, jonathan-adly, justindhillon, kachbit, milseg, nothing1024, ntsd, pranshugupta54, samya123456, steadily-worked, sweep-ai[bot], vagusx, zweertsk


screenshot-to-code's Issues

Investigate open source models

This project looks amazing!! Unfortunately I cannot self host it without paying for OpenAI models - is there a possibility that you could look into alternative open-source models utilizing ggml and/or llama.cpp? So far I have found LLaVa but I'm not sure if it'd work for this project.

Help! Cannot run

I ran it locally, but the website shows "Error generating code. Check the Developer Console for details. Feel free to open a Github issue." Both the frontend and backend start, but when the backend runs, it prints "The current project could not be installed: No file/folder found for package backend. If you do not want to install the current project use --no-root". I don't know whether that is related.

Can't run frontend because Node version is incompatible

I run the project on Windows and built an Anaconda env with Python 3.11. All the essential requirements were installed and the backend runs successfully. However, when I try to run the frontend with the yarn command, I get this:

yarn install v1.22.19
[1/5] Validating package.json...
error [email protected]: The engine "node" is incompatible with this module. Expected version ">=14.18.0". Got "6.10.3"
error Found incompatible module.
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.

When I try to upgrade Node to 14.18.0 via pip, I get:

ERROR: Could not find a version that satisfies the requirement node==14.18.0 (from versions: 0.9, 0.9.1, 0.9.2, 0.9.3, 0.9.4, 0.9.5, 0.9.6, 0.9.7, 0.9.8, 0.9.9, 0.9.10, 0.9.11, 0.9.12, 0.9.13, 0.9.14, 0.9.15, 0.9.16, 0.9.17, 0.9.18, 0.9.18.1, 0.9.19, 0.9.20, 0.9.21, 0.9.22, 0.9.23, 0.9.24, 0.9.25, 0.9.26, 0.9.27, 0.9.28, 1.0, 1.1, 1.2, 1.2.1)
ERROR: No matching distribution found for node==14.18.0

How do I solve this issue?
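Node.js is not a Python package, so pip cannot upgrade it — the `node` package on PyPI is unrelated. One common fix, assuming a Node version manager like nvm is installed, is to switch to a supported Node version and retry:

```shell
# install and activate a Node version that satisfies ">=14.18.0"
nvm install 18
nvm use 18
node -v    # confirm the active version
yarn
```

Installing Node directly from nodejs.org works just as well if you don't use a version manager.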

Backend fails

Once I start generating I'm getting this on the backend side:

openai.NotFoundError: Error code: 404 - {'error': {'message': 'The model gpt-4-vision-preview does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

I'm a plus user but still see this.

Add a mobile preview as well

What a wonderful project! I gave it a try and was amazed. I also wanted a mobile view, so I changed Preview.tsx to return two iframes like so:

  return (
    <div className="flex">
      {/* Mobile-sized preview */}
      <iframe
        title="Mobile preview"
        srcDoc={throttledCode}
        className="border-[4px] border-black rounded-[20px] shadow-lg
        transform scale-[0.8] origin-top-left w-[400px] h-[832px]"
      ></iframe>
      {/* Desktop-sized preview; titles differ so each iframe is distinguishable */}
      <iframe
        title="Desktop preview"
        srcDoc={throttledCode}
        className="border-[4px] border-black rounded-[20px] shadow-lg
        transform scale-[0.8] origin-top-left w-[1280px] h-[832px]"
      ></iframe>
    </div>
  );
}


Use a better HTML code formatter

When images are replaced, we run the code through BeautifulSoup, which changes the formatting to something much less nice than what GPT outputs; it also doesn't format the CSS and JS embedded within the HTML. Find and use a different HTML code formatter in image_generation.py.

I tried pytidylib and it was no good: https://countergram.github.io/pytidylib

Or could just do it on the client-side (good libraries abound).
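One mitigation worth noting: BeautifulSoup only reflows the document when prettify() is used; serializing with str() keeps the original whitespace while still allowing targeted edits. A minimal sketch — the HTML snippet and filenames are made up for illustration:

```python
from bs4 import BeautifulSoup

# Hypothetical GPT output with formatting we want to preserve
html = '<div>\n  <img src="placeholder.png">\n  <p>Hello</p>\n</div>'
soup = BeautifulSoup(html, "html.parser")

# Swap image sources without reflowing the rest of the document
for img in soup.find_all("img"):
    img["src"] = "generated.png"

# str() reproduces the surrounding whitespace; prettify() would re-indent everything
result = str(soup)
```

This doesn't solve the embedded-CSS/JS formatting problem, but it avoids making the output worse than the input.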

Install instructions get stuck at 'poetry install' step

It gets stuck at this step of the script: Installing the current project: backend (0.1.0)

PS C:\screenshot-to-code-main\backend> poetry install
Creating virtualenv backend-0vQsLdXv-py3.12 in C:\Users\milky\AppData\Local\pypoetry\Cache\virtualenvs
Installing dependencies from lock file

Package operations: 21 installs, 0 updates, 0 removals

• Installing certifi (2023.7.22)
• Installing h11 (0.14.0)
• Installing idna (3.4)
• Installing sniffio (1.3.0)
• Installing anyio (3.7.1)
• Installing colorama (0.4.6)
• Installing httpcore (1.0.2)
• Installing typing-extensions (4.8.0)
• Installing click (8.1.7)
• Installing distro (1.8.0)
• Installing httpx (0.25.1)
• Installing pydantic (1.10.13)
• Installing soupsieve (2.5)
• Installing starlette (0.27.0)
• Installing tqdm (4.66.1)
• Installing beautifulsoup4 (4.12.2)
• Installing fastapi (0.95.2)
• Installing openai (1.2.4)
• Installing python-dotenv (1.0.0)
• Installing uvicorn (0.24.0.post1)
• Installing websockets (12.0)

Installing the current project: backend (0.1.0)
The current project could not be installed: No file/folder found for package backend
If you do not want to install the current project use --no-root

Error generating code.

Error generating code. Check the Developer Console for details. Feel free to open a Github issue

What could be the problem, can you please tell me?
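For the "No file/folder found for package backend" message above: the error is usually harmless for this app, since only the dependencies matter, not the project package itself. A common workaround (suggested by the error text) is to skip installing the project:

```shell
poetry install --no-root
poetry run uvicorn main:app --reload --port 7001
```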

Failed to run frontend code on Mac

➜  frontend git:(main) ✗ node -v 
v18.18.2
➜  frontend git:(main) ✗ yarn -v 
1.22.19
➜  frontend git:(main) ✗ rm -rf node_modules 
➜  frontend git:(main) ✗ yarn 
yarn install v1.22.19
warning ../../../../package.json: No license field
[1/5] 🔍  Validating package.json...
[2/5] 🔍  Resolving packages...
[3/5] 🚚  Fetching packages...
warning [email protected]: The engine "vscode" appears to be invalid.
[4/5] 🔗  Linking dependencies...
warning "react-hot-toast > [email protected]" has unmet peer dependency "csstype@^3.0.10".
warning " > [email protected]" has unmet peer dependency "@codemirror/language@^6.0.0".
warning " > [email protected]" has unmet peer dependency "@codemirror/state@^6.0.0".
warning " > [email protected]" has unmet peer dependency "@codemirror/view@^6.0.0".
[5/5] 🔨  Building fresh packages...
✨  Done in 1.66s.
➜  frontend git:(main) ✗ yarn dev 
yarn run v1.22.19
warning ../../../../package.json: No license field
$ vite
error when starting dev server:
Error: listen EADDRNOTAVAIL: address not available 198.18.0.217:5173
    at Server.setupListenHandle [as _listen2] (node:net:1800:21)
    at listenInCluster (node:net:1865:12)
    at GetAddrInfoReqWrap.doListen [as callback] (node:net:2014:7)
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:110:8)
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
➜  frontend git:(main) ✗ uname -a
Darwin macbook-pro 23.2.0 Darwin Kernel Version 23.2.0: Thu Nov  9 06:29:23 PST 2023; root:xnu-10002.60.71.505.1~3/RELEASE_ARM64_T6000 arm64

Add a Dockerfile

so it's easier for people to get this up and running without running a bunch of different commands.

Deployed on a server, errors through the reverse-proxied backend

Deployed on Google Cloud, accessed through an Nginx reverse proxy. Frontend access is normal, but an error is reported when generating. Error message:

Connecting to backend @ ws://127.0.0.1:7001/generate-code
generateCode.ts:24 WebSocket connection to 'ws://127.0.0.1:7001/generate-code' failed:
generateCode @generateCode.ts:24
generateCode.ts:55 WebSocket error Event
(anonymous) @generateCode.ts:55
generateCode.ts:45 Connection closed 1006
generateCode.ts:47 WebSocket error code CloseEvent

My nginx reverse proxy configuration is as follows:

server {
    listen 80;
    server_name my.com;

    location / {
        proxy_pass http://localhost:5173;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /generate-code {
        proxy_pass http://127.0.0.1:7001;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Headers *;
    }
}
Please tell me how to solve it. It is a very good open source program. Thank you!
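From the console log above, the browser is still dialing ws://127.0.0.1:7001 directly — i.e. the visitor's own machine — instead of going through the proxy. The usual fix is to point the frontend's backend URLs at the proxied public domain before building. A sketch, with hypothetical values matching the nginx server_name above:

```
# frontend/.env.local — use the public domain so the browser hits nginx
VITE_HTTP_BACKEND_URL=http://my.com
VITE_WS_BACKEND_URL=ws://my.com
```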

Rate limit reached for gpt-4-vision-preview in organization xxxxx

With only a single request, I get this error:

screenshot-to-code-backend-1   | INFO:     ('172.20.0.1', 49694) - "WebSocket /generate-code" [accepted]
screenshot-to-code-backend-1   | INFO:     connection open
screenshot-to-code-backend-1   | ERROR:    Exception in ASGI application
screenshot-to-code-backend-1   | Traceback (most recent call last):
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 247, in run_asgi
screenshot-to-code-backend-1   |     result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
screenshot-to-code-backend-1   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
screenshot-to-code-backend-1   |     return await self.app(scope, receive, send)
screenshot-to-code-backend-1   |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/fastapi/applications.py", line 276, in __call__
screenshot-to-code-backend-1   |     await super().__call__(scope, receive, send)
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/applications.py", line 122, in __call__
screenshot-to-code-backend-1   |     await self.middleware_stack(scope, receive, send)
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 149, in __call__
screenshot-to-code-backend-1   |     await self.app(scope, receive, send)
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/cors.py", line 75, in __call__
screenshot-to-code-backend-1   |     await self.app(scope, receive, send)
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
screenshot-to-code-backend-1   |     raise exc
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
screenshot-to-code-backend-1   |     await self.app(scope, receive, sender)
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
screenshot-to-code-backend-1   |     raise e
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
screenshot-to-code-backend-1   |     await self.app(scope, receive, send)
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 718, in __call__
screenshot-to-code-backend-1   |     await route.handle(scope, receive, send)
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 341, in handle
screenshot-to-code-backend-1   |     await self.app(scope, receive, send)
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 82, in app
screenshot-to-code-backend-1   |     await func(session)
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/fastapi/routing.py", line 289, in app
screenshot-to-code-backend-1   |     await dependant.call(**values)
screenshot-to-code-backend-1   |   File "/app/main.py", line 117, in stream_code
screenshot-to-code-backend-1   |     completion = await stream_openai_response(
screenshot-to-code-backend-1   |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1   |   File "/app/llm.py", line 23, in stream_openai_response
screenshot-to-code-backend-1   |     completion = await client.chat.completions.create(**params)
screenshot-to-code-backend-1   |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 1191, in create
screenshot-to-code-backend-1   |     return await self._post(
screenshot-to-code-backend-1   |            ^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1474, in post
screenshot-to-code-backend-1   |     return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
screenshot-to-code-backend-1   |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1275, in request
screenshot-to-code-backend-1   |     return await self._request(
screenshot-to-code-backend-1   |            ^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1306, in _request
screenshot-to-code-backend-1   |     return await self._retry_request(
screenshot-to-code-backend-1   |            ^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1356, in _retry_request
screenshot-to-code-backend-1   |     return await self._request(
screenshot-to-code-backend-1   |            ^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1306, in _request
screenshot-to-code-backend-1   |     return await self._retry_request(
screenshot-to-code-backend-1   |            ^^^^^^^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1356, in _retry_request
screenshot-to-code-backend-1   |     return await self._request(
screenshot-to-code-backend-1   |            ^^^^^^^^^^^^^^^^^^^^
screenshot-to-code-backend-1   |   File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1318, in _request
screenshot-to-code-backend-1   |     raise self._make_status_error_from_response(err.response) from None
screenshot-to-code-backend-1   | openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4-vision-preview in organization xxxxxxxx on tokens per min (TPM): Limit 10000, Used 6701, Requested 4497. Please try again in 7.188s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}
screenshot-to-code-backend-1   | INFO:     connection closed

Pre-hosted version throwing 'Connection closed 1006' error

Connecting to backend @ wss://backend-screenshot-to-code.onrender.com/generate-code
index-6038cf45.js:225 Connection closed 1006
index-6038cf45.js:225 WebSocket error code CloseEvent
(anonymous) @ index-6038cf45.js:225

My API key is correct and has access and credits.

Plans for a hosted/SaaS version?

Hi,

I was wondering if there are plans for a hosted/SaaS version, as I currently do not have access to the GPT-4 Vision product :)

Facing an issue in the frontend

Error generating code. Check the Developer Console for details. Feel free to open a Github issue

Cannot install

Trying to follow your instructions, I went into the backend folder and tried poetry install.

❯ echo "OPENAI_API_KEY=sk-aaa...xxx" > .env
❯ poetry install
Creating virtualenv backend-dR2C48Xf-py3.11 in /home/antouank/.cache/pypoetry/virtualenvs
Installing dependencies from lock file

Package operations: 20 installs, 0 updates, 0 removals

  • Installing certifi (2023.7.22)
  • Installing h11 (0.14.0)
  • Installing idna (3.4)
  • Installing sniffio (1.3.0)
  • Installing anyio (3.7.1)
  • Installing httpcore (1.0.2)
  • Installing typing-extensions (4.8.0)
  • Installing click (8.1.7)
  • Installing distro (1.8.0)
  • Installing httpx (0.25.1)
  • Installing pydantic (1.10.13)
  • Installing soupsieve (2.5)
  • Installing starlette (0.27.0)
  • Installing tqdm (4.66.1)
  • Installing beautifulsoup4 (4.12.2)
  • Installing fastapi (0.95.2)
  • Installing openai (1.2.4)
  • Installing python-dotenv (1.0.0)
  • Installing uvicorn (0.24.0.post1)
  • Installing websockets (12.0)

Installing the current project: backend (0.1.0)
The current project could not be installed: [Errno 2] No such file or directory: '/home/antouank/_REPOS_/screenshot-to-code/backend/README.md'
If you do not want to install the current project use --no-root

I added a README, but it made no real difference. I also tried running it in the root folder, and again it throws an error.

❯ touch README.md
❯ poetry install
Installing dependencies from lock file

No dependencies to install or update

Installing the current project: backend (0.1.0)
The current project could not be installed: No file/folder found for package backend
If you do not want to install the current project use --no-root
❯ cd ..
❯ poetry install

Poetry could not find a pyproject.toml file in /home/antouank/_REPOS_/screenshot-to-code or its parents

What am I missing?

( what is "poetry" anyway? )
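To the last question: Poetry is a dependency manager and packaging tool for Python; it reads pyproject.toml, installs the pinned dependencies into an isolated virtualenv, and then tries to install the project itself as a package. For this repo only the dependencies are needed, so skipping that last step sidesteps both errors above:

```shell
pip install poetry          # Poetry itself is distributed via pip
poetry install --no-root    # install dependencies only, not the "backend" package
```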

Add a button to convert code to React and Vite

I think it'll work better to convert code to React or Vite using GPT-4 Turbo rather than generating the code in React or Vite in the first place.

But both approaches are worth experimenting with.

Access Denied: Model Not Found

I encountered an error when attempting to access the gpt-4-vision-preview model. The system responded with an error message indicating that the model either does not exist or is not accessible with my current permissions.

[screenshot of the error]

But I'm sure that I have access to GPT Plus:
[screenshot]

And I've created the key correctly:
[screenshot]

Question!

Is it possible to deploy this on Google Cloud? I tried running it on Google Compute Engine; it runs, but it seems it can't find the WebSocket, so it can't generate code.

Add color thief

and display the colors nicely when an image is added in the left sidebar.

Checklist
  • Create backend/color_extraction.py
  • Modify backend/main.py (no changes made)
  • Modify frontend/src/components/ImageUpload.tsx (8bbee0b)
  • Modify frontend/src/main.tsx (cac791d)
