
cli-go's Introduction

Nhost

Quickstart   •   Website   •   Docs   •   Blog   •   Twitter   •   Discord


Nhost is an open source Firebase alternative with GraphQL, built with the following things in mind:

  • Open Source
  • GraphQL
  • SQL
  • Great Developer Experience

Nhost consists of open source software:

Architecture of Nhost




Visit https://docs.nhost.io for the complete documentation.

Get Started

Option 1: Nhost Hosted Platform

  1. Sign in to Nhost.
  2. Create Nhost app.
  3. Done.

Option 2: Self-hosting

Since Nhost is 100% open source, you can self-host the whole Nhost stack. Check out the example docker-compose file to self-host Nhost.

Sign In and Make a GraphQL Request

Install the @nhost/nhost-js package and start building your app:

import { NhostClient } from '@nhost/nhost-js'

const nhost = new NhostClient({
  subdomain: '<your-subdomain>',
  region: '<your-region>'
})

await nhost.auth.signIn({ email: '[email protected]', password: 'spaceX' })

await nhost.graphql.request(`{
  users {
    id
    displayName
    email
  }
}`)

Frontend Agnostic

Nhost is frontend agnostic, which means Nhost works with all frontend frameworks.

Resources

  • Start developing locally with the Nhost CLI

Nhost Clients

Integrations

Applications

Community ❤️

First and foremost: Star and watch this repository to stay up-to-date.

Also, follow Nhost on GitHub Discussions, our Blog, and on Twitter. You can chat with the team and other members on Discord, and follow our tutorials and other video material on YouTube.

Nhost is Open Source

This repository, and most of our other open source projects, are licensed under the MIT license.


How to contribute

Here are some ways to contribute to making Nhost better:

Contributors

A table of avatars from the project's contributors

cli-go's People


cli-go's Issues

`nhost upgrade` uninstalls nhost

╭─[email protected] ~/code/nhost/console-next  ‹main›
╰─➤  sudo nhost upgrade
Password:
[INFO] [v1.0-internal-20] You have the latest version. Hurray!
╭─[email protected] ~/code/nhost/console-next  ‹main›
╰─➤  nhost -d
zsh: command not found: nhost

It might have been that I was already on v20 when I ran nhost upgrade. I'm not sure.

Unable to start second `nhost` project

I have one nhost project running and when I try to start a second nhost project I get the following error:

╭─[email protected] ~/code/nhost/testapp
╰─➤  nhost init -d
[INFO] Initializing Nhost project in this directory
[DEBUG] Generating project configuration
[DEBUG] Saving project configuration
[INFO] Nhost backend successfully initialized
╭─[email protected] ~/code/nhost/testapp
╰─➤  nhost dev -d -p 1338
[INFO] Initializing environment
[DEBUG] [testapp] [prefix] Fetching containers
[DEBUG] Wrapping containers into environment
[DEBUG] Parsing project configuration
[DEBUG] [testapp_hasura] [container] Initializing configuration
[DEBUG] [testapp_auth] [container] Initializing configuration
[DEBUG] [testapp_storage] [container] Initializing configuration
[DEBUG] [testapp_mailhog] [container] Initializing configuration
[DEBUG] [testapp_postgres] [container] Initializing configuration
[DEBUG] [testapp_minio] [container] Initializing configuration
[INFO] First run takes longer, please be patient
[DEBUG] Preparing environment
[DEBUG] Configuring services
[DEBUG] Reading environment variables
[DEBUG] [data] Mounting: /.nhost/main
[DEBUG] [testapp] [network] Preparing
[DEBUG] [testapp] [network] Fetching
[DEBUG] [testapp] [network] Creating
[DEBUG] [testapp_hasura] [container] Creating
[DEBUG] [testapp_hasura] [container] Starting
[DEBUG] [testapp_postgres] [container] Creating
[DEBUG] [testapp_postgres] [container] Starting
[DEBUG] [testapp_auth] [container] Creating
[DEBUG] [testapp_auth] [container] Starting
[DEBUG] [testapp_storage] [container] Creating
[DEBUG] [testapp_storage] [container] Starting
[DEBUG] [testapp_mailhog] [container] Creating
[DEBUG] [testapp_mailhog] [container] Starting
[DEBUG] [testapp_mailhog] [container] Error response from daemon: driver failed programming external connectivity on endpoint testapp_mailhog (ca145f14441418699ebd4b35394202229ea047ebb23f10e5053d1216349b5b41): Bind for 127.0.0.1:1025 failed: port is already allocated
[ERROR] [testapp_mailhog] [container] Failed to run container
[WARNING] Please wait while we cleanup
[DEBUG] Shutting down running services
[DEBUG] [testapp_storage] [container] Stopping
[DEBUG] [testapp_hasura] [container] Stopping
[DEBUG] [testapp_mailhog] [container] Stopping
[DEBUG] [testapp_auth] [container] Stopping
[DEBUG] [testapp_postgres] [container] Stopping
[INFO] Cleanup complete. See you later, grasshopper!

Looks like a simple port config issue.

auth env vars

  1. AUTH_SERVER_URL should be set to http://localhost:${port}/v1/auth
  2. AUTH_CLIENT_URL should be configurable via config.yaml and should default to http://localhost:3000

Healthcheck fails after restarting `nhost dev`

The health checks for the services work the first time I run nhost dev. But if I stop (cmd+c) all services and restart nhost dev the health checks fail.

I have to do nhost down before running nhost dev to make the health checks work again.

functions don't work

import { Request, Response } from 'express';
import { getUser } from './utils/utils';

const handler = (req: Request, res: Response) => {
  console.log('test function');

  console.log(req.get('content-type'));
  console.log(req.method);
  console.log(process.env);

  res.status(200).send('OK');
};

export default handler;

curl http://localhost:1337/v1/functions/test

Result

2021/09/14 12:32:09 http: proxy error: unsupported protocol scheme ""

My Nhost version:

╰─➤  nhost version
[INFO] [amd64] [darwin] v1.0-internal-26
[INFO] You have the latest version. Hurray!

Websockets errors

2021/09/14 12:07:31 websocketproxy: couldn't upgrade websocket: request origin not allowed by Upgrader.CheckOrigin
2021/09/14 12:07:42 websocketproxy: couldn't upgrade websocket: request origin not allowed by Upgrader.CheckOrigin
2021/09/14 12:07:53 websocketproxy: couldn't upgrade websocket: request origin not allowed by Upgrader.CheckOrigin
2021/09/14 12:08:04 websocketproxy: couldn't upgrade websocket: request origin not allowed by Upgrader.CheckOrigin
2021/09/14 12:08:17 websocketproxy: couldn't upgrade websocket: request origin not allowed by Upgrader.CheckOrigin
2021/09/14 12:08:28 websocketproxy: couldn't upgrade websocket: request origin not allowed by Upgrader.CheckOrigin
2021/09/14 12:08:41 websocketproxy: couldn't upgrade websocket: request origin not allowed by Upgrader.CheckOrigin
2021/09/14 12:08:52 websocketproxy: couldn't upgrade websocket: request origin not allowed by Upgrader.CheckOrigin


I'm testing subscriptions via React / Apollo / Chrome

AUTH_SMTP_HOST

How can AUTH_SMTP_HOST be set to nhost_mailhog even though smtp_host: mailhog is specified in the config.yaml file?

req.query parameters do not work

Try this code:

module.exports = (req, res) => {
  console.log(req.query);

  const name = req.query.name;

  res.send(`Hello ${name}!`);
};

Even when adding ?name=johan, name is still undefined.

CLI doesn't apply seed data on new branch

If I'm on a non-standard branch (e.g. new-branch) with no .nhost/new-branch folder and run nhost dev -d, everything starts up correctly and migrations/metadata are applied, but seed data is not.

Fails to apply seed data on first run

[DEBUG] Prearing migrations and metadata
[DEBUG] Applying seeds on first run
[DEBUG] time="2021-08-30T17:28:53+02:00" level=fatal msg="--database-name flag is required"

[ERROR] Failed to apply seed data
[DEBUG] exit status 1
[WARNING] Please wait while we cleanup

Code:

	if firstRun && len(seed_files) > 0 {

		log.Debug("Applying seeds on first run")

		// apply seed data
		cmdArgs = []string{hasuraCLI, "seeds", "apply"}
		cmdArgs = append(cmdArgs, commandOptions...)

		execute = exec.Cmd{
			Path: hasuraCLI,
			Args: cmdArgs,
			Dir:  nhost.NHOST_DIR,
		}

Either we add --database-name default statically, or we skip the command altogether.

env vars in .env.development are not being loaded into Hasura or Functions

Env vars in .env.development are not being loaded into either Hasura or Functions.

Here's an example with Hasura:

╰─➤  cat .env.development
WEBHOOK_SECRET=hejsansecret%
╭─[email protected] ~/code/nhost/console-next  ‹nhost-cli*›
╰─➤  docker exec -it nhost_hasura sh
# env | grep WEBHOOK
# env
HOSTNAME=aa6503b8460a
HASURA_GRAPHQL_NO_OF_RETRIES=20
HASURA_GRAPHQL_ENABLED_LOG_TYPES=startup, http-log, webhook-log, websocket-log, query-log
HOME=/root
HASURA_GRAPHQL_MIGRATIONS_SERVER_TIMEOUT=20
HASURA_GRAPHQL_SHOW_UPDATE_NOTIFICATION=false
HASURA_GRAPHQL_SERVER_PORT=9218
HASURA_GRAPHQL_CLI_ENVIRONMENT=server-on-docker
HASURA_GRAPHQL_ENABLE_CONSOLE=false
TERM=xterm
NHOST_FUNCTIONS=http://host.docker.internal:1337/functions
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LANG=C.UTF-8
HASURA_GRAPHQL_UNAUTHORIZED_ROLE=public
HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:postgres@nhost_postgres:5895/postgres
HASURA_GRAPHQL_ADMIN_SECRET=hasura-admin-secret
LC_ALL=C.UTF-8
PWD=/
HASURA_GRAPHQL_JWT_SECRET={"type":"HS256", "key": "74198c675808c3a59d107f370a3352d8bdd54c20863d4bd3ffad54a9814a7089e84e8af1d104944954b9c64ec86df1301b2dae82a0695d16b79ec5a684ce702cb862da486b01f3caeb85b17134d62556baae55bace84da2057f285cffc0a4ed3c85c7f6c22510fb950202d829c138e52ea8bdafeae4d1dfd5c19c945ca9f109d"}

stopping `nhost` requires two `cmd+c`

When I'm running nhost dev -d I have to do cmd+c twice to stop the CLI.

First cmd+c seems to stop some watcher. Second cmd+c stops the CLI as usual:

^C[DEBUG] [watcher] Inactivated
^C[DEBUG] [proxy] http: Server closed
[WARNING] Please wait while we cleanup
[DEBUG] Shutting down running services
[DEBUG] [console-next_minio] [container] Stopping
[DEBUG] [console-next_auth] [container] Stopping
[DEBUG] [console-next_storage] [container] Stopping
[DEBUG] [console-next_hasura] [container] Stopping
[DEBUG] [console-next_mailhog] [container] Stopping
[DEBUG] [console-next_postgres] [container] Stopping
[INFO] Cleanup complete. See you later, grasshopper!

CLI login and tokens

Login

/custom/cli/login

Parameters:

  • email
  • password

Response:

  • id
  • token

Save id and token in ~/.nhost.json

user

/custom/cli/user

Parameters

  • id
  • token

Response

  • id
  • workspaces
    • id
    • name
    • apps
      • id
      • name
      • [app settings for hasura, auth, storage]
      • envVars
        • key
        • value

Good DX for git

Git branches

Postgres / Hasura does not play well with different git branches out of the box.

Imagine the following:

  1. Clone a repo with existing Nhost project
  2. [main] start nhost dev
  3. [main] Postgres, Hasura, etc. start and apply migrations and metadata
  4. [new-branch] checkout [new-branch]
  5. [new-branch] Add table and edit some metadata
  6. [new-branch] commit changes
  7. [main] Goes back to main branch
  8. OUT OF SYNC

In step 7, when the developer goes back to main from new-branch, the database still has the changes from new-branch. If the developer then makes any change in Hasura, the Hasura CLI will save the metadata changes that were created in new-branch.

Ideally, every git branch should have its own isolated database; the developer should be able to seamlessly check out different branches, and the CLI should make sure that the database is up-to-date with whatever branch the developer is on.

Proposed solution

The CLI keeps track of the active branch and creates a separate folder inside .nhost for each branch. Ex:

.nhost/main/[db_data/custom/minio]
.nhost/branch-1/[db_data/custom/minio]
.nhost/branch-2/[db_data/custom/minio]

And whenever the developer switches branches, the CLI should:

  • automatically switch Postgres database
  • update volumes for Hasura Auth (custom folder for email templates)
  • update Minio volume

Git pull

Imagine a developer working locally on the main branch with nhost dev running. They do a git pull, and their co-workers have updated some migrations and metadata. After the git pull, the developer's postgres/hasura is out of sync, and they have to run hasura migrate apply and hasura metadata apply to get back in sync.

This is however easy to forget.

Ideally, the CLI should keep track of this and apply these migrations and metadata changes automatically after a git pull, to make sure that the code (migrations/metadata) and the database/API (postgres/hasura) are always in sync.

Maybe this out-of-sync problem is not isolated to git pull?

Possible solution

The CLI should run hasura migrate apply and hasura metadata apply automatically after git pull, possibly by keeping internal information about the git log and running the apply commands when the current commit changes.

Race Condition: `git checkout && git pull`

As detailed by @elitan:

  1. checkout new branch
  2. nhost CLI starts applying changes
  3. before nhost CLI finishes applying changes, do git pull and pull new migrations.
  4. nhost CLI does not pick up new changes because it was busy applying changes from step 1 but there are new migrations from step 3.

Originally posted by @elitan in #9 (comment)


This is a fundamental race condition. I spent the entire weekend 😪 trying to implement a reasonable solution. Reverted all my changes. Here's why:

  1. Hasura CLI needs to read the files it wants to apply with hasura migrate/metadata apply. Since the user has done a git pull, those files have already been changed or removed.
  2. This is not a race condition that can be solved with sophistication from our end.

Potential solutions I implemented:

Sol 1. Build real-time activation channels for every service inside our code. When Hasura is performing migrations/metadata, no other resource/function should be allowed to trigger another set of migrations. This basically locks the environment: the post-pull trigger has to wait for Hasura to no longer be occupied, i.e. to become active again.

Logical problem with this solution: The environment is technically active, because while performing health-checks, Hasura isn't performing migrations/metadata anyway. But auth and storage might be. Our post-pull trigger ends up launching migrations before Hasura even becomes active.

Sol 2. Use a queue channel which stores the number of times we want migrations to be run. Heck, just imagine a counter. By default we only run migrations once in our dev environment for now. On git checkout, the queue buffer is incremented by one, and migrations are performed again.

Logical problem with this solution: It doesn't matter how many times we tell the environment to run migrations, because by the time the CLI has finished performing even a single set of migrations+metadata, the local files have been changed by git. And at the end of the day, the Hasura CLI needs access to those migration files inside nhost/migrations to apply them.

So, the Hasura CLI just goes on and applies the same migrations X number of times, where X is the queue buffer.

This is the same problem I mentioned over here: #9 (comment)

Sol 3. Put mutually exclusive locking on migrations/metadata operations. Basically, if the migrations are currently being performed, then lock the Hasura service inside our code, and don't let any other resource trigger migrations concurrently, avoiding conflicts, and maintaining the sequence of migrations.

Same problem with this solution, as before: By the time the loop increments, git has already changed local files. Hasura CLI can't apply migrations if they don't exist inside nhost/migrations now.

Sol 4. [This is also what I'd pushed in v1.0-internal-17] - Don't perform migrations on the immediate git pull. Let the environment fail with an error and the user must re-start the environment to apply the new migrations.

In short: the user must wait for the environment to restart after git checkout before pulling/merging/fetching their code.

Support git checkout, followed by a 2 minute break, followed by git pull. But DO NOT support git checkout {{branch}} && git pull together.

Potential hazards: none! Most probably, the immediate git pull will finish in the middle of the health checks of the refreshed environment (as it did in the ~50 times I tested this). Since Hasura isn't running at that time, no migrations will be applied and the CLI will exit with a fatal error.

This is the safest solution in my opinion, not just in terms of implementation from our end, but also for the user. The user can also run nhost purge --data when in doubt and rebuild the environment with the new migrations.

I think I've spent far more time solving this race condition than I should have; almost the entire weekend.

P.S. - We must mention this race condition somewhere in the documentation for the CLI, if we write one. Perhaps somewhere in the "advanced features" section. This can come up again in the future, which is why I documented it in a separate issue.

Seed apply broken

[ERROR] [seeds] Failed to open: default
[DEBUG] open /Users/eli/code/nhost/console-next/nhost/seeds/default/default: no such file or directory
╭─[email protected] ~/code/nhost/console-next/nhost/seeds/default  ‹main*›
╰─➤  ls
1625413925960_locations_and_plans.sql
╭─[email protected] ~/code/nhost/console-next/nhost/seeds/default  ‹main*›
╰─➤  pwd
/Users/eli/code/nhost/console-next/nhost/seeds/default
