olirice / alembic_utils

175 stars, 5 watchers, 41 forks, 5.64 MB

An alembic/sqlalchemy extension for migrating sql views, functions, triggers, and policies

License: MIT License

Python 99.64% Mako 0.29% PLpgSQL 0.07%
alembic sqlalchemy migration postgresql postgres

alembic_utils's People

Contributors

adrianschneider94, aetherunbound, aidos, andrewmwilson, dancardin, ddemidov, dimitris-flyr, eliasmistler, glef1x, jdimmerman, jochs, matt-mcfahn-bjss, mattinbits, nstrong-scw, olirice, pacanada



alembic_utils's Issues

Can't create migration for views that depend on other objects in the migration

When generating a migration for a view that depends on a new table, alembic_utils triggers a Postgres error as the dependency doesn't exist yet. This is caused by the code in alembic_utils/simulate.py trying to create the views to see if it can resolve a specific creation order.

E.g. I have an ORM table definition:

class _Foo(Model):
    __tablename__ = "foo"

    id = Column(Integer, primary_key=True)
    created_at = Column(DateTime, default=func.now())
    bar_id = Column(Integer, ForeignKey("bar.id"), nullable=False)

and a view that references that table:

from sqlalchemy.sql import select, func

numbered_foo = (
    select(_Foo).add_columns(
        func.row_number()
        .over(
            partition_by=_Foo.bar_id,
            order_by=[_Foo.created_at, _Foo.id],
        )
        .label("number")
    )
)

# registration with PGView("public", "numbered_foo", str(numbered_foo))

where both the table and the view are entirely new to the project.

Instead, we are forced to create two separate migration steps, one that creates the table, the other with the view, after which we can merge the two scripts.

Is there a hook that I can implement to at least create any missing entities inside of the nested transaction, or a way to register what entities the view depends on?

As you can see, I'm defining the view from a SQLAlchemy Select object; if PGView could accept SQLAlchemy selectables directly, dependency resolution could be simpler, based on introspection.
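As an aside on that last point, a Select can already be flattened to a SQL string before registration. A minimal sketch, assuming SQLAlchemy 1.4+; the model and column names are illustrative and the PGView registration itself is elided:

```python
# Sketch of turning a SQLAlchemy Select into the plain SQL string that
# PGView expects. literal_binds inlines any parameters so the emitted
# DDL text is self-contained.
from sqlalchemy import Column, Integer, func, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    bar_id = Column(Integer)

numbered_foo = select(Foo.id, Foo.bar_id).add_columns(
    func.row_number()
    .over(partition_by=Foo.bar_id, order_by=Foo.id)
    .label("number")
)

definition = str(numbered_foo.compile(compile_kwargs={"literal_binds": True}))
# definition is now a "SELECT ... FROM foo" string usable as
# PGView("public", "numbered_foo", definition)
```

This keeps the view definition in sync with the ORM model, but it does not solve the creation-order problem described above.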

Add some kind of "exclude_module" feature for collect_instances

My coworkers and I are using collect_instances and have a situation where we need the ability to omit certain modules since they would attempt to import dev dependencies that are not installed in the production image.

What we've currently done is to have a modified version of collect_instances and walk_modules which adds a new parameter, exclude_modules and uses it like so:

# In src/alembic_utils/experimental/_collect_instances.py::walk_modules
# ...
                    # Example: elt.settings
                    module_import_path = str(module_path)[
                        len(str(top_path)) - len(top_module.__name__) :
                    ].replace(os.path.sep, ".")[:-3]

                    # Check if this is a module/submodule we want to exclude
                    if exclude_modules and any(
                        [module_import_path.startswith(x) for x in exclude_modules]
                    ):
                        continue

                    module = importlib.import_module(module_import_path)
                    yield module
# ...

We figured it's likely that someone else will need this functionality eventually, so I actually attempted to open a PR here myself to add the feature. However, it looks like you've got a whitelist configured for who can push branches up, hence this issue to suggest the feature.

EDIT: Closing, figured out how to make a PR to your repo.
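For reference, the prefix check itself can be made slightly stricter than a bare startswith, so that excluding app.dev does not also exclude a sibling module like app.device. A stdlib-only sketch; the helper name is mine, not part of alembic_utils:

```python
# Stdlib-only sketch of the exclusion test described above. The point is
# matching whole dotted-path components rather than raw string prefixes.
def is_excluded(module_import_path, exclude_modules):
    return any(
        module_import_path == excluded or module_import_path.startswith(excluded + ".")
        for excluded in (exclude_modules or [])
    )

is_excluded("app.dev.tools", ["app.dev"])  # → True
is_excluded("app.device", ["app.dev"])     # → False: not a submodule of app.dev
```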

Be compatible with `sqlalchemy_utils.create_view`?

Currently, there seem to be two independent ways of working with views:

  • this package
  • sqlalchemy_utils.create_view

They serve a slightly different purpose: one allows the view to be used on an ORM, the other allows views to be maintained declaratively with alembic.

They also have tradeoffs:

  • sqlalchemy_utils.create_view supports generic expressions, which makes it more generic
  • sqlalchemy_utils.create_view does not work with alembic

Which raises the question: wouldn't it make sense to support the sqlalchemy_utils construct?

Specifically, something like

# models.py
from sqlalchemy.ext.declarative import declarative_base

from sqlalchemy import Column, Integer, String, select
from sqlalchemy_utils import create_view

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    name1 = Column(String(50))


class MyView(Base):
    __table__ = create_view("my_view", select([User.name]).select_from(User.__table__), Base.metadata)

and use this package to track the MyView with alembic?

Request feature: Triggers

Hello

Thanks for the great library. Now I can use alembic_utils for creating/updating Procedures and Views.

My request/question is: can we extend this library for creating/updating Triggers?

Many thanks

Function signature parsing in drop statement fails if param name includes "default"

parameters = [x[: len(x.lower().split("default")[0])] for x in parameters]

This breaks when a parameter name includes the word "default".

e.g.

jsonb_array_elements(arr jsonb, default_arr jsonb = '[]')

from parse import parse

template = "{function_name}({parameters})"
result = parse(template, self.signature, case_sensitive=False)
function_name = result["function_name"]
parameters_str = result["parameters"].strip()
parameters: List[str] = parameters_str.split(",")

# HERE
parameters = [x[: len(x.lower().split("default")[0])] for x in parameters]
# Output: ['arr jsonb', ' ']
# Note: the second parameter is truncated to a single space because
# "default_arr" gets split on the substring "default"
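A possible fix is to split on default only when it appears as a standalone keyword, and to also strip an "=" default marker. A sketch, with a function name of my choosing rather than the library's actual code:

```python
import re

def strip_default_clause(param: str) -> str:
    """Drop a trailing DEFAULT/'=' clause from one parameter declaration.

    The word boundary keeps identifiers that merely contain the word,
    such as default_arr, intact. Illustrative fix, not the library code.
    """
    param = re.split(r"\bdefault\b", param, maxsplit=1, flags=re.IGNORECASE)[0]
    param = param.split("=", 1)[0]
    return param.strip()

[strip_default_clause(p) for p in ["arr jsonb", " default_arr jsonb = '[]'"]]
# → ['arr jsonb', 'default_arr jsonb']
```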

Create aggregate function

I'm trying to create an aggregate function to concat arrays even if they're empty, as described here

Running this directly on Postgres works

CREATE AGGREGATE array_accum (anyarray)
(
    sfunc = array_cat,
    stype = anyarray,
    initcond = '{}'
);  

But I don't know how to (or if it's even possible) do that with alembic_utils.

I tried adding this to my functions.py file:

ARRAY_ACCUM_DEFINITION = """
(
    sfunc = array_cat,
    stype = anyarray,
    initcond = '{}'
);  
"""

array_distinct = PGFunction(
    schema='public',
    signature='array_accum(anyarray)',
    definition=ARRAY_ACCUM_DEFINITION
)

But it fails because the emitted SQL tries to create a normal function instead of an aggregate, and I couldn't find any references on how to create aggregate functions.

[SQL: CREATE FUNCTION "public"."array_accum"(anyarray) (
    sfunc = array_cat,
    stype = anyarray,
    initcond = '{}'
)]
(Background on this error at: http://sqlalche.me/e/13/f405)

So what would be the right way to do this?

Thanks in advance, and thanks for making this library, it's great to have things much cleaner.
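One workaround, since alembic_utils has no aggregate entity, is to hand-write the migration with raw SQL. A sketch: in a real revision file op comes from "from alembic import op"; here it is passed in as a parameter only to keep the example self-contained.

```python
# Hypothetical manual migration for an aggregate: alembic_utils'
# PGFunction always emits CREATE FUNCTION, so the CREATE AGGREGATE DDL
# is issued directly instead.
ARRAY_ACCUM_CREATE = """
CREATE AGGREGATE array_accum (anyarray) (
    sfunc = array_cat,
    stype = anyarray,
    initcond = '{}'
);
"""
ARRAY_ACCUM_DROP = "DROP AGGREGATE array_accum (anyarray);"

def upgrade(op):
    op.execute(ARRAY_ACCUM_CREATE)

def downgrade(op):
    op.execute(ARRAY_ACCUM_DROP)
```

The tradeoff is that the aggregate no longer shows up in autogenerate diffs; any change to it must be written as a new manual migration.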

test schema resolution is correct

confirm that adding an entity under a schema, e.g. public, causes other entities in that schema to be dropped if they don't have a local definition

Support various DBs

Thanks for adding py36 support, it works great!

At the moment I have added support for SQLite by myself (it's used for local dev) but we are going to add support for other DBs as well (MySQL, MSSQL, etc).

My current implementation is pretty straightforward:

from typing import List, Tuple

from sqlalchemy import text as sql_text

from alembic_utils.pg_view import PGView

class DBView(PGView):
    """Add SQLite support"""

    @property
    def view_name(self):
        view_name = self.signature
        if not is_conn_is_sqlite(db.engine.url):  # app-level helper, not shown
            view_name = f'{self.literal_schema}."{self.signature}"'
        return view_name

    def to_sql_statement_create(self) -> str:
        """Generates a SQL "create view" statement"""
        return sql_text(f"CREATE VIEW {self.view_name} AS {self.definition}")

    def to_sql_statement_drop(self) -> str:
        """Generates a SQL "drop view" statement"""
        return sql_text(f"DROP VIEW {self.view_name}")

    def to_sql_statement_create_or_replace(self) -> str:
        """Generates a SQL "create or replace view" statement"""
        return sql_text(f"CREATE OR REPLACE VIEW {self.view_name} AS {self.definition}")

    @classmethod
    def from_database(cls, connection, schema) -> List["DBView"]:
        """Get a list of all functions defined in the db"""
        sql = sql_text(
            f"""
                select
                    schemaname schema_name,
                    viewname view_name,
                    definition
                from
                    pg_views
                where
                    schemaname not in ('pg_catalog', 'information_schema')
                    and schemaname::text = '{schema}';
            """
        )
        is_sqlite = is_conn_is_sqlite(db.engine.url)
        if is_sqlite:
            sql = sql_text(
                "SELECT 'databand', name, sql FROM sqlite_master WHERE type = 'view';"
            )
        rows = connection.execute(sql).fetchall()
        db_views = []
        for x in rows:
            definition = x[2]
            if is_sqlite:
                create_view_prefix = f"CREATE VIEW {x[1]} AS "
                definition = x[2][len(create_view_prefix) :]
            db_views.append(DBView(x[0], x[1], definition))

        for view in db_views:
            assert view is not None

        return db_views

    def get_compare_identity_query(self) -> str:
        """Return SQL string that returns 1 row for existing DB object"""
        if is_conn_is_sqlite(db.engine.url):
            return "SELECT name view_name FROM sqlite_master WHERE type = 'view';"
        return super().get_compare_identity_query()

    def get_compare_definition_query(self) -> str:
        """Return SQL string that returns 1 row for existing DB object"""
        if is_conn_is_sqlite(db.engine.url):
            return "SELECT name AS view_name, sql AS definition FROM sqlite_master WHERE type = 'view';"
        return super().get_compare_definition_query()

    def get_definition_comparable(self, connection) -> Tuple:
        if is_conn_is_sqlite(db.engine.url):
            return (self.schema, self.view_name, self.definition)
        return super().get_definition_comparable(connection)

    def get_identity_comparable(self, connection) -> Tuple:
        if is_conn_is_sqlite(db.engine.url):
            return (self.schema, self.view_name)
        return super().get_identity_comparable(connection)

What do you think about it in general? Do you consider supporting various DBs?

PGPolicy doesn't work with DELETE

Emitted SQL matches the second pattern in order to be explicit about columns, but the column-level form of GRANT does not support DELETE, because deleting a single column makes no sense.

GRANT { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER }
    [,...] | ALL [ PRIVILEGES ] }
    ON { [ TABLE ] table_name [, ...]
         | ALL TABLES IN SCHEMA schema_name [, ...] }
    TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ]

GRANT { { SELECT | INSERT | UPDATE | REFERENCES } ( column [, ...] )
    [,...] | ALL [ PRIVILEGES ] ( column [, ...] ) }
    ON [ TABLE ] table_name [, ...]
    TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ]

Emitted:

GRANT DELETE ( col1, col2, col3 ) ON "xxx"."yyyy" TO "api_user"
sqlalchemy.exc.DatabaseError: (psycopg2.errors.InvalidGrantOperation) invalid privilege type DELETE for column    

PGGrantTable.from_database requires `grantor = CURRENT_USER`

and grantor = CURRENT_USER

More of a question than an issue. I'm guessing there's good reason for it.

While creating the grant tables for my database, I ran into some trouble as some of my tables had different owners. I was able to resolve this by making them all owned by my migration user. I assume that is what this line in the documentation is explaining:

PGGrantTable requires the role name being used to generate migrations to match the role name that executes migrations.

Why is this constraint necessary?

Constraint Triggers

Hi,

Thank you for this great library. We are currently using alembic_utils for versioning our functions and triggers.

I have been trying to use PGTrigger to create/update constraint triggers but from looking at the docs and code I don't believe it supports them.

Is there any plan to extend the class to support them?

Thanks again

If create or replace fails for a view, drop and recreate?

quoted from @mfsjr in #4

I made an unfortunate discovery preparing to retest with v2.6.

Dropping a column from a view cannot be handled using "create or replace", which returns "ERROR: cannot drop columns from view", Postgres docs confirm this, and "alter view" doesn't do it either.

In other words, what "create or replace" actually does is laughably disconnected from the English language.

It appears the concern is breaking other views that could depend on the soon-to-be-dropped column, even if there are none (bad).

Dropping the view and then dispatching "create or replace" works. If there is a dependent view, the drop will fail (good).

So it seems that in order for view migrations to work in both directions, "create or replace view" must first drop the view in question. That will fail with a sensible message if there are dependent views and succeed otherwise.

Would that be possible?

AttributeError: module 'alembic.op' has no attribute 'create_function'

Hello

It seems that the docs are a little out of date for https://olirice.github.io/alembic_utils/examples/

I have this:

def upgrade():
    public_to_upper_6fa0de = PGFunction(
        schema="public",
        signature="to_upper(some_text text)",
        definition="""
        returns text
        as
        $$ select upper(some_text) $$ language SQL;
        """
    )

    op.create_function(public_to_upper_6fa0de)

then got this:

AttributeError: module 'alembic.op' has no attribute 'create_function'

It should be:

op.replace_entity(public_to_upper_6fa0de)

Note: my alembic version is 1.4.3

Handle a scenario when the return type of a function changes.

Suppose you have a function whose return type is TABLE(account_id text, balance numeric). You change the function to return TABLE(bank_account text, balance numeric). Changing the return type of a function is not permitted in PG, one needs to drop and create it from scratch.

IMO, this should be detected and reflected in the generated migrations.

Thoughts?

(materialised) views are created before the tables

(This issue has been reported in #39. More details have been reported here)

When the database has (materialised) views, setting up the database throws an error because the underlying tables do not exist.

This issue is happening for alembic_utils versions >= 0.2.16 and it gives this error message:

File ".../lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "transaction" does not exist
LINE 26:         FROM transaction
                      ^
....

(Background on this error at: http://sqlalche.me/e/13/f405)

I tested the same schema with versions <= 0.2.15 and it works fine.

More details:

  • Does this issue still exist in the latest version alembic_utils==0.5.3?
    Yes, actually for any version >= 0.2.16 this issue exists.

  • Does this error occur when the migration is being generated, or when you attempt to apply the migrations to the database?
    It happens when I run alembic to generate the migrations.

  • PostgreSQL version?
    psql (PostgreSQL) 13.2 (Ubuntu 13.2-1.pgdg20.04+1)

  • Python version?
    Python 3.8.8

An example to reproduce the error:

Table definition:

class Transaction(Base, Table):
    """Table definition for transaction."""

    whse_code = Column(Integer, nullable=False, primary_key=True)
    tran_date = Column(Date, nullable=False, primary_key=True)
    dist_code = Column(String, nullable=False, primary_key=True)
    requisition_code = Column(String, nullable=False, primary_key=True)

    @classmethod
    def raw_record_lookup(cls, record):
        return {
            "whse_code": record["WHSE_CODE"],
            "tran_date": record["TRAN_DATE"],
            "dist_code": record["DIST_CODE"],
            "requisition_code": record["REQUISITION_CODE"],
        }

Then, in env.py (in alembic folder), I have put:

from pathlib import Path
from alembic_utils.pg_materialized_view import PGMaterializedView

replaceable_entities = [PGMaterializedView.from_path(Path('path/to/my_mat_view.sql'))]
register_entities(replaceable_entities)

where my_mat_view.sql has the SQL definition of the materialized view:

CREATE MATERIALIZED VIEW public.transactions_recent
AS
    SELECT
        transaction.whse_code,
        transaction.tran_date,
        transaction.dist_code,
        transaction.requisition_code
    FROM transaction
    WHERE
        transaction.tran_date >= date_trunc('month'::text, now() - '2 mons'::interval)
        AND transaction.tran_date <= now()
    ORDER BY transaction.tran_date
WITH DATA;

Can't create trigger on the table from the same migration

As the title suggests, there seems to be a bug when creating a trigger on a table that doesn't exist yet and is part of the same migration. In that case the autogenerated create fails with an error.

Example definition:

schema = 'public'

extension_uuid = PGExtension(
    schema=schema,
    signature='uuid-ossp',
)

function_updated_at_auto = PGFunction(
    schema=schema,
    signature="updated_at_auto()",
    definition="""
        RETURNS TRIGGER LANGUAGE plpgsql AS $$
        BEGIN
          NEW.updated_at = NOW();
          RETURN NEW;
        END;
        $$;
    """
)

test = sa.Table(
    "test",
    metadata,
    sa.Column("id", pg.UUID, nullable=False, server_default=sa.func.uuid_generate_v1mc(), primary_key=True),
    sa.Column("name", pg.TEXT, nullable=False),
    sa.Column("created_dt", pg.TIMESTAMP(timezone=True), nullable=False, server_default=sa.func.now()),
    sa.Column("updated_dt", pg.TIMESTAMP(timezone=True), nullable=False, server_default=sa.func.now()),
)

trigger_test = PGTrigger(
    schema=schema,
    signature=f"{test.name}_updated_at_trigger",
    on_entity=f"{schema}.{test.name}",
    definition=f"""
        BEFORE UPDATE ON {schema}.{test.name}
        FOR EACH ROW EXECUTE PROCEDURE {schema}.updated_at_auto()
    """,
)

And registering this in env.py:

register_entities([
    extension_uuid,
    function_updated_at_auto,
    trigger_test
])

This results in error:

INFO  [alembic_utils.replaceable_entity] Detecting required migration op PGTrigger PGTrigger: public.test_updated_at_trigger False public.test
...
sqlalchemy.exc.ProgrammingError: (sqlalchemy.dialects.postgresql.asyncpg.ProgrammingError) <class 'asyncpg.exceptions.UndefinedTableError'>: relation "public.test" does not exist
[SQL: CREATE TRIGGER "test_updated_at_trigger" BEFORE UPDATE ON public.test FOR EACH ROW EXECUTE PROCEDURE public.updated_at_auto()]
(Background on this error at: https://sqlalche.me/e/14/f405)

Creating the trigger works, however, if the table already exists. As a workaround the migration has to be broken up into two: the first with all other entities, and the second with only the triggers attached to the tables. This seems like a bug; splitting into separate migrations should not be necessary.

Views referencing each other can't be created simultaneously

I can't seem to create multiple views, where one references a previous view, at the same time.

An minimal example of what I would like to define (at the same time, so both views should be generated within one migration):

first_view = PGView(
    schema="public",
    signature="first_view",
    definition="SELECT * FROM some_table;"
)
second_view = PGView(
    schema="public",
    signature="second_view",
    definition="SELECT * FROM first_view;"
)

I get the following error while trying to autogenerate the alembic migration: psycopg2.errors.UndefinedTable: relation "second_view" does not exist.
It seems that while comparing the existing views with the newly defined views, alembic actually attempts to execute the view definition, which fails for the second view as the first view is then not yet defined.

def simulate_entity(connection, entity):
    """Creates *entity* in the *dummy_schema* to simulate what it would be transformed into if it were created in the database"""
    dummy_schema = "alembic_utils"
    assert entity.schema == dummy_schema
    cls = entity.__class__
    adj_target = cls(dummy_schema, entity.signature, entity.definition)
    connection.execute(f"drop schema if exists {dummy_schema} cascade")
    connection.execute(f"create schema if not exists {dummy_schema}")
    try:
        connection.execute(adj_target.to_sql_statement_create())
        yield
    finally:
        connection.execute(f"drop schema if exists {dummy_schema} cascade")

Migrations not auto generated for views

Hi,
We've recently upgraded from 0.2.10 to 0.7.2 and it seems like detecting changes doesn't happen anymore.
We tried the different versions in between, and no migrations are generated after 0.4.2.

Ability to avoid duplicate alembic comparator dispatch with `registered_entities`

Has some resemblance to #56, or at least covers the same code, but i think is a different problem.

I'm using https://github.com/schireson/pytest-alembic to test my migrations, which ends up re-executing the env.py many times within a single process (in many cases, once per migration), which leads to the same comparator getting re-registered numerous times. It doesn't break alembic, but it does get progressively slower.

Due to the way register_entities works (it applies @comparators.dispatch_for("schema") each time it is called), there's not a great way to hook it into alembic without this happening.

A random example of how this could be avoided through a change to alembic_utils, might look like

from dataclasses import dataclass, field
from typing import List

@dataclass
class Registry:
    entities: List = field(default_factory=list)

    def register_entities(self, ...):
         """I'm what is now `register_entities`."""

registry = Registry()
register_entities = registry.register_entities

@comparators.dispatch_for("schema")
def compare_registered_entities(...):
    ...

This suggestion is sort-of based on how sqlalchemy/alembic themselves do things, but there are other, probably less drastic options. For example, just defining the functions in such a way that i could make the call to comparators.dispatch_for myself.

PGFunction.from_database fails when schema contains a procedure

alembic_utils/pg_function.py's from_database method queries Postgres metadata for SQL definitions of existing functions to see if a PGFunction should be created or replaced in a migration. Unfortunately its SQL query is missing a critical clause and as a result it will attempt to parse a procedure as a PGFunction if a procedure exists in the schema.

Error (truncated):

File "/usr/local/lib/python3.7/site-packages/alembic_utils/pg_function.py", line 128, in from_database
db_functions = [PGFunction.from_sql(x[3]) for x in rows]
File "/usr/local/lib/python3.7/site-packages/alembic_utils/pg_function.py", line 128, in
db_functions = [PGFunction.from_sql(x[3]) for x in rows]
File "/usr/local/lib/python3.7/site-packages/alembic_utils/pg_function.py", line 42, in from_sql
raise SQLParseFailure(f'Failed to parse SQL into PGFunction """{sql}"""')
alembic_utils.exceptions.SQLParseFailure: Failed to parse SQL into PGFunction """CREATE OR REPLACE PROCEDURE [...] """

I believe the solution for this issue is to update the SQL query in from_database with an additional clause and p.prokind = 'f', i.e.

        with extension_functions as (
            select
                objid as extension_function_oid
            from
                pg_depend
            where
                -- depends on an extension
                deptype='e'
                -- is a proc/function
                and classid = 'pg_proc'::regclass
        )

        select
            n.nspname as function_schema,
            p.proname as function_name,
            pg_get_function_arguments(p.oid) as function_arguments,
            case
                when l.lanname = 'internal' then p.prosrc
                else pg_get_functiondef(p.oid)
            end as create_statement,
            t.typname as return_type,
            l.lanname as function_language
        from
            pg_proc p
            left join pg_namespace n on p.pronamespace = n.oid
            left join pg_language l on p.prolang = l.oid
            left join pg_type t on t.oid = p.prorettype
            left join extension_functions ef on p.oid = ef.extension_function_oid
        where
            n.nspname not in ('pg_catalog', 'information_schema')
            -- Filter out functions from extensions
            and ef.extension_function_oid is null
            and n.nspname::text = '{schema}'
            and p.prokind = 'f';

Escaped strings make `--autogenerate` always autogenerate

I produced a view a little something like this:

query = "SELECT concat('https://something/', column) FROM table"
some_view = PGView(schema="public", signature="example", definition=query)

Which generated a revision something like so

    public_something = PGView(
        schema="public",
        signature="something",
        definition="SELECT concat('https\\://something/', column) FROM table",
    )
    op.replace_entity(public_something)

Notably the \\: section, which seems like an erroneous double escape or something.

If I run this upgrade, and re --autogenerate, it generates a new revision with \\\\:.

If you remove the \\ generated in the first place, all seems well.
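The growth pattern is consistent with an escape step being applied on every autogenerate pass. A pure-Python illustration of why that never converges; this models the symptom only, not the actual alembic_utils code path:

```python
# Models the symptom described above: if ':' is escaped again on each
# --autogenerate pass, every revision gains another backslash layer and
# the definition comparison can never stabilize.
def escape_colons(definition: str) -> str:
    return definition.replace(":", r"\:")

d0 = "SELECT concat('https://something/', col) FROM tbl"
d1 = escape_colons(d0)  # first revision: one backslash before the colon
d2 = escape_colons(d1)  # next pass: two backslashes, and so on
```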

Better handling of quoted entities

Firstly, thanks for this amazing library. While integrating with our system we've hit a bit of a snag with the way quoted entities are handled.

In our case we have triggers that need to be quoted in order to match (not created by us, created by Hasura, so I'd rather handle them, if possible).

As you know, ReplaceableEntity switches the signatures to unquoted on the way in for both the entities and the objects reflected from the db. That means that when it comes to the drop stage in get_database_definition, the existing triggers aren't dropped and hence the entities are seen as new. The end result is that we end up with CreateOps for any quoted entities and DropOps for their corresponding db objects every time.

As a workaround, I'm thinking to subclass PGTrigger so that I can handle the quoted case a bit better, but I'd like to get your opinion on it in case you'd like me to submit a PR back to handle it generally (and if so, I'd rather approach it in your preferred way from the outset).

It seems like it would be ok to simply not coerce to the unquoted version in the constructor (of ReplaceableEntity, ideally) and then change to_variable_name to switch to unquoted first.

I'll try that locally and await your thoughts before submitting a PR.

Thanks again.

(materialised) views are created before the tables

When the database has (materialised) views, setting up the database throws an error because the underlying tables do not exist.

This issue is happening for alembic_utils versions >= 0.2.16 and it gives this error message:

File ".../lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "my_table" does not exist
LINE 26:         FROM my_table
                      ^
....

(Background on this error at: http://sqlalche.me/e/13/f405)

I tested the same schema with versions <= 0.2.15 and it works fine.

PGFunction does not work < pg 10 (min officially supported version is 11)

Received via email from remi.trosset:

I have been using alembic_utils for a while and I hit a fatal bug in the last release.
When I call PGFunction.from_database, I get this error:
{ProgrammingError}(psycopg2.errors.UndefinedColumn) column p.prokind does not exist

I didn’t have this error in the previous version.

I’m using a database PG version 10.

Cannot update view

Hi,

I am new to Alembic but seem to be able to use it, so far.

No problem getting the first migration for creating the view, ran update to head and it was created, no problems.

When I added one field to the view and autogenerated, a new migration was created, but it appears to be creating/dropping instead of replacing:

def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    public_user_display = PGView(
                schema="PUBLIC",
                signature="user_display",
                definition='select u.rowid, u.email, u.created_at, u.deleted_at \n        from users u'
            )

    op.create_entity(public_user_display)
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    public_user_display = PGView(
                schema="PUBLIC",
                signature="user_display",
                definition='# not required for op'
            )

    op.drop_entity(public_user_display)
    # ### end Alembic commands ###

Any ideas?

Proposed PGSchema object

Hello @olirice! I am back with another issue.

For my use case, I wanted to be able to manage schemas through alembic as well as all of the views, functions etc that I need. To do this, I have thrown together a PGSchema ReplaceableEntity that hooks into your system so I was wondering if this is something you'd be interested in adding to the package as a supported module? Obviously would need a bit of work to tidy it up, make it safer and ensure it's cross-db compatible but just thought I'd offer.

Basic implementation:

import sqlalchemy as sa

from alembic_utils.replaceable_entity import ReplaceableEntity

class PGSchema(ReplaceableEntity):

    type_ = "schema"

    def __init__(self, schema: str, exists: bool = True):
        super().__init__(
            schema="",
            signature=schema,
            definition=schema
        )
        self.exists = exists

    def to_sql_statement_create(self) -> sa.text:
        return sa.text(f"create schema {self.signature};")

    def to_sql_statement_drop(self, cascade=False) -> sa.text:
        return sa.text(f"drop schema {self.signature};")

    def to_sql_statement_create_or_replace(self) -> sa.text:
        raise NotImplementedError()

    @property
    def identity(self) -> str:
        return f"{self.__class__.__name__}: {self.signature} {self.exists}"

    def to_variable_name(self) -> str:
        return f"schema_{self.signature}"

    def render_self_for_migration(self, omit_definition=False) -> str:
        
        var_name = self.to_variable_name()
        class_name = self.__class__.__name__

        return f"""{var_name} = {class_name}(schema="{self.signature}")\n"""

    @classmethod
    def from_database(cls, sess, schema):
       
        return [PGSchema(schema=x[0]) for x in sess.execute(sa.text("""
            select
                schema_name
            from
                information_schema.schemata 
            where 
                schema_owner != 'rdsadmin' 
                and schema_name != 'public';
        """)).fetchall()]

    def get_database_definition(self, sess, dependencies = None) -> "PGSchema":
        
        exists = sess.execute(sa.text("""
            select exists(
                select 
                    1
                from
                    information_schema.schemata 
                where 
                    schema_owner != 'rdsadmin' 
                    and schema_name = :schema
            );
        """), {"schema": self.signature}).fetchone()[0]

        if exists:
            return self
        else:
            return PGSchema(schema=self.schema, exists=False)

[Question/discussion] Can register_entities only be called once during setup?

I am trying to grasp how to deal with modularized code that defines entities in different places. As far as I understand from the code, one is supposed to call the register_entities function once, from the alembic env.py. Now I have my code split up in several modules; is there a way I can have each module register its own entities? Or should each module expose a list of entities, which I then import in my env.py?

In the spirit of SQLAlchemy it would be nice to register an entity at instantiation and be done with it (for SQLAlchemy that means deriving from Base). Perhaps it would be possible to add a similar mechanism that registers an entity on each subclass instantiation, e.g. using the `__init_subclass__` mechanism.
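The auto-registration idea could be roughed out like this (pure-Python illustration; `Entity` is a stand-in for alembic_utils' ReplaceableEntity, not the library's actual API):

```python
# Sketch: every entity instance adds itself to a module-level registry on
# creation, so env.py only needs to import the modules and pass REGISTRY
# to register_entities. Names here are illustrative assumptions.
REGISTRY: list["Entity"] = []


class Entity:
    def __init__(self, signature: str):
        self.signature = signature
        REGISTRY.append(self)  # register every instance on creation


# entities defined anywhere in the codebase register themselves on import
view_a = Entity("view_a")
func_b = Entity("func_b")

assert [e.signature for e in REGISTRY] == ["view_a", "func_b"]
```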

Hooks before/after function creation

Hi,

this is more of a question than a real issue:
Is there an easy way to define hooks that get executed before a function is updated/created?

I use functions to define complex check constraints, but if I update the function, the check constraints won't be evaluated for all existing rows, but only afterwards on inserts/deletes.
To establish strong integrity I would like to drop those constraints before I update a function and recreate them afterwards.

Is there an easy approach?

Thanks and best regards,
Adrian Schneider

Handling functions defined by extensions; ignoring some functions

So my use case is that I import the uuid-ossp extension, which defines a bunch of functions. When I then try to use alembic_utils to manage functions I define myself, and autogenerate migrations, a migration will contain statements to the effect of removing all uuid-related functions, since definitions for these don't exist in the app's codebase.

I think there could be two solutions for solving this:

  1. Make alembic_utils aware of extensions and functions defined by them. Add a method for managing extensions, and then extract a list of functions defined by that extension and add that to the list of entities.

  2. Add a method of ignoring some functions altogether. This would require a list of function signatures to compare against.

Sounds to me like those would be two different features - I could have a stab at them, but I'd like to know if there are any other ways you could think of for solving the problem.
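Option 2 could be sketched as a simple filter applied before registering entities; the ignore list, `filter_entities` helper, and `FakeEntity` class below are illustrative assumptions, not alembic_utils API:

```python
# Sketch of an "ignore list" filter one could apply before calling
# register_entities, assuming each entity exposes a .signature attribute
# (as alembic_utils entities do).
IGNORED_SIGNATURES = {"uuid_generate_v4()", "uuid_generate_v1()"}


def filter_entities(entities):
    """Drop entities whose signature is on the ignore list."""
    return [e for e in entities if e.signature not in IGNORED_SIGNATURES]


class FakeEntity:
    def __init__(self, signature):
        self.signature = signature


kept = filter_entities([FakeEntity("my_func()"), FakeEntity("uuid_generate_v4()")])
assert [e.signature for e in kept] == ["my_func()"]
```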

Python 3.6 support

Looking forward to using alembic_utils in my project but:

$ pip install alembic_utils
...
Package 'alembic-utils' requires a different Python: 3.6.9 not in '>=3.7'

This issue might be helpful for those who also looking for py36 support so I decided to open it.

Could you add support for py36?

Make schema name optional

SQLAlchemy / Alembic does not set a default schema name for Postgres, nor does it require that objects set a schema. Without setting a schema, the Postgres search_path setting determines where objects are created.

However, alembic_utils requires that all objects it defines have a schema set. This makes it hard to support running a migration against an environment where the schema is set with SET schema TO ....

Misleading error message when updated function has syntax errors

Hello, I got hit with this error while on the async_engine branch, so not sure if this is applicable to master.
When migrating one of my functions, I made a syntax error in the new body, which resulted in the following error, which says it can not create the function because it already exists:

  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic_utils/replaceable_entity.py", line 286, in compare_registered_entities
    maybe_op = entity.get_required_migration_op(
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic_utils/replaceable_entity.py", line 158, in get_required_migration_op
    db_def = self.get_database_definition(sess, dependencies=dependencies)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic_utils/replaceable_entity.py", line 97, in get_database_definition
    with simulate_entity(sess, self, dependencies) as sess:
  File "/usr/lib/python3.9/contextlib.py", line 119, in __enter__
    return next(self.gen)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic_utils/simulate.py", line 62, in simulate_entity
    sess.execute(entity.to_sql_statement_create())
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 1689, in execute
    result = conn._execute_20(statement, params or {}, execution_options)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1611, in _execute_20
    return meth(self, args_10style, kwargs_10style, execution_options)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1478, in _execute_clauseelement
    ret = self._execute_context(
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1842, in _execute_context
    self._handle_dbapi_exception(
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2023, in _handle_dbapi_exception
    util.raise_(
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
    raise exception
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1799, in _execute_context
    self.dialect.do_execute(
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 717, in do_execute
    cursor.execute(statement, parameters)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 449, in execute
    self._adapt_connection.await_(
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 76, in await_only
    return current.driver.switch(awaitable)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 129, in greenlet_spawn
    value = await result
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 424, in _prepare_and_execute
    self._handle_exception(error)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 359, in _handle_exception
    self._adapt_connection._handle_exception(error)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 652, in _handle_exception
    raise translated_error from error
sqlalchemy.exc.ProgrammingError: (sqlalchemy.dialects.postgresql.asyncpg.ProgrammingError) <class 'asyncpg.exceptions.DuplicateFunctionError'>: function "job_after_created" already exists with same argument types
[SQL: CREATE FUNCTION "public"."job_after_created"() returns trigger as
$$
begin
    perform pg_notify('rndflow', json_build_object(
        'project', new.project_id,
        'node',    new.node_id,
        'layer',   new.data_layer_id,
        'job',     new.id,
        'event',   'job_created'
        )::text);

    perform pg_notify('rndflow', json_build_object(
        'executor', coalesce(n.executor_id, p.executor_id)
        'event', job_created
        )::text)
    from nodes n, projects p
    where n.id = new.node_id
      and p.id = new.project_id
      and coalesce(n.executor_id, p.executor_id) is not null;

    return null;
end;
$$ language plpgsql]
(Background on this error at: https://sqlalche.me/e/14/f405)

The exception is thrown here in the alembic_utils code:

try:
    sess.begin_nested()
    sess.execute(entity.to_sql_statement_drop(cascade=True))
    sess.execute(entity.to_sql_statement_create())
    did_yield = True
    yield sess
except:
    if did_yield:
        # the error came from user code after the yield
        # so we can exit
        raise
    # Try again without the drop in case the drop raised
    # a does not exist error
    sess.rollback()
    sess.begin_nested()
    sess.execute(entity.to_sql_statement_create())
    yield sess

Looks like my syntax error got caught by the try block, but somehow passed the did_yield guard.

register_entities | InvalidRequestError: A transaction is already begun for this connection

Hello,

I set up an extremely basic test view in my env.py file to try out this package but when I ran alembic revision --autogenerate, alembic_utils raised the following exception:

  File "/usr/local/lib/python3.7/site-packages/alembic_utils/replaceable_entity.py", line 251, in compare_registered_entities
    sess.rollback()
UnboundLocalError: local variable 'sess' referenced before assignment

I dug into the replaceable_entity file and found that the culprit for this issue is this block of code:

try:
    transaction = connection.begin()
    sess = Session(bind=connection)
    ordered_entities: List[T] = solve_resolution_order(sess, entities)
finally:
    sess.rollback()

The core of the problem appears to be the connection.begin() line, which raises the following silent exception, causing sess to be undeclared:

  File "/usr/local/lib/python3.7/site-packages/alembic_utils/replaceable_entity.py", line 247, in compare_registered_entities
    transaction = connection.begin()
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/future/engine.py", line 172, in begin
    return super(Connection, self).begin()
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 741, in begin
    "a transaction is already begun for this connection"
sqlalchemy.exc.InvalidRequestError: a transaction is already begun for this connection

I found that I was able to avoid this issue if I simply commented out the transaction definition as it also appears to not be used as part of the proceeding code. Any ideas? I am confused as to why this is not an issue that has been raised before. After disabling the line, I was able to run my revision script without issue and correctly generate my test view.

pip reference:

alembic                            1.6.4
alembic-utils                      0.7.2

Unable to GRANT SELECT on all columns (have to be explicit about which columns)

Thanks for this library! Searched but didn't see anything about this. It seems like to GRANT SELECT using PGGrantTable, we need to explicitly specify which columns to GRANT SELECT. This excludes statements like the following from being easily encoded with this class:

GRANT SELECT ON <table> to <user>;

Was just curious if this was something that had been encountered yet.

Remove psycopg2-binary dependency

Recently we added alembic_utils to our application. When I did so I started getting errors in our AWS lambda environment with psycopg2. I believe this stems from alembic_utils dependency on psycopg2-binary. I believe the order of installation of psycopg2 and psycopg2-binary matters in this case. From the psycopg2 docs, python packages should not have a dependency on psycopg2-binary. From https://www.psycopg.org/docs/install.html: "If you are the maintainer of a published package depending on psycopg2 you shouldn’t use psycopg2-binary as a module dependency"

Can you remove psycopg2-binary from `install_requires`, and possibly move it to DEV_REQUIRES? From the looks of the source code, psycopg2 is never imported, so it's not a direct dependency. Alembic does not list psycopg2 or psycopg2-binary as a dependency either.

Creating auto migration slow for PGMaterializedView with data

I am creating a bunch of materialized views (and left the default with_data=True) and generating the migration was really slow (many minutes). On inspection, it seemed alembic was waiting for the database.

Of course the point of these materialized views is to speed up certain access patterns, and the underlying queries are quite slow; that's why I am materializing them.

Now I want the migration to create the views with data, but it seems like that causes the autogenerate step to also run with data. After setting with_data=False the automigration is generated in a few seconds.

I suspect that during autogeneration the query is somehow run against the database, perhaps to check for errors? Would it make sense to temporarily use with_data=False during auto migration generation?

Can't drop external objects

I'm currently trying to hack together a ReplaceableObject for creating and dropping composite types, and at least until it's cleaned up a bit I'm keeping it in our application repository.

For getting alembic_utils to create the objects of my type, I just have to append my own entity types to the entity_types argument of register_entities. But when checking for whether it should drop anything, alembic_utils calls collect_subclasses again and only checks the returned entity types, meaning that my type will be ignored for drop checks.

subclasses = collect_subclasses(alembic_utils, ReplaceableEntity)
for entity_class in subclasses:
    if entity_class not in allowed_entity_types:
        continue

Why is it doing this double check there? I can't think of any reason for it not to just loop over allowed_entity_types, because it seems very strange to create objects of other types but not be able to drop them.

`replace_entity` with a SQL view does not allow insertion or re-ordering of columns in PostGres

Reproducible

  1. Create my_view.py:

first_view = PGView(
    schema="public",
    signature="first_view",
    definition="""select 'Test' as "Foo" from information_schema.tables""",
)

  2. Create migration and run
  3. Modify my_view.py:

first_view = PGView(
    schema="public",
    signature="first_view",
    definition="""select 1 as "Bar", 'Test' as "Foo" from information_schema.tables""",
)

  4. Create migration and run

Actual

Error

cannot change name of view column "Foo" to "Bar"
[SQL: CREATE OR REPLACE VIEW "public"

Expected

Success

Suggestion

It looks like this is caused by using SQL command CREATE OR REPLACE VIEW, which is known to only support appending columns, not inserting or re-ordering

CREATE OR REPLACE VIEW is similar, but if a view of the same name already exists, it is replaced. The new query must generate the same columns that were generated by the existing view query (that is, the same column names in the same order and with the same data types), but it may add additional columns to the end of the list. The calculations giving rise to the output columns may be completely different.

https://www.postgresql.org/docs/9.3/sql-createview.html

As such, it seems as though the view should be dropped and recreated
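Until that happens, a manual workaround (assuming momentarily losing the view is acceptable) is to replace the generated replace statement in the migration with an explicit drop and recreate, along these lines:

```sql
-- Manual workaround: drop and recreate instead of CREATE OR REPLACE,
-- so columns can be inserted or re-ordered freely.
DROP VIEW "public"."first_view";
CREATE VIEW "public"."first_view" AS
  select 1 as "Bar", 'Test' as "Foo" from information_schema.tables;
```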

Integrate with create_view from sqlalchemy-utils?

Not sure which side should do the integration, but I think it would be really nice if this could add autogeneration for views created with sqlalchemy-utils' create_view.

(if this is something that sounds like a good idea, I'm willing to try to work on it, but I'll probably need some guidance)

PGFunction identity doesn't take source code into account

Looks like only the signature is taken into account when comparing functions, rather than the body of the function.

This isn't super easy to solve generically across all languages supported by postgres, but I think a good effort could be made on SQL & PL/pgSQL. How about this:

  • Retrieve the source for the function,
  • Normalise the code (either an existing SQL formatter, or cook something up in python that's good enough)
  • Compare the source.

The code will live in pg_proc.prosrc for PL/pgSQL, not sure about others.

What do you think?
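The normalise-and-compare idea could be roughed out like this (pure-Python sketch; the lowercase/collapse-whitespace normalization is a crude stand-in for a real SQL formatter, and PROSRC_QUERY just illustrates the catalog lookup one might run):

```python
import re

# Catalog query one might run to fetch the stored body for comparison;
# pg_proc.prosrc / pg_namespace come straight from the Postgres catalogs.
PROSRC_QUERY = """
select p.prosrc
from pg_proc p
join pg_namespace n on n.oid = p.pronamespace
where n.nspname = :schema and p.proname = :name
"""


def normalize_sql(src: str) -> str:
    """Lowercase and collapse whitespace -- a rough stand-in for a real formatter."""
    return re.sub(r"\s+", " ", src).strip().lower()


def bodies_match(db_body: str, local_body: str) -> bool:
    return normalize_sql(db_body) == normalize_sql(local_body)


# Example: differences in case and whitespace are ignored
assert bodies_match("BEGIN\n  RETURN 1;\nEND;", "begin return 1; end;")
```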

Error in render_revert_entity when used with asyncpg engine

First of all, thank you for the great library, it is so much more fun than writing (and tracking) function definitions in alembic migration files!

I am using alembic_utils with asyncio sqlalchemy:

alembic==1.7.4
alembic-utils==0.7.3
SQLAlchemy==1.4.25

New functions are generated successfully, but when a function definition has been changed, alembic-utils fails with the following stack trace:

alembic revision -m 'test' --autogenerate
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.ddl.postgresql] Detected sequence named 'projects_id_seq' as owned by integer column 'projects(id)', assuming SERIAL and omitting
INFO  [alembic_utils.depends] Resolving entities with no dependencies
INFO  [alembic_utils.depends] Resolving entities with dependencies. This may take a minute
INFO  [alembic_utils.replaceable_entity] Detecting required migration op PGFunction PGFunction: public.workspace_is_visible( p_workspace_id integer, p_user_id integer )
INFO  [alembic_utils.replaceable_entity] Detecting required migration op PGFunction PGFunction: public.workspace_permission_required( p_workspace_id integer, p_user_id integer, p_permission workspace_permission )
INFO  [alembic_utils.replaceable_entity] Detecting required migration op PGFunction PGFunction: public.workspace_user_permission_list( p_workspace_id integer, p_user_id integer )
INFO  [alembic_utils.replaceable_entity] Detected ReplaceOp op for PGFunction PGFunction: public.workspace_user_permission_list( p_workspace_id integer, p_user_id integer )
INFO  [alembic_utils.replaceable_entity] Detecting required migration op PGFunction PGFunction: public.project_is_visible( p_project_id integer, p_user_id integer )
INFO  [alembic_utils.replaceable_entity] Detecting required migration op PGFunction PGFunction: public.project_permission_required( p_project_id integer, p_user_id integer, p_permission project_permission )
Traceback (most recent call last):
  File "/home/demidov/work/rndflow/api/env/bin/alembic", line 8, in <module>
    sys.exit(main())
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic/config.py", line 588, in main
    CommandLine(prog=prog).main(argv=argv)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic/config.py", line 582, in main
    self.run_cmd(cfg, options)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic/config.py", line 559, in run_cmd
    fn(
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic/command.py", line 234, in revision
    scripts = [script for script in revision_context.generate_scripts()]
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic/command.py", line 234, in <listcomp>
    scripts = [script for script in revision_context.generate_scripts()]
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic/autogenerate/api.py", line 600, in generate_scripts
    yield self._to_script(generated_revision)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic/autogenerate/api.py", line 500, in _to_script
    render._render_python_into_templatevars(
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic/autogenerate/render.py", line 87, in _render_python_into_templatevars
    _render_cmd_body(downgrade_ops, autogen_context)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic/autogenerate/render.py", line 109, in _render_cmd_body
    lines = render_op(autogen_context, op)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic/autogenerate/render.py", line 127, in render_op
    lines = util.to_list(renderer(autogen_context, op))
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/alembic_utils/reversible_op.py", line 155, in render_revert_entity
    with engine.connect() as connection:
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/future/engine.py", line 419, in connect
    return super(Engine, self).connect()
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3194, in connect
    return self._connection_cls(self, close_with_result=close_with_result)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__
    else engine.raw_connection()
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3273, in raw_connection
    return self._wrap_pool_connect(self.pool.connect, _connection)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3240, in _wrap_pool_connect
    return fn()
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 310, in connect
    return _ConnectionFairy._checkout(self)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 868, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 476, in checkout
    rec = pool._do_get()
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/pool/impl.py", line 256, in _do_get
    return self._create_connection()
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 256, in _create_connection
    return _ConnectionRecord(self)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 371, in __init__
    self.__connect()
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 666, in __connect
    pool.logger.debug("Error on connect(): %s", e)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
    compat.raise_(
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
    raise exception
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 661, in __connect
    self.dbapi_connection = connection = pool._invoke_creator(self)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/create.py", line 590, in connect
    return dialect.connect(*cargs, **cparams)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 584, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 748, in connect
    await_only(self.asyncpg.connect(*arg, **kw)),
  File "/home/demidov/work/rndflow/api/env/lib/python3.9/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 61, in await_only
    raise exc.MissingGreenlet(
sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_() here. Was IO attempted in an unexpected place? (Background on this error at: https://sqlalche.me/e/14/xd2s)
sys:1: RuntimeWarning: coroutine 'connect' was never awaited

A temporary solution for me is to use alembic with the synchronous psycopg2 engine.
