sql-migrate's Introduction

sql-migrate

SQL Schema migration tool for Go. Based on gorp and goose.

Features

  • Usable as a CLI tool or as a library
  • Supports SQLite, PostgreSQL, MySQL, MSSQL and Oracle databases (through gorp)
  • Can embed migrations into your application
  • Migrations are defined with SQL for full flexibility
  • Atomic migrations
  • Up/down migrations to allow rollback
  • Supports multiple database types in one project
  • Works great with other libraries such as sqlx
  • Supported on go1.13+

Installation

To install the library and command line program, use the following:

go get -v github.com/rubenv/sql-migrate/...

For Go 1.18 and later, use:

go install github.com/rubenv/sql-migrate/...@latest

Usage

As a standalone tool

$ sql-migrate --help
usage: sql-migrate [--version] [--help] <command> [<args>]

Available commands are:
    down      Undo a database migration
    new       Create a new migration
    redo      Reapply the last migration
    status    Show migration status
    up        Migrates the database to the most recent version available

Each command requires a configuration file (which defaults to dbconfig.yml, but can be specified with the -config flag). This config file should specify one or more environments:

development:
  dialect: sqlite3
  datasource: test.db
  dir: migrations/sqlite3

production:
  dialect: postgres
  datasource: dbname=myapp sslmode=disable
  dir: migrations/postgres
  table: migrations

(See more examples for different setups here.)

The datasource field is passed through os.ExpandEnv, so it can reference environment variables. This is useful if you don't want to store credentials in the config file:

production:
  dialect: postgres
  datasource: host=prodhost dbname=proddb user=${DB_USER} password=${DB_PASSWORD} sslmode=require
  dir: migrations
  table: migrations

The table setting is optional and will default to gorp_migrations.

The environment that will be used can be specified with the -env flag (defaults to development).

Use the --help flag in combination with any of the commands to get an overview of its usage:

$ sql-migrate up --help
Usage: sql-migrate up [options] ...

  Migrates the database to the most recent version available.

Options:

  -config=dbconfig.yml   Configuration file to use.
  -env="development"     Environment.
  -limit=0               Limit the number of migrations (0 = unlimited).
  -version               Run migrate up to a specific version, eg: the version number of migration 1_initial.sql is 1.
  -dryrun                Don't apply migrations, just print them.

The new command creates a new empty migration template using the following pattern <current time>-<name>.sql.
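The timestamp appears to use a YYYYMMDDHHMMSS layout (an assumption based on the generated filenames shown below; check the files your version produces). A shell sketch of the equivalent filename:

```shell
# Build a migration filename the way `sql-migrate new create_users` would.
# The YYYYMMDDHHMMSS layout is an assumption, not taken from the source.
name="create_users"
file="$(date +%Y%m%d%H%M%S)-${name}.sql"
echo "$file"
# e.g. 20240101120000-create_users.sql
```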

The up command applies all available migrations. By contrast, down will only apply one migration by default. This behavior can be changed for both commands with the -limit and -version parameters. Note that -version takes priority over -limit when both are given.

The redo command will unapply the last migration and reapply it. This is useful during development, when you're writing migrations.

Use the status command to see the state of the applied migrations:

$ sql-migrate status
+---------------+-----------------------------------------+
|   MIGRATION   |                 APPLIED                 |
+---------------+-----------------------------------------+
| 1_initial.sql | 2014-09-13 08:19:06.788354925 +0000 UTC |
| 2_record.sql  | no                                      |
+---------------+-----------------------------------------+

Running Test Integrations

You can see example setups for different databases by executing the .sh files in test-integration:

# Run mysql-env.sh example (you need to be in the project root directory)

./test-integration/mysql-env.sh

MySQL Caveat

If you are using MySQL, you must append ?parseTime=true to the datasource configuration. For example:

production:
  dialect: mysql
  datasource: root@/dbname?parseTime=true
  dir: migrations/mysql
  table: migrations

See here for more information.

Oracle (oci8)

The Oracle driver is oci8. It is not pure Go and relies on the Oracle Instant Client; more detailed information is in the oci8 repository.

Install with Oracle support

To install the library and command line program, use the following:

go get -tags oracle -v github.com/rubenv/sql-migrate/...

development:
  dialect: oci8
  datasource: user/password@localhost:1521/sid
  dir: migrations/oracle
  table: migrations

Oracle (godror)

The Oracle driver is godror. It is not pure Go and relies on the Oracle Instant Client; more detailed information is in the godror repository.

Install with Oracle support

To install the library and command line program, use the following:

  1. Install sql-migrate:

go get -tags godror -v github.com/rubenv/sql-migrate/...

  2. Download the Oracle Instant Client (the example below is for macOS; pick the package for your platform from the Instant Client downloads):

wget https://download.oracle.com/otn_software/mac/instantclient/193000/instantclient-basic-macos.x64-19.3.0.0.0dbru.zip

  3. Configure the LD_LIBRARY_PATH environment variable:

export LD_LIBRARY_PATH=your_oracle_instant_client_path/instantclient_19_3

development:
  dialect: godror
  datasource: user/password@localhost:1521/sid
  dir: migrations/oracle
  table: migrations

As a library

Import sql-migrate into your application:

import "github.com/rubenv/sql-migrate"

Set up a source of migrations. This can be from memory, from a set of files, from bindata (more on that later), or from any library that implements http.FileSystem:

// Hardcoded strings in memory:
migrations := &migrate.MemoryMigrationSource{
    Migrations: []*migrate.Migration{
        &migrate.Migration{
            Id:   "123",
            Up:   []string{"CREATE TABLE people (id int)"},
            Down: []string{"DROP TABLE people"},
        },
    },
}

// OR: Read migrations from a folder:
migrations := &migrate.FileMigrationSource{
    Dir: "db/migrations",
}

// OR: Use migrations from a packr box
// Note: Packr is no longer supported, your best option these days is [embed](https://pkg.go.dev/embed)
migrations := &migrate.PackrMigrationSource{
    Box: packr.New("migrations", "./migrations"),
}

// OR: Use pkger which implements `http.FileSystem`
migrationSource := &migrate.HttpFileSystemMigrationSource{
    FileSystem: pkger.Dir("/db/migrations"),
}

// OR: Use migrations from bindata:
migrations := &migrate.AssetMigrationSource{
    Asset:    Asset,
    AssetDir: AssetDir,
    Dir:      "migrations",
}

// OR: Read migrations from a `http.FileSystem`
migrationSource := &migrate.HttpFileSystemMigrationSource{
    FileSystem: httpFS,
}

Then use the Exec function to upgrade your database:

db, err := sql.Open("sqlite3", filename)
if err != nil {
    // Handle errors!
}

n, err := migrate.Exec(db, "sqlite3", migrations, migrate.Up)
if err != nil {
    // Handle errors!
}
fmt.Printf("Applied %d migrations!\n", n)

Note that n can be greater than 0 even if there is an error: any migration that succeeded will remain applied even if a later one fails.

Check the GoDoc reference for the full documentation.

Writing migrations

Migrations are defined in SQL files, which contain a set of SQL statements. Special comments are used to distinguish up and down migrations.

-- +migrate Up
-- SQL in section 'Up' is executed when this migration is applied
CREATE TABLE people (id int);


-- +migrate Down
-- SQL in section 'Down' is executed when this migration is rolled back
DROP TABLE people;

You can put multiple statements in each block, as long as you end them with a semicolon (;).

You can alternatively set up a separator string that matches an entire line by setting sqlparse.LineSeparator. This can be used to imitate, for example, MS SQL Query Analyzer functionality where commands can be separated by a line with contents of GO. If sqlparse.LineSeparator is matched, it will not be included in the resulting migration scripts.
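For example, assuming sqlparse.LineSeparator has been set to "GO" in your application (a hypothetical configuration), a migration file could look like this; the GO lines delimit statements and are stripped before execution:

```sql
-- +migrate Up
CREATE TABLE accounts (id INT NOT NULL);
GO
CREATE TABLE sessions (id INT NOT NULL);
GO

-- +migrate Down
DROP TABLE sessions;
GO
DROP TABLE accounts;
GO
```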

If you have complex statements which contain semicolons, use StatementBegin and StatementEnd to indicate boundaries:

-- +migrate Up
CREATE TABLE people (id int);

-- +migrate StatementBegin
CREATE OR REPLACE FUNCTION do_something()
returns void AS $$
DECLARE
  create_query text;
BEGIN
  -- Do something here
END;
$$
language plpgsql;
-- +migrate StatementEnd

-- +migrate Down
DROP FUNCTION do_something();
DROP TABLE people;

The order in which migrations are applied is defined through the filename: sql-migrate will sort migrations based on their name. It's recommended to use an increasing version number or a timestamp as the first part of the filename.

Normally each migration is run within a transaction in order to guarantee that it is fully atomic. However some SQL commands (for example creating an index concurrently in PostgreSQL) cannot be executed inside a transaction. In order to execute such a command in a migration, the migration can be run using the notransaction option:

-- +migrate Up notransaction
CREATE UNIQUE INDEX CONCURRENTLY people_unique_id_idx ON people (id);

-- +migrate Down
DROP INDEX people_unique_id_idx;

Embedding migrations with embed

If you like your Go applications self-contained (that is: a single binary): use embed to embed the migration files.

Just write your migration files as usual, as a set of SQL files in a folder.

Import the embed package into your application and point it to your migrations:

import "embed"

//go:embed migrations/*
var dbMigrations embed.FS

Use the EmbedFileSystemMigrationSource in your application to find the migrations:

migrations := migrate.EmbedFileSystemMigrationSource{
	FileSystem: dbMigrations,
	Root:       "migrations",
}

Other options such as packr or go-bindata are no longer recommended.

Embedding migrations with libraries that implement http.FileSystem

You can also embed migrations with any library that implements http.FileSystem, like vfsgen, parcello, or go-resources.

migrationSource := &migrate.HttpFileSystemMigrationSource{
    FileSystem: httpFS,
}

Extending

Adding a new migration source means implementing MigrationSource.

type MigrationSource interface {
    FindMigrations() ([]*Migration, error)
}

The resulting slice of migrations will be executed in the given order, so it should usually be sorted by the Id field.
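A minimal sketch of a custom source, using a stand-in Migration struct so the example is self-contained and runnable (the real Migration and MigrationSource types live in the migrate package and have more fields):

```go
package main

import (
	"fmt"
	"sort"
)

// Migration is a stand-in for migrate.Migration, for illustration only.
type Migration struct {
	Id string
	Up []string
}

// sliceSource serves a hardcoded list of migrations; a real implementation
// would satisfy migrate.MigrationSource with the library's types.
type sliceSource struct {
	migrations []*Migration
}

// FindMigrations returns the migrations sorted by Id so they always
// run in a deterministic order.
func (s *sliceSource) FindMigrations() ([]*Migration, error) {
	out := make([]*Migration, len(s.migrations))
	copy(out, s.migrations)
	sort.Slice(out, func(i, j int) bool { return out[i].Id < out[j].Id })
	return out, nil
}

func main() {
	src := &sliceSource{migrations: []*Migration{
		{Id: "2_records", Up: []string{"CREATE TABLE records (id int)"}},
		{Id: "1_initial", Up: []string{"CREATE TABLE people (id int)"}},
	}}
	ms, _ := src.FindMigrations()
	fmt.Println(ms[0].Id, ms[1].Id)
	// prints: 1_initial 2_records
}
```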

Usage with sqlx

This library is compatible with sqlx. When calling migrate just dereference the DB from your *sqlx.DB:

n, err := migrate.Exec(db.DB, "sqlite3", migrations, migrate.Up)
                    //   ^^^ <-- Here db is a *sqlx.DB, the db.DB field is the plain sql.DB
if err != nil {
    // Handle errors!
}

Questions or Feedback?

You can use Github Issues for feedback or questions.

License

This library is distributed under the MIT license.

sql-migrate's People

Contributors

akhorsi, bbkane, daved, dependabot[bot], dixonwille, fanpei91, freakingawesome, guettli, gunsluo, hellais, hrmsimon, insidieux, ironsmile, jufemaiz, lluchs, marema31, mrxavier, muya, nicklaw5, peterldowns, posener, pschyska, qhchen88, rubenv, russellhaering, scop, shawnmilo, t-oki, yageek, zzet


sql-migrate's Issues

Expand Env variable in datasource

I wish sql-migrate could expand an env variable in the datasource. It would be a single-line change and should have no side-effects. I use it to keep the password out of the config file.

env.DataSource = os.ExpandEnv(env.DataSource)

schema.table in table setting

It would be nice if I could put the migrations table in a schema other than the default (public in PostgreSQL):

production:
    dialect: postgres
    table: someschema.migrations
    ...

Note 1050: Table 'gorp_migrations' already exists

I have a program that, after connecting to the database, attempts to apply any migrations it has not yet applied. However, when I run my program I get:

Note 1050: Table 'gorp_migrations' already exists

The migrations are stored using bindata. Here is the function definition:

// Open returns a DB reference for a data source.
func Open(dataSourceName string) (*DB, error) {
    db, err := sql.Open("mysql", dataSourceName)

    if err != nil {
        return nil, err
    }

    err = db.Ping()
    if err != nil {
        return nil, err
    }

    migrations := &migrate.AssetMigrationSource{
        Asset:    Asset,
        AssetDir: AssetDir,
        Dir:      "migrations",
    }

    n, err := migrate.Exec(db, "mysql", migrations, migrate.Up)
    if err != nil {
        return nil, err
    }
    fmt.Printf("Applied %d migrations!\n", n)

    return &DB{db}, nil
}

Any pointers on what I'm doing wrong? It looks like the call to CreateTablesIfNotExists() isn't working correctly; it appears to be encountering an error and returning early.

Missing up migrations applied during "down"

I had a situation with switching branches where the latest migration in the DB was "newer" than 2 migration files that had not yet been applied.

When running a "Down" migration, I noticed the "Up" code for both migrations was unintentionally applied and even worse they weren't recorded in the migrations table.

I tracked it down to the PlanMigration() function where it adds the results of ToCatchup(), regardless of the migration direction.

Obviously there are a few issues with this. 1. They are up migrations being applied during a down. 2. The "catchup" migrations aren't considered in the check against the "max" parameter (which is 1 in my case). 3. The up migrations aren't recorded in the DB at all, leaving us in a broken state.

I'm not fluent in Go quite yet, otherwise I'd make a PR... but maybe the "catchup" logic should be ignored altogether for Down migrations.

Print the migration name when errors occur

It currently looks something like this when errors occur while running migrations (using mysql):

Migration failed: Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[...]'

It would be helpful if the migration name would be printed as well.

Installation times out because of go-yaml

This isn't a sql-migrate issue per se, but it may be worth mentioning in the documentation.

I went to install sql-migrate per in the instructions:

$ go get github.com/rubenv/sql-migrate/...

but got the following errors:

# cd .; git clone https://gopkg.in/gorp.v1 /tmp/gopath-godep/src/gopkg.in/gorp.v1
Cloning into '/tmp/gopath-godep/src/gopkg.in/gorp.v1'...
fatal: unable to access 'https://gopkg.in/gorp.v1/': Failed to connect to gopkg.in port 443: Operation timed out
package gopkg.in/gorp.v1: exit status 128
# cd .; git clone https://gopkg.in/yaml.v1 /tmp/gopath-godep/src/gopkg.in/yaml.v1
Cloning into '/tmp/gopath-godep/src/gopkg.in/yaml.v1'...
fatal: unable to access 'https://gopkg.in/yaml.v1/': Failed to connect to gopkg.in port 443: Operation timed out 
package gopkg.in/yaml.v1: exit status 128

I was able to resolve it on my machine by doing:

$ git clone --branch v1 https://github.com/go-yaml/yaml $GOPATH/src/gopkg.in/yaml.v1
$ go get github.com/rubenv/sql-migrate/...

Now it installs. However, that also means gorp suddenly stopped timing out, so some people may need to grab gorp manually: git clone --branch v1 https://github.com/go-gorp/gorp $GOPATH/src/gopkg.in/gorp.v1. Reproducing this error might be difficult, depending on why it's timing out.

bad connection handling

Hi

I have a big table. Manually executing ALTER TABLE <table> ADD takes around 5 minutes. When I run the same ALTER TABLE via sql-migrate, it fails after about 41 seconds with Migration failed: driver: bad connection handling 20170602144919-test.sql, yet I can see through the mysql CLI that the process is still running.

These are my MySQL timeout settings:

+-----------------------------+----------+
| Variable_name               | Value    |
|-----------------------------+----------|
| connect_timeout             | 10       |
| delayed_insert_timeout      | 300      |
| innodb_flush_log_at_timeout | 1        |
| innodb_lock_wait_timeout    | 50       |
| innodb_rollback_on_timeout  | OFF      |
| interactive_timeout         | 28800    |
| lock_wait_timeout           | 31536000 |
| net_read_timeout            | 30       |
| net_write_timeout           | 60       |
| rpl_stop_slave_timeout      | 31536000 |
| slave_net_timeout           | 3600     |
| wait_timeout                | 28800    |
+-----------------------------+----------+

How can I fix the issue?

support transactions for migrations

Hey,

I'm currently migrating my migrations from theory/sqitch to sql-migrate, mostly because of the binary embedding support.

I ran into problems because all my migrations look like this:

BEGIN;

SET client_min_messages = 'warning';

CREATE TYPE xxx 
CREATE TABLE public.xxx ();

COMMIT;

Allow env variables in config file

Allow for using env variables inside the config file for situations where you don't want the datasource revealed (eg. in production for security purposes).

Wrong migration order when using bindata

// Code generated by go-bindata.
// sources:
// migrations/20170806175335-staff_userid_unique.sql
// migrations/20171305141246-create_noticeboard.sql

The create_noticeboard migration file was created before the staff_userid_unique file, but when generating bindata, staff_userid_unique comes before create_noticeboard because of the timestamp. How do I fix this?

sql-migrate status fails if old migration file not found

  1. sql-migrate up
  2. remove sql-file
  3. sql-migrate status
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x10 pc=0x40b6be]

goroutine 1 [running]:
panic(0x8b2dc0, 0xc820010140)
        /usr/local/go/src/runtime/panic.go:481 +0x3e6
main.(*StatusCommand).Run(0xe65b08, 0xc82000a590, 0x0, 0x0, 0x9e0490)
        /home/d.telyukh/go/src/github.com/rubenv/sql-migrate/sql-migrate/command_status.go:86 +0x8be
github.com/mitchellh/cli.(*CLI).Run(0xc8203fa000, 0xc82001c450, 0x0, 0x0)
        /home/d.telyukh/go/src/github.com/mitchellh/cli/cli.go:154 +0x56e
main.realMain(0xc820000180)
        /home/d.telyukh/go/src/github.com/rubenv/sql-migrate/sql-migrate/main.go:39 +0x39e
main.main()
        /home/d.telyukh/go/src/github.com/rubenv/sql-migrate/sql-migrate/main.go:11 +0x18

undefined: os.Executable

Hi, I'm installing sql-migrate via go get and am running into an error:

โฏ go version
go version go1.7.6 linux/amd64
โฏ go get github.com/rubenv/sql-migrate/...
# github.com/posener/complete/cmd/install
go/src/github.com/posener/complete/cmd/install/install.go:75: undefined: os.Executable
โฏ echo $?
2

This just started happening recently (past day or two).

Made a related issue here: posener/complete#43 (feel free to close this one if you want)

support for migrations wrapped in transactions [postgresql]

Hey,

I'm currently migrating my migrations from theory/sqitch to sql-migrate, mostly because I want to ship the migrations /w my binary together.

I ran into problems because all my migrations look like this:

BEGIN;

SET client_min_messages = 'warning';

CREATE TYPE xxx AS ENUM ('a','b');
CREATE TABLE public.xxx ();

COMMIT;
Migration failed: pq: unexpected transaction status idle

Is this any issue with sql-migrate or the postgresql library in use? If you could point me into the right direction I could start working on a pull request so migrations are properly supported.

Right now I have to comment out the BEGIN; … COMMIT; statements, then everything starts working.

thanks!

Behavior when unknown migration

sql-migrate behaves badly when the database has migrations "unknown" to the current run of sql-migrate. This is most likely to happen when a diverged branch applied its migrations to a database while another branch also applied its own migrations to the same database.

For example:

  1. branch A: 1, 2, 3
  2. branch B: 1, 2, 4
  3. branch B applies against DB. State is now "1, 2, 4"
  4. branch A tries to apply against DB. Migration "4" is unknown.

See existing issue: #37

How should sql-migrate behave when asked to perform a migration against a database that has unknown migrations? I can think of two reasonable behaviors:

  1. Complain that unknown migrations exist. Require the user to pass a -f force flag to complete the migrations. I'm not sure how the library should change, but I could work up some suggestions.
  2. Do the best we can. When migrating up, ignore the unknown migrations. When migrating down, fail upon encountering an unknown migration.

(For option 2, we could also allow migrating down by adding a migrate_down column to the migrations table and storing the SQL at migrate up time. That doesn't help any existing installations, but would be a way forward.)

Which behavior seems best? I'm in favor of 2, but see the merits of 1.

DELIMETER breaks migrations?

Steps to reproduce:

-- +migrate StatementBegin
DELIMITER //
-- +migrate StatementEnd
-- +migrate StatementBegin
CREATE PROCEDURE add_system_role
  (IN name VARCHAR(20), IN display_name VARCHAR(20), IN priority INT UNSIGNED)
  BEGIN
    INSERT INTO `roles` VALUES (NULL, name, display_name, NULL, priority, DEFAULT, DEFAULT);
  END //
-- +migrate StatementEnd
-- +migrate StatementBegin
DELIMITER ;
-- +migrate StatementEnd

Expected: Procedure is created.

Actual: Failed to migrate the database:

Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DELIMITER //\n-- +migrate StatementEnd' at line 2 handling 1_init.sql

Version: 6f47575

We are using jteeuwen/go-bindata.

Panic when using dialect: mysql

With dialect: mysql I get the following when running sql-migrate up:

panic: gorp - undefined MySQLDialect.Engine, MySQLDialect.Encoding. Check that your MySQLDialect was correctly initialized when declared.

Looking at the source, the engine and encoding need to be set somehow. Am I missing some option?

Could not create trigger on MySQL

-- +migrate Up

CREATE TABLE device_groups ...
CREATE TABLE devices ...

-- +migrate StatementBegin
delimiter |

CREATE TRIGGER update_devicegroup_devices_on_insert AFTER INSERT ON devices
FOR EACH ROW
BEGIN
UPDATE device_groups SET devices = (SELECT COUNT(*) FROM devices WHERE device_group_id = NEW.device_group_id) WHERE id = NEW.device_group_id;
END
|

delimiter ;
-- +migrate StatementEnd

-- +migrate Down
DROP TABLE IF EXISTS devices CASCADE;
DROP TABLE IF EXISTS device_groups CASCADE;

Using bindata

Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'delimiter |

CREATE TRIGGER update_devicegroup_devices_on_insert AFTER INSERT ON' at line 2 handling 003_reinit.sql

`new command`: month and day exchanged

Hi.

Please tell me whether this is a bug or intended behavior.

I tried.

$ sql-migrate new -config=db/dbconfig.yml create_posts

output:

-rw-r--r--  1 user  staff  34  3 15 02:48 20171503024850-create_posts.sql

expect:

-rw-r--r--  1 user  staff  34  3 15 02:48 20170315024850-create_posts.sql

Why are the month and the day exchanged?

Naming migrations

Hello,

The naming of the migrations with a timestamp creates usability issues. The Django model of prefixing them with a serially incremented 4-digit number makes figuring out which migration to edit very easy. Compare that to figuring out which of the 5-10 migrations I've created recently by staring at very similar dates with no easily distinguished characteristics. For now I rename the migrations on my own, but it would be a nice boon to have this built in.

0001-initial.sql 
0002-users.sql 
0003-events.sql 
0004-posts.sql

Instead of:

20170711231210-init.sql
20170711231622-users.sql
20170711231630-events.sql
20170711231637-posts.sql
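The serial scheme described above is easy to script outside the tool. A tiny shell helper (hypothetical, not part of sql-migrate) that formats the next name:

```shell
# Generate the next serially numbered migration name (illustrative only;
# `next` would come from counting existing files in the migrations dir).
next=5
name="posts"
file="$(printf '%04d-%s.sql' "$next" "$name")"
echo "$file"
# prints: 0005-posts.sql
```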

Thanks

support to verify migrations

Basically an open discussion:
What do you think about adding support to verify migrations?

theory/sqitch supports this - just a simple SQL script you write yourself to check, if a migration was properly applied.

This is especially useful if you depend on custom types, extensions, or refactored your migrations to check that everything is still fine.

Also, if you happen to write non atomic migrations which fail part way through, you could easily verify those errors.

Basically tests for your current schema, and not a table which lists the current (expected) state in terms of applied migrations.

I'm thinking something along the following lines:

-- +migrate Up
CREATE TABLE public.examples (
  id          SERIAL         PRIMARY KEY,
);

-- +migrate Down
DROP TABLE public.examples;

-- +migrate Verify
BEGIN;
SELECT id FROM public.examples WHERE FALSE;
ROLLBACK;

Besides sql-migrate status you could also run sql-migrate verify to check every verification step.

What do you think?

Running status command after MemoryMigrationSource fails

The failure is:

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x10 pc=0x40b8b7]

goroutine 1 [running]:
panic(0x8b3bc0, 0xc820010160)
    /usr/local/go/src/runtime/panic.go:464 +0x3e6
main.(*StatusCommand).Run(0xe64ee8, 0xc82000a5d0, 0x0, 0x0, 0x9e0718)
    /home/tim/dev/go/src/github.com/rubenv/sql-migrate/sql-migrate/command_status.go:86 +0xb87
github.com/mitchellh/cli.(*CLI).Run(0xc8203ee000, 0xc820012510, 0x0, 0x0)
    /home/tim/dev/go/src/github.com/mitchellh/cli/cli.go:153 +0x56e
main.realMain(0xc820000180)
    /home/tim/dev/go/src/github.com/rubenv/sql-migrate/sql-migrate/main.go:39 +0x39e
main.main()
    /home/tim/dev/go/src/github.com/rubenv/sql-migrate/sql-migrate/main.go:11 +0x18

Perhaps since those records exist in the database (even though they don't in the file system) we should just consider them as Migrated = true and add a new row?

syntax error at or near ";" with empty migration

a completely empty file fails for me in sql-migrate down:

Migration failed: pq: syntax error at or near ";" handling 20161007095606_create_tunnel_keys.sql

it would be great if there was a verbose flag that prints what sql-migrate is trying to execute

cannot cross compile with sqlite support

First, thanks for sql-migrate. It's a great tool. We have been using it so far with success inside docker containers, but because sqlite can't cross compile, we've had to write custom mains (using sql-migrate in library form without sqlite included). Is there possibly some strategy (besides just forking and commenting out sqlite) that could be used to allow cross compiles of the main sql-migrate program when we don't need sqlite support? I'm compiling from a Mac. See also mattn/go-sqlite3#384

Example of failed compile:

GOOS=linux GOARCH=amd64 go build -o casey-test sql-migrate/*.go
# github.com/mattn/go-sqlite3
../../mattn/go-sqlite3/sqlite3_go18.go:18: undefined: SQLiteConn
../../mattn/go-sqlite3/sqlite3_go18.go:26: undefined: SQLiteConn
../../mattn/go-sqlite3/sqlite3_go18.go:27: undefined: namedValue
../../mattn/go-sqlite3/sqlite3_go18.go:29: undefined: namedValue
../../mattn/go-sqlite3/sqlite3_go18.go:35: undefined: SQLiteConn
../../mattn/go-sqlite3/sqlite3_go18.go:36: undefined: namedValue
../../mattn/go-sqlite3/sqlite3_go18.go:44: undefined: SQLiteConn
../../mattn/go-sqlite3/sqlite3_go18.go:49: undefined: SQLiteConn
../../mattn/go-sqlite3/sqlite3_go18.go:54: undefined: SQLiteStmt
../../mattn/go-sqlite3/sqlite3_go18.go:63: undefined: SQLiteStmt
../../mattn/go-sqlite3/sqlite3_go18.go:36: too many errors

We don't actually use sqlite and if you comment out this line below, the cross compile works.

diff --git a/sql-migrate/config.go b/sql-migrate/config.go
index 6790606..76af76d 100644
--- a/sql-migrate/config.go
+++ b/sql-migrate/config.go
@@ -14,7 +14,7 @@ import (

        _ "github.com/go-sql-driver/mysql"
        _ "github.com/lib/pq"
-       _ "github.com/mattn/go-sqlite3"
+//     _ "github.com/mattn/go-sqlite3"
 )

 var dialects = map[string]gorp.Dialect{

Difference from goose

Hi, looking at the surface, it's a nice lib, but this seems to be very similar to goose.
What's the main difference between the two?

Print the migration name when parsing error occurs

In the same spirit than "Print the migration name when errors occur #3"

It would be great to have the migration file name displayed when there's an open/parsing error.
The end-user would then be able to directly go to the invalid file.

Use case : users that already kept their migration history, and that want to migrate it to sql-migrate formalism

Weird Behavior with Migrations From Multiple Directories

Scenario: I have one directory with a schema migration. I have a second directory which contains a migration for setting up integration test data.

What happens: The schema migration is run but the test data migration is not. It just ignores it. If I truncate the migrations table between the two migration runs, the second will work. It seems like the tool is ignoring the name of the migration and just assuming that because one migration has been run there must be nothing to do.

AFAIK this bit of code is at fault:

migrate.go:407

	var index = -1
	if current != "" {
		for index < len(migrations)-1 {
			index++
			if migrations[index].Id == current {
				break
			}
		}
	}

It will go through the existing migrations looking for the start point but never find it, meaning nothing is applied. I think to solve this in general it would need to be a bit smarter and work based on the list of known migrations. So for example, down limited to 2 wouldn't just take the two off the top; it would down the last two of the discovered migrations (even if there are newer ones in the db). It might not even be worthwhile, since I guess very few people would want multiple directories of migrations.

Either way it could be worth noting in the README there is limitation here.

SSL connections with client certs

Currently, I haven't found a way to connect to a MySQL server that requires client cert-based authentication. Is there a way to do this ? If not, I'd be happy to contribute a PR after discussing how we want this to be done.

Ability to extend the sql-migrate#Migration

Currently https://godoc.org/github.com/rubenv/sql-migrate#Migration has Up, Down []string fields, which I guess are read and run by migrate.Exec.
It would be nice if Migration were an interface; then we could do updates that need more than an SQL script. For example:

type ComplexPwdGenMigration struct {
	DB *sql.DB
}

func (m *ComplexPwdGenMigration) Up() []string {
	// then something like:
	// people := m.DB.GetAll()
	// for each person: person.password = veryComplexPasswordGenerate(person.id); m.DB.Save(person)
	return nil
}

// ... and implement the rest of the interface's methods:
migrations := &MixMigrationSource{
	Migrations: []Migrator{
		&migrate.Migration{
			Id:   "123",
			Up:   []string{"CREATE TABLE people (id int, password var)"},
			Down: []string{"DROP TABLE people"},
		},
		&ComplexPwdGenMigration,
		&AnotherComplexProcessMigration,
		&migrate.Migration{
			Id:   "123",
			Up:   []string{"...."},
			Down: []string{"....."},
		},
	},
}
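
A hedged sketch of what such an interface could look like (Migrator, sqlMigration and complexPwdGen are hypothetical names, not part of the sql-migrate API); the point is that a Go-code migration can compute its statements instead of embedding static SQL:

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical Migrator interface: both SQL-only and Go-code
// migrations satisfy it, so they can be mixed in one source list.
type Migrator interface {
	ID() string
	Up() []string // statements to execute for this migration
}

// sqlMigration mirrors today's static Up string slice.
type sqlMigration struct {
	id string
	up []string
}

func (m sqlMigration) ID() string   { return m.id }
func (m sqlMigration) Up() []string { return m.up }

// complexPwdGen computes its statements in Go at migration time.
type complexPwdGen struct{ userIDs []int }

func (m complexPwdGen) ID() string { return "124_passwords" }
func (m complexPwdGen) Up() []string {
	var stmts []string
	for _, id := range m.userIDs {
		pwd := fmt.Sprintf("pw-%d", id) // stand-in for a real generator
		stmts = append(stmts, fmt.Sprintf(
			"UPDATE people SET password = '%s' WHERE id = %d", pwd, id))
	}
	return stmts
}

// plan flattens the mixed list into the statements a runner would execute.
func plan(ms []Migrator) []string {
	var all []string
	for _, m := range ms {
		all = append(all, m.Up()...)
	}
	return all
}

func main() {
	ms := []Migrator{
		sqlMigration{id: "123_people", up: []string{"CREATE TABLE people (id int, password varchar)"}},
		complexPwdGen{userIDs: []int{1, 2}},
	}
	fmt.Println(strings.Join(plan(ms), ";\n"))
}
```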

release tags pls

Tagging stable versions makes package management easier. Could you please add tags to your project?

Thanks!

Defining functions ?

I have the following migration file; I can't seem to get CREATE FUNCTION to work with multi-statement functions at all, even when using -- +migrate StatementBegin and -- +migrate StatementEnd:

-- +migrate Up
-- UUID functions

-- +migrate StatementBegin
DELIMITER //
CREATE FUNCTION uuid_to_binary($Data VARCHAR(36)) RETURNS binary(16)
DETERMINISTIC
NO SQL
BEGIN
    DECLARE $Result BINARY(16) DEFAULT NULL;
    IF $Data IS NOT NULL THEN
        SET $Data = REPLACE($Data,'-','');
        SET $Result =
            CONCAT( UNHEX(SUBSTRING($Data,7,2)), UNHEX(SUBSTRING($Data,5,2)),
                    UNHEX(SUBSTRING($Data,3,2)), UNHEX(SUBSTRING($Data,1,2)),
                    UNHEX(SUBSTRING($Data,11,2)),UNHEX(SUBSTRING($Data,9,2)),
                    UNHEX(SUBSTRING($Data,15,2)),UNHEX(SUBSTRING($Data,13,2)),
                    UNHEX(SUBSTRING($Data,17,16)));
    END IF;
    RETURN $Result;
END//
-- +migrate StatementEnd

-- +migrate StatementBegin
DELIMITER //
CREATE FUNCTION binary_to_uuid($Data BINARY(16)) RETURNS char(36) CHARSET utf8
DETERMINISTIC
NO SQL
BEGIN
    DECLARE $Result CHAR(36) DEFAULT NULL;
    IF $Data IS NOT NULL THEN
        SET $Result =
            CONCAT(
                HEX(SUBSTRING($Data,4,1)), HEX(SUBSTRING($Data,3,1)),
                HEX(SUBSTRING($Data,2,1)), HEX(SUBSTRING($Data,1,1)), '-',
                HEX(SUBSTRING($Data,6,1)), HEX(SUBSTRING($Data,5,1)), '-',
                HEX(SUBSTRING($Data,8,1)), HEX(SUBSTRING($Data,7,1)), '-',
                HEX(SUBSTRING($Data,9,2)), '-', HEX(SUBSTRING($Data,11,6)));
    END IF;
    RETURN $Result;
END//
-- +migrate StatementEnd

CREATE FUNCTION ordered_uuid() RETURNS binary(16)
NO SQL
RETURN uuid_to_binary(UUID());


-- +migrate Down
DROP FUNCTION ordered_uuid;
DROP FUNCTION uuid_to_binary;
DROP FUNCTION binary_to_uuid;
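
A likely factor: DELIMITER is a directive of the mysql command-line client, not SQL, so the server rejects it when a driver sends it verbatim. With -- +migrate StatementBegin/StatementEnd the delimiter change shouldn't be needed anyway, since semicolons inside the marked block don't split the statement. An illustrative sketch of that splitting behaviour (not the library's actual sqlparse code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitStatements sketches how StatementBegin/StatementEnd markers
// change splitting: inside a marked block, semicolons are kept as part
// of one statement, so MySQL's DELIMITER trick is unnecessary.
func splitStatements(script string) []string {
	var stmts []string
	var buf strings.Builder
	inBlock := false
	for _, line := range strings.Split(script, "\n") {
		trimmed := strings.TrimSpace(line)
		switch trimmed {
		case "-- +migrate StatementBegin":
			inBlock = true
			continue
		case "-- +migrate StatementEnd":
			inBlock = false
			stmts = append(stmts, strings.TrimSpace(buf.String()))
			buf.Reset()
			continue
		}
		buf.WriteString(line + "\n")
		if !inBlock && strings.HasSuffix(trimmed, ";") {
			stmts = append(stmts, strings.TrimSpace(buf.String()))
			buf.Reset()
		}
	}
	return stmts
}

func main() {
	script := `-- +migrate StatementBegin
CREATE FUNCTION f() RETURNS int
BEGIN
    RETURN 1;
END;
-- +migrate StatementEnd
SELECT 1;`
	for i, s := range splitStatements(script) {
		fmt.Printf("-- statement %d --\n%s\n", i+1, s)
	}
}
```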

Add a 'new' command

Just wondered if others would find this useful...

I'd like to see a new command so when using the command line I can do

sql-migrate new udf-ParseDate

and have it create a templated file in the ./migration folder (or whatever is set in the config) named <current epoch>-<name>.sql, e.g. 1443610373-udf-ParseDate.sql

The template would have the -- +migrate Up and -- +migrate Down sections defined.

An enhancement to this would be to take the name, scan for the last migration matching it, and populate the template's migrate Down section from that migration's migrate Up script.

Comments appreciated, if people like this I'll look at PR.

Add lock for concurrent migrations running from several nodes

Hi,

could you please add a lock for migrations? Our API runs migrations right before initialization, and it runs on more than one node, so we get conflicts because the migrations are applied at the same time from different nodes. That's fine for CREATE IF NOT EXISTS or ALTER, but not so good for data fixes like a simple INSERT of a new value.

Please consider acquiring access using GET_LOCK or a similar approach, thanks.

http://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_get-lock
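
The wrapping pattern could look like this (a sketch with the lock injected as a function, since the real acquire/release would be database calls such as MySQL's SELECT GET_LOCK('migrations', 30) and SELECT RELEASE_LOCK('migrations')):

```go
package main

import (
	"errors"
	"fmt"
)

// withLock runs the migration step only while a cross-node lock is
// held. The acquire function is injected; in production it would issue
// the database's advisory-lock statements and return a release func.
func withLock(acquire func() (func(), error), run func() error) error {
	release, err := acquire()
	if err != nil {
		return fmt.Errorf("acquire migration lock: %w", err)
	}
	defer release()
	return run()
}

func main() {
	held := false
	acquire := func() (func(), error) {
		if held {
			return nil, errors.New("lock busy")
		}
		held = true
		return func() { held = false }, nil
	}
	err := withLock(acquire, func() error {
		fmt.Println("running migrations while lock held:", held)
		return nil
	})
	fmt.Println("err:", err, "| held afterwards:", held)
}
```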

Allow overriding the migrations table name

It would be pretty cool if the table name could be overridden, either in the CLI interface or by passing in some form of settings struct within the library interface.

sql-migrate up --migration-table my_migration_table
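
For the config-file side, the table setting already exists and defaults to gorp_migrations; a per-environment override could look like this (values are illustrative):

development:
  dialect: postgres
  datasource: dbname=myapp sslmode=disable
  dir: migrations
  table: my_migration_table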

Please use gopkg.in/gorp.v1

I see that you are importing gorp from github.com/go-gorp/gorp.
This is the unstable "bleeding edge" version. We are working on breaking changes; it is likely that your program will fail to compile unless you update your code to reflect them.

Please use gopkg.in/gorp.v1 as import path, it provides the last stable gorp v1 release and will not break (unless a breaking change is required to fix a security issue).

When the changes to the master branch are complete, v2 will be released. See go-gorp/gorp#270 for more information.
Thanks.

How to write 001.sql for existing environments?

Hi all,

Is there a best practice for retrofitting this system onto existing environments?
For example, the initial state will be taken from our prod env.
So the 001 script will recreate the entire schema for qa, stage, etc., but ideally it would not be run on prod (as the schema is already correct).

I suspect the answer is a different script for each env, but this means maintaining a script for every env; I would prefer to have a single set for all envs.

Any suggestions would be welcome.
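
One approach (an assumption, not documented behaviour) is to baseline prod: record the 001 migration as applied without executing it, so up skips it there while still running it everywhere else. With the default bookkeeping table this could look like the following, where the column names are assumed from the default gorp_migrations layout and '001_initial.sql' is a hypothetical id:

-- Hypothetical baseline for prod: mark 001 as applied without running it.
INSERT INTO gorp_migrations (id, applied_at)
VALUES ('001_initial.sql', NOW());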

Scripts not executing in order

Anyone have any idea why the scripts appear to be executing out of order?
After running an up which fails and then sql-migrate status -env="schema", I get the following very odd status:

+------------------------------+-------------------------------+
|          MIGRATION           |            APPLIED            |
+------------------------------+-------------------------------+
| 000000001-bs.sql             | no                            |
| 000000002-cdata.sql          | no                            |
| 000000003-cdataau.sql        | no                            |
| 000000004-colorblindlist.sql | no                            |
| 000000005-commission.sql     | no                            |
| 000000006-commons.sql        | 2017-09-16 08:30:18 +0000 UTC |
| 000000007-eligible_items.sql | 2017-09-16 08:30:18 +0000 UTC |
| 000000008-flags_view.sql     | 2017-09-16 08:30:18 +0000 UTC |
| 000000009-orders.sql         | no                            |
| 000000010-items.sql          | no                            |
| 000000011-attention.sql      | no                            |
| 000000012-itemexpress.sql    | no                            |
+------------------------------+-------------------------------+
I am generating the migration scripts automatically, but I don't think there is anything out of order in my naming scheme.
Anyone have an idea?

Comments should be ignored for detecting commands boundaries

I have a SQL file that ends with a commented out line. SQL migration fails with the following error:

I0404 16:17:30.659541   22029 error.go:293] Error:INTERNAL ERROR: The last statement must be ended by a semicolon or '-- +migrate StatementEnd' marker.
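
Until comments are ignored at statement boundaries, one workaround is to strip trailing comment-only lines before handing the script over; an illustrative sketch (not the library's parser):

```go
package main

import (
	"fmt"
	"strings"
)

// stripTrailingComments drops blank and comment-only lines from the end
// of a migration script, since the parser treats trailing content after
// the last semicolon (even a bare comment) as an unterminated statement.
// Lines starting with "-- +migrate" are kept: they are directives.
func stripTrailingComments(script string) string {
	lines := strings.Split(script, "\n")
	end := len(lines)
	for end > 0 {
		t := strings.TrimSpace(lines[end-1])
		if t == "" || (strings.HasPrefix(t, "--") && !strings.HasPrefix(t, "-- +migrate")) {
			end--
			continue
		}
		break
	}
	return strings.Join(lines[:end], "\n")
}

func main() {
	script := "SELECT 1;\n-- commented out line\n"
	fmt.Printf("%q\n", stripTrailingComments(script))
}
```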

Patch won't apply

Hi.

We've got a weird situation. The following happens for every developer on my team except me; the patches work fine on my installation.

This patch, or any patch really, won't apply, but the down statement will execute (even though the file isn't in the gorp_migrations table).

Background info

  • Postgres backend
  • Latest sql-migrate release
  • The sql-migrate binary is distributed as part of the repository all migrations are in. People simply re-use the bundled (stand-alone) binary
  • We have multiple migrations directories.
  • Everything works fine when creating a new database, all patches work as intended.

The patch

It can be anything, really. Single lines, no lines, with or without statement begin/end tags: it doesn't make a difference.

-- +migrate Up
-- +migrate StatementBegin

CREATE TABLE foo (
  bar varchar(50)
);

-- +migrate StatementEnd

-- +migrate Down
-- +migrate StatementBegin

DROP TABLE foo;

-- +migrate StatementEnd

The sql-migrate up dtruss output

Command: sudo dtruss ../sql-migrate up

Applied 0 migrations
SYSCALL(args)        = return
thread_selfid(0x0, 0x0, 0x0)         = 513675 0
open(".\0", 0x0, 0x1)        = 3 0
fstat64(0x3, 0x7FFF5FBFE590, 0x1)        = 0 0
fcntl(0x3, 0x32, 0x7FFF5FBFE830)         = 0 0
close(0x3)       = 0 0
stat64("/Users/x/y/DBMigrate/content\0", 0x7FFF5FBFE500, 0x7FFF5FBFE830)         = 0 0
[..]
read(0x5, "N\0", 0x1000)         = 141 0
open("migrations/\0", 0x1000000, 0x0)        = 7 0
getdirentries64(0x7, 0xC4201E8000, 0x1000)       = 200 0
getdirentries64(0x7, 0xC4201E8000, 0x1000)       = 0 0
lstat64("migrations//0_initial-content.sql\0", 0xC420068788, 0x1000)         = 0 0
[..]
lstat64("migrations//2_test.sql\0", 0xC420068928, 0x1000)        = 0 0
[..]
open("migrations/2_test.sql\0", 0x1000000, 0x0)      = 10 0
lseek(0xA, 0x0, 0x0)         = 0 0
read(0xA, "-- +migrate Up\n\nCREATE TABLE foo (\n\t  bar varchar(50)\n);\n\n-- +migrate Down\n\nDROP TABLE foo;\n\n\0", 0x1000)      = 93 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
read(0xA, "\0", 0xFA3)       = 0 0
lseek(0xA, 0x0, 0x0)         = 0 0
read(0xA, "-- +migrate Up\n\nCREATE TABLE foo (\n\t  bar varchar(50)\n);\n\n-- +migrate Down\n\nDROP TABLE foo;\n\n\0", 0x1000)      = 93 0
read(0xA, "\0", 0xFA3)       = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
write(0x5, "Q\0", 0x23)      = 35 0
read(0x5, "\0", 0x1000)      = -1 Err#35
kevent(0x6, 0x0, 0x0)        = 1 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
kevent(0x6, 0x0, 0x0)        = 1 0
read(0x5, "T\0", 0x1000)         = 475 0
write(0x1, "Applied 0 migrations\0", 0x14)       = 20 0
write(0x1, "\n\0", 0x1)      = 1 0

The sql-migrate down dtruss output

Migration failed: pq: table "foo" does not exist handling 2_test.sql
SYSCALL(args)        = return
thread_selfid(0x0, 0x0, 0x0)         = 514159 0
open(".\0", 0x0, 0x1)        = 3 0
fstat64(0x3, 0x7FFF5FBFE580, 0x1)        = 0 0
fcntl(0x3, 0x32, 0x7FFF5FBFE820)         = 0 0
close(0x3)       = 0 0
stat64("/Users/x/y/DBMigrate/content\0", 0x7FFF5FBFE4F0, 0x7FFF5FBFE820)         = 0 0
[..]
open("dbconfig.yml\0", 0x1000000, 0x0)       = 3 0
fstat64(0x3, 0xC42007C2A8, 0x0)      = 0 0
read(0x3, "development:\n    dialect: postgres\n    datasource: host=localhost user=postgres dbname=content sslmode=disable connect_timeout=10\n    dir: migrations/\n    table: gorp_migrations\n\nstaging:\n    dialect: pos", 0x45F)      = 607 0
read(0x3, "\0", 0x200)       = 0 0
close(0x3)       = 0 0
[..]
open("migrations/\0", 0x1000000, 0x0)        = 7 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
getdirentries64(0x7, 0xC420206000, 0x1000)       = 200 0
getdirentries64(0x7, 0xC420206000, 0x1000)       = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
lstat64("migrations//0_initial-content.sql\0", 0xC42007C6B8, 0x1000)         = 0 0
[..]
lstat64("migrations//2_test.sql\0", 0xC42007C928, 0x1000)        = 0 0
[..]
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
read(0x9, "\0", 0xF52)       = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
open("migrations/2_test.sql\0", 0x1000000, 0x0)      = 10 0
lseek(0xA, 0x0, 0x0)         = 0 0
read(0xA, "-- +migrate Up\n\nCREATE TABLE foo (\n\t  bar varchar(50)\n);\n\n-- +migrate Down\n\nDROP TABLE foo;\n\n\0", 0x1000)      = 93 0
read(0xA, "\0", 0xFA3)       = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
lseek(0xA, 0x0, 0x0)         = 0 0
read(0xA, "-- +migrate Up\n\nCREATE TABLE foo (\n\t  bar varchar(50)\n);\n\n-- +migrate Down\n\nDROP TABLE foo;\n\n\0", 0x1000)      = 93 0
read(0xA, "\0", 0xFA3)       = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
write(0x5, "Q\0", 0x23)      = 35 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
read(0x5, "\0", 0x1000)      = -1 Err#35
kevent(0x6, 0x0, 0x0)        = 1 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
kevent(0x6, 0x0, 0x0)        = 1 0
read(0x5, "T\0", 0x1000)         = 475 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
write(0x5, "Q\0", 0xB)       = 11 0
read(0x5, "\0", 0x1000)      = -1 Err#35
kevent(0x6, 0x0, 0x0)        = 1 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
kevent(0x6, 0x0, 0x0)        = 1 0
read(0x5, "C\0", 0x1000)         = 17 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
write(0x5, "Q\0", 0x28)      = 40 0
read(0x5, "\0", 0x1000)      = -1 Err#35
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
kevent(0x6, 0x0, 0x0)        = 1 0
kevent(0x6, 0x0, 0x0)        = 1 0
read(0x5, "E\0", 0x1000)         = 97 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
write(0x5, "Q\0", 0xE)       = 14 0
read(0x5, "\0", 0x1000)      = -1 Err#35
kevent(0x6, 0x0, 0x0)        = 1 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
kevent(0x6, 0x0, 0x0)        = 1 0
read(0x5, "C\0", 0x1000)         = 20 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
write(0x1, "Migration failed: pq: table \"foo\" does not exist handling 2_test.sql\0", 0x44)         = 68 0
write(0x1, "\n\0", 0x1)      = 1 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = 0 0
select(0x0, 0x0, 0x0, 0x0, 0x700000080DC8)       = -1 Err#4

Questions:

  1. Why is a down statement issued when the file wasn't applied in the first place?
  2. Why doesn't the up statement apply?

Build optimizations?

Of all the dependencies in my project, sql-migrate is the slowest to build on a clean run. I think this is because of the references to all the included database drivers.

Is there a way to speed this up?

sort problem

+----------------+---------+
|   MIGRATION    | APPLIED |
+----------------+---------+
| 2015_06_02.sql | no      |
| 2015_05_21.sql | no      |
| 2015_06_03.sql | no      |
+----------------+---------+

Build Error

I'm using a 512MB DigitalOcean droplet for development purposes.

Currently, building fails with the following message:
go build github.com/denisenkom/go-mssqldb: signal killed

More on this issue here: https://code.google.com/p/go/issues/detail?id=6251

After deleting sql-migrate/mssql.go, it builds successfully.

Standalone tool and tests broken on MacOS/golang1.8

Trying to run the standalone client on MacOS does this:

sql-migrate
[1]    88703 killed     sql-migrate

Also running the tests in rubenv/sql-migrate fail

src/github.com/rubenv/sql-migrate: go test ./...

signal: killed
FAIL    github.com/rubenv/sql-migrate    0.008s
signal: killed
FAIL    github.com/rubenv/sql-migrate/sql-migrate    0.007s
ok      github.com/rubenv/sql-migrate/sqlparse    0.007s

The culprit should be one of the last couple of commits, since I've been using this tool without issue for a while.
