jpoehls / dotnetmigrations

DotNetMigrations is a database migration framework that assists in managing and versioning database changes. It was originally designed as a straight port of the migrations functionality in the Ruby on Rails framework; however, it has recently moved to a more open architecture thanks to the Managed Extensibility Framework.

License: Other

Languages: C# 97.43%, PowerShell 2.22%, Batchfile 0.35%

dotnetmigrations's People

Contributors

darrencauthon, jameseggers1, jpoehls, kieranbenton


dotnetmigrations's Issues

Possible Custom Console

From James Eggers:

I was thinking about the changes we have planned and wondering whether an inline, custom console mode for DNM might be beneficial. This would allow automation scripts while providing an easy way to work on the migrations themselves. This may be better suited for a future release when more helper commands are available, though.

Migrations don't work past the number 9 when using an incremental integer

I've been indexing my migration scripts with an incremental integer. After migrating past my tenth script, the system stopped working because DotNetMigrations thought I was only on version 9.

The problem is the select statement here:

const string command = "SELECT MAX([version]) FROM [schema_migrations]";
https://github.com/jpoehls/dotnetmigrations/blob/master/src/DotNetMigrations.Core/BaseClasses/DatabaseCommandBase.cs#L73

Since the version field is nvarchar(14), the comparison is done as a string, so "9" is considered greater than "10", "11", etc.

The quick fix is to manually change the column type in the database to an integer, but that doesn't help other people who use this library. There are a number of ways to solve this issue, but since this logic is rooted deep in a base class I'm not sure which approach should be used.
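
One possible approach (a sketch, not the project's chosen fix) is to cast the stored version to a numeric type before taking the MAX, so the comparison is numeric rather than lexicographic:

// Cast the nvarchar version to BIGINT so "10" sorts above "9".
const string command = "SELECT MAX(CAST([version] AS BIGINT)) FROM [schema_migrations]";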

Timeouts

Hi, I'm still working with the 0.5 version and had a problem with timeouts on scripts that took a while to complete. To fix it, I added CommandTimeout = 0 where the DbCommands were being created.
Could you add this to the latest version of DotNetMigrations?
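
For reference, a minimal sketch of the workaround, applied wherever a DbCommand is created (this wrapper method is illustrative, not existing DNM code):

using System.Data.Common;

// Create a command with no timeout so long-running migration
// scripts are allowed to finish.
static DbCommand CreateCommand(DbConnection connection, string commandText)
{
    DbCommand cmd = connection.CreateCommand();
    cmd.CommandText = commandText;
    cmd.CommandTimeout = 0; // 0 = wait indefinitely
    return cmd;
}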

Version command should list all scripts not applied to database

Currently the version command simply lists the latest migration script version # and the current schema version #. This was useful back when the version numbers were sequential; however, now that the version numbers are UTC timestamps, knowing the specific script version vs. database version isn't very useful. All that tells you is that the versions are different; it doesn't tell you whether just 1 migration is missing or whether 100 are missing.

Proposed changes:

  • If the schema is up-to-date, it should say so, and perhaps give the latest version # it is up-to-date with.
  • If not all migration scripts have been applied to the schema, then it should list the ones that are missing. If > 20? are missing, it should list the 20 newest that are missing and say "plus 100 more" to indicate more are missing as well. You should be able to run the command again with a switch to output all of them (see the sketch after this list).
  • If there are migration scripts that have been applied to the database but aren't in your migrations folder, then the command should warn you that you seem to be missing migration scripts and should list the missing ones with the same truncation rules as above.
  • If both conditions are true, i.e. you are missing migration scripts that have been applied AND you have scripts that haven't been applied to the database, then you should see both warning outputs.
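
A rough sketch of the proposed comparison logic (the helper below is hypothetical; how the two version lists are obtained is left out):

using System;
using System.Collections.Generic;
using System.Linq;

// Diff the versions found in the migrations folder against those
// recorded in the database, reporting each direction of mismatch.
static void ReportVersionStatus(
    IEnumerable<long> scriptVersions, IEnumerable<long> appliedVersions)
{
    const int limit = 20; // the "> 20?" truncation suggested above
    var scripts = new HashSet<long>(scriptVersions);
    var applied = new HashSet<long>(appliedVersions);

    // Scripts on disk that haven't been applied to the database.
    var pending = scripts.Except(applied).OrderByDescending(v => v).ToList();
    foreach (long v in pending.Take(limit))
        Console.WriteLine("Pending: {0}", v);
    if (pending.Count > limit)
        Console.WriteLine("...plus {0} more", pending.Count - limit);

    // Versions in the database with no matching script on disk.
    var missing = applied.Except(scripts).OrderByDescending(v => v).ToList();
    foreach (long v in missing.Take(limit))
        Console.WriteLine("WARNING - script not found: {0}", v);
    if (missing.Count > limit)
        Console.WriteLine("...plus {0} more", missing.Count - limit);

    if (pending.Count == 0 && missing.Count == 0)
        Console.WriteLine("Schema is up-to-date at version {0}",
                          applied.DefaultIfEmpty(0).Max());
}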

New "test" command

Create a new test command that

  • begins a transaction
  • rolls back the database to 0 (if needed)
  • migrates to the latest version
  • then rolls back to 0 again
  • rolls back the transaction (so that the database state is the same as before the test command was run)
  • If no hash warnings (or errors) are thrown during the test, it is deemed successful; otherwise the test fails and reports the specific migration whose TearDown triggered the warning.

This command could cause database logs to fill up very fast on a large or production database. A warning to that effect should be shown and confirmed before running this command.
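
A rough outline of what the command could look like (MigrateTo and GetLatestVersion are hypothetical stand-ins for the existing migrate logic):

using System.Data.Common;

// Everything runs inside a transaction that is always rolled back,
// so the database ends up exactly as it started.
static bool RunMigrationTest(DbConnection connection)
{
    using (DbTransaction tx = connection.BeginTransaction())
    {
        try
        {
            MigrateTo(connection, tx, 0);                  // roll back to empty
            MigrateTo(connection, tx, GetLatestVersion()); // apply every migration
            MigrateTo(connection, tx, 0);                  // tear everything down
            return true; // no hash warnings or errors -> pass
        }
        finally
        {
            tx.Rollback(); // restore the pre-test state
        }
    }
}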

Bulkloading binary data

In the 0.5 version I added support for bulkloading binary data. Could you implement this in the latest version?

In DAL.cs I added

// Converts a hex string (optionally prefixed with "0x") into a
// binary DbParameter so it can be bulkloaded into a binary column.
internal static DbParameter GetBinaryParameter(string name, string stringvalue)
{
    byte[] bytevalue = null;
    if (!string.IsNullOrEmpty(stringvalue))
    {
        // Strip the optional "0x" prefix before decoding.
        if (stringvalue.StartsWith("0x"))
        {
            stringvalue = stringvalue.Substring(2);
        }

        // Decode each pair of hex digits into one byte.
        bytevalue = new byte[stringvalue.Length / 2];
        for (int i = 0; i < stringvalue.Length / 2; i++)
        {
            byte.TryParse(stringvalue.Substring(i * 2, 2),
                          System.Globalization.NumberStyles.HexNumber,
                          null, out bytevalue[i]);
        }
    }

    DbParameter p = GetFactory().CreateParameter();
    p.ParameterName = name;
    // Use DBNull rather than null so the parameter is still sent.
    p.Value = (object)bytevalue ?? DBNull.Value;
    return p;
}

And in BulkloadCmd.cs I replaced

                        //  add values to INSERT statement
                        foreach (string columnName in rowParams.Keys)
                        {
                            sb.Append("@" + columnName + ",");
                            cmd.Parameters.Add(DAL.GetParameter("@" + columnName, rowParams[columnName]));
                        }

with

                        //  add values to INSERT statement
                        foreach (string columnName in rowParams.Keys)
                        {
                            sb.Append("@" + columnName + ",");
                            if (rowParams[columnName].StartsWith("0x"))
                            {
                                cmd.Parameters.Add(DAL.GetBinaryParameter("@" + columnName, rowParams[columnName]));
                            }
                            else
                            {
                                cmd.Parameters.Add(DAL.GetParameter("@" + columnName, rowParams[columnName]));
                            }
                        }

Seed Data

DNM-Contrib? – New seed data function. Each seed data script should have a migration number that it gets applied at. When you migrate the database, the seed data scripts up to the current migration should be applied (if they weren't already). This implies we need a new table in the database to track which seed data scripts have been run. We also need an OnMigrated() hook that fires after a migration is run so that our seed data runner can tie in and execute the required scripts (a sketch of such a hook follows). How do we handle rollbacks? Seed data really can't be rolled back; does this matter?
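
A minimal sketch of what such a hook could look like (the interface name is an assumption):

using System.Data.Common;

// Hypothetical extension point a seed-data runner could implement
// to be invoked after each migration is applied.
public interface IPostMigrationHook
{
    // Called after the migration with the given version number has run.
    void OnMigrated(long version, DbConnection connection);
}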

New core "combine" command that will take a range of migrations and produce a single script that can be run separately (for DBAs)

This is important in large-scale enterprise deployments. Although development teams can and should develop solely using migrations and make their own choices over whether to have local or shared development databases, it is vital that, at the end of a sprint or as part of a release, a single packaged "upgrade" script can be provided to a DBA group to apply to staging and production systems.

It may be that eventually there is no requirement for this within a company, but certainly whilst migrating to more agile methods (including automatically run migrations) it is important to be able to build confidence with and interoperate with existing DBA requirements.
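
A hypothetical sketch of the command, assuming script file names start with the version number followed by an underscore, and ignoring the setup/teardown split for brevity:

using System.IO;
using System.Linq;

// Concatenate a version range of migration scripts into one upgrade
// script that a DBA can review and run without DNM.
static void CombineMigrations(string migrationsFolder, long fromVersion,
                              long toVersion, string outputPath)
{
    var scripts = Directory.GetFiles(migrationsFolder, "*.sql")
        .Select(path => new
        {
            Path = path,
            Version = long.Parse(Path.GetFileName(path).Split('_')[0])
        })
        .Where(s => s.Version > fromVersion && s.Version <= toVersion)
        .OrderBy(s => s.Version);

    using (var writer = new StreamWriter(outputPath))
    {
        foreach (var script in scripts)
        {
            writer.WriteLine("-- Migration {0}", script.Version);
            writer.WriteLine(File.ReadAllText(script.Path));
            writer.WriteLine("GO");
        }
    }
}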

Change migration script token format

Change the ##DNM:VERSION## and ##DNM:PROVIDER## tokens to be /*DNM:VERSION*/ and /*DNM:PROVIDER*/ instead so that the tokens won't cause syntax errors when running the SQL scripts outside of DNM.

Seed data should support CSV files

  • CSV files must have headers that match the table's column names.
  • The CSV file name must either match the table name completely (ex. MyTable.csv) or must contain the table name in brackets somewhere in the file name (ex. demo data for [MyTable].csv); see the sketch after this list.
  • If the headers include the Primary Key column then turn identity insert on.
  • The CSV file name must include the table name in some parsable way.
  • Turn off CHECK CONSTRAINT before inserts and turn it back on afterwards.
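
A minimal sketch of the file-name matching rules above (illustrative, not DNM's actual implementation):

using System.IO;
using System.Text.RegularExpressions;

// A CSV maps to a table either by exact file name (MyTable.csv) or by
// a bracketed name anywhere in the file name (demo data for [MyTable].csv).
static string GetTableName(string csvPath)
{
    string fileName = Path.GetFileNameWithoutExtension(csvPath);

    // Prefer an explicit [TableName] marker if present.
    Match match = Regex.Match(fileName, @"\[(?<table>[^\]]+)\]");
    if (match.Success)
    {
        return match.Groups["table"].Value;
    }

    // Otherwise treat the whole file name as the table name.
    return fileName;
}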

Support for other database objects (sprocs, functions)

Stored procedures and functions are not a good fit for inclusion in migration scripts; they are better off having their own script files. Consider adding support for /functions and /storedprocs folders alongside the /migrations folder. Either in the migrate command, or in a new command, execute all the CREATE scripts in those new folders.

  • Would be nice if we could detect which Stored Procs & Functions needed to be updated before running the scripts.
  • Scripts should be named with the object name so that we can handle dropping the object if necessary (ex. dbo.mySproc.sql)
  • Justification for keeping function & stored proc scripts out of the migration scripts: History. If each revision to a function and sproc is in a separate migration file you have no easy way to view the history of changes over time as you would with source code files using your VCS diff tools. Functions and stored procs are more closely related to source code routines than they are to table schema objects and this kind of diff / VCS functionality is important for them.
  • Handling rollbacks with non-versioned function & sproc scripts: would we have to check out the revision of the /functions and /storedprocs folders from source control that matches when the migration script being rolled back to was created? That way the correct version of the objects would also be applied. *This is very hokey. We need a better way to version these CREATE scripts alongside the migration version without having to include them in migration setup/teardown scripts.*

Schema Hashing

Hash will include:

  • tables (catalog, schema, name)
  • table columns (name, default values, all data type information and optionally their order)
  • constraints (foreign key and primary key)
  • views (catalog, schema, name and definition)

Notes:

  • Add a [hash] column to the [schema_info] table to store a hash of the schema taken after the migration
  • On rollback, confirm that the schema hash matches the last migration's [hash]; if it doesn't, throw a warning (or, optionally, throw an error and roll back the transaction). A non-match would indicate either that the Tear Down of the migration didn't correctly return the schema to its previous state, or that changes to the schema were made manually (and not by migration scripts).
  • On migrate up, confirm that the schema hash matches the last migration's [hash] before starting. A non-match would indicate the schema has changed since the last migration was applied. Throw a warning (or error).
  • It should be a config option whether schema mismatches throw warnings or errors (errors would also mean the migration transaction would be rolled back).
  • It should be a config option whether the order of columns in a table is included in the hash.
  • Evaluate the performance of calculating and comparing the hashes. Determine whether the perf hit is substantial enough to warrant including a switch to skip the hashing.
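
A minimal sketch of one way such a hash could be computed: serialize the relevant INFORMATION_SCHEMA rows into a canonical string and hash it (the query and method are illustrative assumptions, not DNM code; ordering by column name rather than ordinal position corresponds to excluding column order from the hash):

using System;
using System.Data.Common;
using System.Security.Cryptography;
using System.Text;

static string ComputeSchemaHash(DbConnection connection)
{
    const string query =
        "SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, " +
        "COLUMN_DEFAULT, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH " +
        "FROM INFORMATION_SCHEMA.COLUMNS " +
        "ORDER BY TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME";

    // Build a canonical, order-stable text representation of the schema.
    var sb = new StringBuilder();
    using (DbCommand cmd = connection.CreateCommand())
    {
        cmd.CommandText = query;
        using (DbDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                for (int i = 0; i < reader.FieldCount; i++)
                {
                    sb.Append(reader.GetValue(i)).Append('|');
                }
                sb.AppendLine();
            }
        }
    }

    // Hash the canonical text; store this in the [hash] column.
    using (SHA1 sha1 = SHA1.Create())
    {
        byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString()));
        return BitConverter.ToString(hash).Replace("-", "");
    }
}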

Questions:

  • What happens if you change the option whether to include column order in the hash after migrations have already been applied? This would mean the hashes already taken are all invalid.
  • Option 1: Regenerate the hashes by running a command similar to the ‘test’ command that rolls back the database and migrates it up inside of a transaction. This way we could regenerate all the hashes and then return the database to the original state when done. (This would have the same performance impacts as noted in the ‘test’ command.)
  • Option 2: Always store 2 hashes. One with column order and one without.
    How will we know when the column order hash option has been changed?

SecurityException

For deployment I've created a setup project (MSI) that installs DotNetMigrations locally. On older (Windows XP) machines this worked fine, but on Windows 7 machines the application fails to initialize with a SecurityException when loading the dnm.logs section from the app.config. I fixed it by editing the last method in src\DotNetMigrations.Core\Provider\ConfigurableTypeCatalog.cs like so:

private static ConfigurableTypeSection GetSection(string sectionName)
{
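    // Load the exe's own config file explicitly; on some machines the
    // previous code path threw a SecurityException here (see the
    // Stack Overflow link below).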
    Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    ConfigurableTypeSection section = config.GetSection(sectionName) as ConfigurableTypeSection;

    if (section == null)
    {
        throw new ConfigurationErrorsException(string.Format("The configuration section {0} could not be found.", sectionName));
    }

    return section;
}

Using OpenExeConfiguration fixes the SecurityException. This workaround was found here: http://stackoverflow.com/questions/2725432/net-4-0-application-on-network-share-causes-securityexception

Specify migrations folder from command line

It would be useful when writing batch files to be able to provide the path to the migration files from the command line.

As it is, it always looks for the path "migrate" relative to the current working directory.

Move DNM to .NET 4.0

With the recent release of .NET 4, a build of the source using the .NET 4 framework may be coming out soon. There are some benefits to using .NET 4; however, we know that not everyone can upgrade their systems to the .NET 4 runtime right away. Because of this, the next build will probably be cross-compiled. There has been no discussion yet of if and when 3.5 support will be dropped.

Update Argument Structure

The current argument structure is based on positional convention and can get confusing very fast when arguments are optional. We may convert to a named-argument convention using Mono.Options instead.
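
For illustration, registering named arguments with Mono.Options could look something like this (the option names here are hypothetical):

using System;
using Mono.Options;

static void Main(string[] args)
{
    string connection = null;
    string version = null;

    // Each entry maps a short and long option name to a description
    // and a callback that captures the value.
    var options = new OptionSet
    {
        { "c|connection=", "the connection string (or its name) to use", v => connection = v },
        { "v|version=", "the version to migrate to", v => version = v },
    };

    options.Parse(args);
    Console.WriteLine("connection={0}, version={1}", connection, version);
}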

Discover connection strings in projects' web.config and/or app.config

If DNM is installed via NuGet (or otherwise installed in some predictable way alongside a solution) then we should be able to easily discover all of the connection strings stored in the projects' web.config and app.config files.

Pros:

  • User doesn't have to duplicate their connection strings in both their web/app config files AND in the DNM config file.
  • Feature can degrade nicely back to just using connection strings in the DNM config file if no other configs are found or if DNM isn't installed alongside a solution.

Cons:
?
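
A hypothetical sketch of the discovery step, scanning for web.config / app.config files below the solution directory and reading their connectionStrings sections (none of this is existing DNM code):

using System;
using System.Configuration;
using System.IO;

static void DiscoverConnectionStrings(string solutionDir)
{
    foreach (string path in Directory.EnumerateFiles(
        solutionDir, "*.config", SearchOption.AllDirectories))
    {
        string name = Path.GetFileName(path).ToLowerInvariant();
        if (name != "web.config" && name != "app.config")
            continue;

        // Open the config file directly, without it being the exe's own config.
        var map = new ExeConfigurationFileMap { ExeConfigFilename = path };
        Configuration config = ConfigurationManager.OpenMappedExeConfiguration(
            map, ConfigurationUserLevel.None);

        foreach (ConnectionStringSettings cs in
                 config.ConnectionStrings.ConnectionStrings)
        {
            Console.WriteLine("{0}: {1}", cs.Name, cs.ConnectionString);
        }
    }
}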

Help Text Formatting

When the help text is displayed, words are broken mid-word instead of being wrapped and indented properly.
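
A minimal sketch of word-aware wrapping for console output (illustrative, not DNM's actual help writer):

using System.Collections.Generic;
using System.Text;

// Break text at word boundaries so no line exceeds the given width.
static IEnumerable<string> WrapWords(string text, int width)
{
    var line = new StringBuilder();
    foreach (string word in text.Split(' '))
    {
        // Start a new line when the next word won't fit.
        if (line.Length > 0 && line.Length + 1 + word.Length > width)
        {
            yield return line.ToString();
            line.Clear();
        }
        if (line.Length > 0) line.Append(' ');
        line.Append(word);
    }
    if (line.Length > 0) yield return line.ToString();
}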

First-class Mono support

I believe DNM already has a decent level of Mono support; however, I am now at a point where I hate booting up my Windows VM and need to be able to maintain DNM from OS X. So the whole thing needs to support Mono as a first-class platform.

Here are a few things I know need to happen for this:

  1. Convert build script from PowerShell into something that will run equally well on Windows and Mono.
  2. Figure out if/how we can publish updates to the NuGet package from Mono.
  3. Update the TeamCity CI build to use the new build scripts.
  4. Update all applicable unit tests to use an in-memory SQLite database instead of SQL Server Compact. SQL Server Compact–specific tests can continue to use it, but it should be possible to run the test suite (minus those tests) under Mono easily. The full test suite should be able to run on the TeamCity server.

DNM library that can be embedded in your application to run migrations

The focus would be on making a library that could be referenced by an ASP.NET (or other) application. At the least, this library needs a public Migrate(string connectionString, string migrationsFolder) function that can be called to run migrations on the given database.

Use case would be a web application that auto-upgrades the database in the Application_Start event in Global.asax.
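
A sketch of that use case, using the Migrate signature proposed above (the DotNetMigrations library class does not exist yet; the "AppDb" connection string name and migrations path are illustrative):

using System;
using System.Configuration;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Read the application's own connection string.
        string connectionString = ConfigurationManager
            .ConnectionStrings["AppDb"].ConnectionString;

        // Run any pending migrations before the app starts serving requests.
        DotNetMigrations.Migrate(connectionString,
            Server.MapPath("~/App_Data/migrations"));
    }
}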

This would greatly simplify deployments by removing the database update step. Just deploy the code and you're done.

This would be safer after the 'test' command has been created so that migration testing could be integrated into the CI tests.
