
Migration? (red) · HOT · 79 comments · OPEN

FCO commented on August 28, 2024
Migration?


Comments (79)

moritz commented on August 28, 2024 (+3)

In order to develop useful migrations, you first have to think about the scenario they are going to be used in.

I have some experience with the scenario of developing a SaaS application, where we develop and operate the application within the organization.

In that scenario, things tend to work like this:

  • You start from version 0
  • You create a NULLable column (version 1)
  • You deploy a new version of the software that can fill the NULLable column and backfill the old data
  • You make the column NOT NULL (version 2)
  • You deploy a new version that can rely on the column being NOT NULL

In this scheme, there's a software deployment associated with each migration, so there is no need for the ORM to support multi-step migrations. What would help is some sort of check that you don't accidentally deploy version 2 to an environment that currently contains database version 0.
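
Such a guard can be a few lines at application start. A minimal sketch, assuming the schema version is kept in a one-row schema_version table (the table name and the use of DBIish here are illustrative assumptions, not anything Red provides):

use DBIish;

constant REQUIRED-SCHEMA-VERSION = 2;

my $dbh     = DBIish.connect: "SQLite", :database<app.db>;
my $current = $dbh.execute("SELECT version FROM schema_version").row[0];

# refuse to boot a build that expects a schema the database doesn't have yet
die "Schema is at version $current but this build needs {REQUIRED-SCHEMA-VERSION}"
    unless $current == REQUIRED-SCHEMA-VERSION;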

In other scenarios, where you just give a piece of software to somebody else and it needs to handle an arbitrary number of schema migrations on the fly, the proposed approach sounds more sensible, but I don't have much experience with those situations.

jonathanstowe commented on August 28, 2024 (+2)

As a complete tangent to most of the above: I was thinking that maybe an .^assert-table could basically determine the structure of an existing table in the DB and calculate the changes required for it to match the one defined by the as-is model, basically translating to a bunch of ALTER TABLE statements (or a CREATE TABLE if the table doesn't exist at all). This would need all the stuff in the driver that #108 would need, I guess.
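
For illustration only, usage could look like this (a hypothetical API, nothing of it exists yet):

model Person {
    has UInt $.id   is serial;
    has Str  $.name is column;
}

Person.^assert-table;
# if the table exists but lacks the `name` column, this could emit e.g.:
#   ALTER TABLE person ADD COLUMN name varchar(255) NOT NULL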

daltrogama commented on August 28, 2024 (+2)

> (Quoting FCO's five-part migration proposal, reproduced in full in FCO's own comment later in this thread.)

The idea of automating a two-phase migration for zero downtime sounds interesting, but it may bring too many responsibilities from application-specific business rules into the framework. In your example you have a friendly use case, but I don't know if you can ensure a reliable two-phase migration with enough abstraction.

To make it worth it for the developer to use an ORM's black-box feature for such a sensitive task, it must be very simple and reliable... enough to convince them that implementing two migrations and handling the intermediate state of a zero-downtime process through Red is really worth it, and safe.

If you designed Red to be used in a "vendor lock-in" fashion, i.e., the database metadata includes Red-specific structures to handle version and extra state or metadata, I believe you can implement a very reliable feature for lazy migration or background migration with zero downtime. But if Red is designed to handle absolutely generic databases (complex keys, FKs, relations and column metadata), so that all the metadata available is the database schema itself, I strongly believe you'll always have a corner case that breaks this kind of migration in a generic way.

It sounds interesting to me, as a developer, for Red to handle this type of migration as a special feature, which might be called "lazy migration" or "background migration" (really running a background thread in the framework that does the batch migration), and to be very specific and restrictive about those migrations. It's important that the intermediate phase of a zero-downtime migration is as fast as possible, so that lazy developers don't stack up many migrations in the intermediate phase without finishing them and lead the database into chaos. If the framework claims to handle this, it must handle and be responsible for all the consequences of bad usage of the feature.

FCO commented on August 28, 2024 (+1)

I just started playing with migrations... now (since a185e2f) it's possible:

$ perl6 -Ilib -e '
use Red "experimental migrations";
model Bla {
   has Int $.a is column;
}

model Ble {
   has Int $.b is column;
}
Ble.^migration: {
   .b = .a * 3
}
Ble.^migrate: :from(Bla);
Ble.^dump-migrations
'
b => (bla.a)::int * 3

please, pay attention to the use Red "experimental migrations"; line...

FCO commented on August 28, 2024 (+1)

sorry, wrong button again...

FCO commented on August 28, 2024 (+1)

A friend has shown me that I’m revealing too much of the implementation details with the if. So now I’m thinking of something like:

method save-password($new) {
   handle-migration v0.1.2,
      new-columns => {
         .hashed-password = hash-password $new
      },
      old-columns => {
         .plain-password = $new
      }
   ;
}

And

method auth($login, $pass is copy) {
   handle-migration v0.1.2,
      new-columns => {
         $pass = hash-password $pass;
         my $user = ::?CLASS.^all.grep: { .hashed-password eq $pass && .username eq $login };
         die without $user;
         $user
      },
      old-columns => {
         ::?CLASS.^all.grep: { .plain-password eq $pass && .username eq $login };
      }
   ;
}

And, before creating the new column, it will run the old-columns block; after populating the new column, it will run the new-columns block; and in between, it will run the new-columns block and, if that throws an exception, fall back to the old-columns one...

FCO commented on August 28, 2024 (+1)

Maybe the model could apply the role to itself...

FCO commented on August 28, 2024

#7 (comment)

Has a new idea that could be used for migrations...

Xliff commented on August 28, 2024

I think migrations should be their own thing, especially since migrations are done at the Model level, not the collection one. Collection versioning is a great idea, though!

Xliff commented on August 28, 2024

Maybe best explained: migrations are an operation between two versions of the same model. So how would that best work? Maybe have something that encapsulates models like a collection, but that is NOT a collection of models, but of conversion operations. A big problem would be naming conventions.

Say we have two versions of model A. How would a migration of A:ver<1.2> to A:ver<1.3> be named?

Is something like this possible?

migration A:from<1.2>:to<1.3> { ... }

If so, I have an idea to do this in a very clean manner.

FCO commented on August 28, 2024

no, it isn't, I think...

what about:

migration A:ver<1.3> {
   method from:ver<1.2> {...}
}

Xliff commented on August 28, 2024

Actually, that's not a bad idea. However I was hoping to use methods for individual fields. In that case, field level conversions could use another mechanism.

Try this:

# Pseudo
migration A:ver<1.3> {

  # Specific field-level conversions. Key layout is:
  #  <from_version><model><attribute> => -> $old_model, $to_model { ... }
  has %!conversions;

 has CollectionA<1.2> $!old_a;
 has CollectionA<1.3> $!new_a;

  method from:ver<1.2> {
     # Iterate over each model. 
     for $!old_a.^models Z $!new_a.^models -> ($old_model, $new_model) {
        for $new_model.^attributes -> $attr {
          with %!conversions<1.2>{$new_model}{$attr} {
             $_($old_model, $new_model)
           } else {
             # By default, we move the value from the old model to the new.
             $new_model."&{ $attr }"() = $old_model."&{ $attr }"();
           }
        }
     }
  }
}

federico-razzoli commented on August 28, 2024

Note that usually DBMSs provide non-standard SQL syntaxes for migrations, like:

  • CREATE TABLE IF NOT EXISTS
  • DROP TABLE IF EXISTS
  • CREATE OR REPLACE TABLE
  • ALTER TABLE IF EXISTS
  • ALTER TABLE t ADD COLUMN IF NOT EXISTS, DROP COLUMN IF EXISTS, etc

And some databases allow running CREATE/ALTER/DROP in transactions.

I believe it would be nice to make use of these features where available. For example, the migration itself could have an on_conflict property that could be "ignore" (if not exists), "replace" (or replace), or "fail".
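
A small sketch of that idea, with the on_conflict value deciding which dialect syntax gets emitted (the sub and property names are illustrative):

sub create-table-sql(Str $name, Str $columns, Str :$on-conflict = "fail") {
    given $on-conflict {
        when "ignore"  { "CREATE TABLE IF NOT EXISTS $name ($columns)" }
        when "replace" { "CREATE OR REPLACE TABLE $name ($columns)" }
        default        { "CREATE TABLE $name ($columns)" }
    }
}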

FCO commented on August 28, 2024

I was wondering about migration and I think I came to something interesting...

If I have 2 versions of the same model, for example:

model Bla:ver<0.1> {
   has Int $.a is column;
}

model Bla:ver<0.2> {
   has Str $.b is column;
}

It’s relatively easy to say that it should create a column b of type string and drop the column a. The problem is trying to guess what should be done with the data... if the content of b should be generated based on the old data in a, we have a problem, since we've already dropped a.

We could fix that by explaining to the migration how to generate the data. The other migration tools that I know manipulate the data using plain SQL. But we already have a way to manipulate data: the AST!

I don’t think it would be impossible to make something like this to generate the data for a new column:

method #`{or sub, idk} migrate:<a> {
   String: { .a * 3 }
}

And it would run:

UPDATE
   my_table
SET
   b = 'String: ' || a * 3

Or something that would be better for that migration (@Santec, please help me!)

Maybe something should be a bit different, because it's possible that a new column on a table may use data from different tables.

Sent with GitHawk

Xliff commented on August 28, 2024

@FCO:

This is why I had the %!conversions attribute.

So for something like this situation, you'd have

submethod BUILD {
  %!conversions<0.2><Bla><b> = -> $old_model, $new_model { $new_model.b = $old_model.a };
}

So %!conversions handles all special casing at the field level.

FCO commented on August 28, 2024

Do we need the new model? Won’t we always use only the old one?

What about?

migration MySchema:ver<0.2> {
   has Bla:ver<0.1> $.old-model1;
   has Ble:ver<0.1> $.old-model2;
   has Bla:ver<0.2> $.new-model1;
   has Ble:ver<0.2> $.new-model2;

   method Bla:<a> { { $!old-model1.b } & { $!old-model2.c } }
}

And it would use the return to create the update...

Sent with GitHawk

FCO commented on August 28, 2024
migration MySchema:ver<0.2>[Bla:ver<0.2>, Ble:ver<0.2>] {
   has Bla:ver<0.1> $.model1;
   has Ble:ver<0.1> $.model2;

   method Bla:<a> { { $!model1.b } & { $!model2.c } }
}

Sent with GitHawk

FCO commented on August 28, 2024
migration MySchema:ver<0.2>[Bla:ver<0.2>, Ble:ver<0.2>] {
   Bla:ver<0.2>.a = { Bla:ver<0.1>.b } & { Bla:ver<0.1>.c }
}

Sent with GitHawk

FCO commented on August 28, 2024
migration MySchema:ver<0.2>[Bla:ver<0.2>, Ble:ver<0.2>] {
   method Bla:<a> { { Bla:ver<0.1>.b } & { Bla:ver<0.1>.c } }
}

Sent with GitHawk

FCO commented on August 28, 2024

https://glot.io/snippets/faikda35yh

Sent with GitHawk

FCO commented on August 28, 2024
migration MySchema:ver<0.2>[Bla:ver<0.2>, Ble:ver<0.2>] {
   method Bla:<a> { { .Bla.b } & { .Bla.c } }
}

and $_ is the old migration...

FCO commented on August 28, 2024

Now I see that it doesn't make sense to use another table here... I don't have a join here...

it should be done with relationships...

FCO commented on August 28, 2024

So maybe it makes sense to have migrations per model...

FCO commented on August 28, 2024

Now I think I got it!

Bla.^migration: :from<0.1>, {
   .a = .b * 42 + 3;
   .c = .d - .e;
}
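
For SQLite, that block could translate to something like (illustrative only):

UPDATE bla SET a = b * 42 + 3, c = d - e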

FCO commented on August 28, 2024

any thoughts about it?

Xliff commented on August 28, 2024

Yes, that's not bad. It accomplishes the basics. However, I would prefer if we did migration as a ClassHOW so that we can split complex operations up into encapsulated (self-contained) pieces.

You can still do it with this method, but everything has to be written out.

Give me a few days to think about ways to alleviate this issue, and if I find any, I will post.

FCO commented on August 28, 2024

The solution I started implementing does not handle:

  • table rename
  • model rename
  • table population
  • table truncation
  • table deletion

Maybe a solution with a migration type with a collection of models could help with it...

(My next step is creating the migration models to save the state of a migration on the database...)

FCO commented on August 28, 2024
MacBook-Pro-de-Fernando:Red2 fernando$ perl6 -Ilib -MRed -e '


my $*RED-DB = database "SQLite";
my $*RED-DEBUG = True;
use Red::Migration::Migration;
use Red::Migration::Table;
use Red::Migration::Column;
Red::Migration::Table.^create-table;
Red::Migration::Column.^create-table;
say Red::Migration::Table.^create: |Red::Migration::Table.^migration-hash
'
SQL : CREATE TABLE red_migration_table(
   id integer NOT NULL primary key AUTOINCREMENT,
   name varchar(255) NOT NULL ,
   version varchar(255) NOT NULL ,
   created_at real NOT NULL ,
   migration_id integer NULL references red_migration_version(id),
   UNIQUE (name, version)
)
BIND: []
SQL : CREATE TABLE red_migration_column(
   id integer NOT NULL primary key AUTOINCREMENT,
   name varchar(255) NOT NULL ,
   type varchar(255) NOT NULL ,
   references_table varchar(255) NULL ,
   references_column varchar(255) NULL ,
   is_id integer NOT NULL ,
   is_auto_increment integer NOT NULL ,
   is_nullable integer NOT NULL ,
   table_id integer NULL references red_migration_table(id)
)
BIND: []
SQL : INSERT INTO red_migration_table(
   version,
   created_at,
   name
)
VALUES(
   ?,
   ?,
   ?
)
BIND: ["0", Instant.from-posix(<1643395558931/1058>, Bool::False), "red_migration_table"]
SQL : SELECT
   red_migration_table.id , red_migration_table.name , red_migration_table.version , red_migration_table.created_at as "created-at", red_migration_table.migration_id as "migration-id"
FROM
   red_migration_table
WHERE
   _rowid_ = last_insert_rowid()
LIMIT 1
BIND: []
SQL : SELECT
   red_migration_column.id , red_migration_column.name , red_migration_column.type , red_migration_column.references_table as "references-table", red_migration_column.references_column as "references-column", red_migration_column.is_id as "is-id", red_migration_column.is_auto_increment as "is-auto-increment", red_migration_column.is_nullable as "is-nullable", red_migration_column.table_id as "table-id"
FROM
   red_migration_column
WHERE
   red_migration_column.table_id = ?
BIND: [1]
()
SQL : SELECT
   red_migration_column.id , red_migration_column.name , red_migration_column.type , red_migration_column.references_table as "references-table", red_migration_column.references_column as "references-column", red_migration_column.is_id as "is-id", red_migration_column.is_auto_increment as "is-auto-increment", red_migration_column.is_nullable as "is-nullable", red_migration_column.table_id as "table-id"
FROM
   red_migration_column
WHERE
   red_migration_column.table_id = ?
BIND: [1]
()
SQL : SELECT
   red_migration_column.id , red_migration_column.name , red_migration_column.type , red_migration_column.references_table as "references-table", red_migration_column.references_column as "references-column", red_migration_column.is_id as "is-id", red_migration_column.is_auto_increment as "is-auto-increment", red_migration_column.is_nullable as "is-nullable", red_migration_column.table_id as "table-id"
FROM
   red_migration_column
WHERE
   red_migration_column.table_id = ?
BIND: [1]
()
SQL : SELECT
   red_migration_column.id , red_migration_column.name , red_migration_column.type , red_migration_column.references_table as "references-table", red_migration_column.references_column as "references-column", red_migration_column.is_id as "is-id", red_migration_column.is_auto_increment as "is-auto-increment", red_migration_column.is_nullable as "is-nullable", red_migration_column.table_id as "table-id"
FROM
   red_migration_column
WHERE
   red_migration_column.table_id = ?
BIND: [1]
()
SQL : SELECT
   red_migration_column.id , red_migration_column.name , red_migration_column.type , red_migration_column.references_table as "references-table", red_migration_column.references_column as "references-column", red_migration_column.is_id as "is-id", red_migration_column.is_auto_increment as "is-auto-increment", red_migration_column.is_nullable as "is-nullable", red_migration_column.table_id as "table-id"
FROM
   red_migration_column
WHERE
   red_migration_column.table_id = ?
BIND: [1]
()
SQL : INSERT INTO red_migration_column(
   is_auto_increment,
   table_id,
   is_nullable,
   is_id,
   type,
   name
)
VALUES(
   ?,
   ?,
   ?,
   ?,
   ?,
   ?
)
BIND: [Bool::True, 1, Bool::False, Bool::True, "UInt", "id"]
SQL : SELECT
   red_migration_column.id , red_migration_column.name , red_migration_column.type , red_migration_column.references_table as "references-table", red_migration_column.references_column as "references-column", red_migration_column.is_id as "is-id", red_migration_column.is_auto_increment as "is-auto-increment", red_migration_column.is_nullable as "is-nullable", red_migration_column.table_id as "table-id"
FROM
   red_migration_column
WHERE
   _rowid_ = last_insert_rowid()
LIMIT 1
BIND: []
SQL : INSERT INTO red_migration_column(
   is_auto_increment,
   is_nullable,
   table_id,
   is_id,
   name,
   type
)
VALUES(
   ?,
   ?,
   ?,
   ?,
   ?,
   ?
)
BIND: [Bool::False, Bool::False, 1, Bool::False, "name", "Str"]
SQL : SELECT
   red_migration_column.id , red_migration_column.name , red_migration_column.type , red_migration_column.references_table as "references-table", red_migration_column.references_column as "references-column", red_migration_column.is_id as "is-id", red_migration_column.is_auto_increment as "is-auto-increment", red_migration_column.is_nullable as "is-nullable", red_migration_column.table_id as "table-id"
FROM
   red_migration_column
WHERE
   _rowid_ = last_insert_rowid()
LIMIT 1
BIND: []
SQL : INSERT INTO red_migration_column(
   is_nullable,
   table_id,
   is_auto_increment,
   is_id,
   type,
   name
)
VALUES(
   ?,
   ?,
   ?,
   ?,
   ?,
   ?
)
BIND: [Bool::False, 1, Bool::False, Bool::False, "Version", "version"]
SQL : SELECT
   red_migration_column.id , red_migration_column.name , red_migration_column.type , red_migration_column.references_table as "references-table", red_migration_column.references_column as "references-column", red_migration_column.is_id as "is-id", red_migration_column.is_auto_increment as "is-auto-increment", red_migration_column.is_nullable as "is-nullable", red_migration_column.table_id as "table-id"
FROM
   red_migration_column
WHERE
   _rowid_ = last_insert_rowid()
LIMIT 1
BIND: []
SQL : INSERT INTO red_migration_column(
   name,
   type,
   is_id,
   is_auto_increment,
   is_nullable,
   table_id
)
VALUES(
   ?,
   ?,
   ?,
   ?,
   ?,
   ?
)
BIND: ["created_at", "Instant", Bool::False, Bool::False, Bool::False, 1]
SQL : SELECT
   red_migration_column.id , red_migration_column.name , red_migration_column.type , red_migration_column.references_table as "references-table", red_migration_column.references_column as "references-column", red_migration_column.is_id as "is-id", red_migration_column.is_auto_increment as "is-auto-increment", red_migration_column.is_nullable as "is-nullable", red_migration_column.table_id as "table-id"
FROM
   red_migration_column
WHERE
   _rowid_ = last_insert_rowid()
LIMIT 1
BIND: []
SQL : INSERT INTO red_migration_column(
   is_auto_increment,
   references_table,
   is_nullable,
   table_id,
   is_id,
   name,
   type
)
VALUES(
   ?,
   ?,
   ?,
   ?,
   ?,
   ?,
   ?
)
BIND: [Bool::False, "red_migration_version", Bool::True, 1, Bool::False, "migration_id", "UInt"]
SQL : SELECT
   red_migration_column.id , red_migration_column.name , red_migration_column.type , red_migration_column.references_table as "references-table", red_migration_column.references_column as "references-column", red_migration_column.is_id as "is-id", red_migration_column.is_auto_increment as "is-auto-increment", red_migration_column.is_nullable as "is-nullable", red_migration_column.table_id as "table-id"
FROM
   red_migration_column
WHERE
   _rowid_ = last_insert_rowid()
LIMIT 1
BIND: []
Red::Migration::Table.new(name => "red_migration_table", version => "0", created-at => 1553303967.936673e0, migration-id => Any)

FCO commented on August 28, 2024

I’ve been thinking about migration and I agree with something I’ve read somewhere that says a migration should be split into 5 parts:

  • create the new columns as nullable
  • change the code to handle the new column, falling back to the old one when it is null (or the opposite)
  • populate the new columns and make them NOT NULL
  • change the code to remove the handling of the old columns
  • remove the old columns

So the idea would be to make Red Migration run each of these steps. This is what I thought:

model User { # please, do not do that!!!
   has Str $.nick is column;
   has Str $.plain-password is column;
   ...;
   method save-password(Str $new) {
      $.plain-password = $new;
      self.^save
   }
   method auth(Str $nick, Str $pass) {
      ::?CLASS.^all.grep: { .nick eq $nick && .plain-password eq $pass }
   }
}

This is a website that stores its users' passwords in plain text... so it would like to change to hashing its passwords...

model User { # please, do not do that!!!
   has Str  $.nick is column;
   has Str  $.hashed-password is column;
   has Bool $.expired is column;
   ...;
   method save-password(Str $new) {
       if %*RED-MIGRATION<0.1.2> <= BEFORE-START {
         $.plain-password = $new;
      } else {
         $.hashed-password = hash-pass $new;
         $.expired = True;
      }
      self.^save
   }
   method auth(Str $nick, Str $pass is copy) {
      $pass = hash-pass $pass if %*RED-MIGRATION<0.1.2> > CREATED-COLUMNS;
      my $user = ::?CLASS.^all.grep: { .nick eq $nick && .plain-password eq $pass };
      die "change password" if %*RED-MIGRATION<0.1.2> > CREATED-COLUMNS and $user.expired;
      $user
   }
}

And on the migration file:

User.^migration: {
   .expired = True
}

So Red would see that it should create 2 new columns (hashed-password and expired) and create them as nullable. Before that, %*RED-MIGRATION<0.1.2> would contain BEFORE-START (assuming 0.1.2 is the version of this migration); after that, it would return CREATED-COLUMNS. Then it would populate the new columns with the new value (in this case True in the expired column for every row) and %*RED-MIGRATION<0.1.2> would return POPULATED-COLUMNS. Now it can delete the old columns and set DELETED-COLUMNS on the hash.

Each step is controlled by a timestamp in the database showing when it should run the next step (and what step it is on now; this is where the value of the hash comes from).

This timestamp can have a default value changed automatically by Red, or be manually changed via a Red CLI.

I really want to know what you think... please leave a comment!

Xliff commented on August 28, 2024

The only problem that I see is that the model would need to contain interim attributes until the migration is complete.

From what I am reading (please pardon me if my interpretation is incorrect), it looks like User.auth has transition code in place.

I don't think it is good practice to have transition code in place. If you are going to do a migration, it should be a discrete operation, and models should be versioned to handle prior-migration and post-migration state.

FCO commented on August 28, 2024

But what about the code to handle the 2 versions of the database in the middle?

FCO commented on August 28, 2024

In my head, after that migration is done, you would commit a new version of your code removing the migration code...

Xliff commented on August 28, 2024

> But what about the code to handle the 2 versions of the database in the middle?

That would only be a factor during the migration. During this time, I would expect production code to be offline for the upgrade.

Do you know of any situations where this may not be the case?

Xliff commented on August 28, 2024

> In my head, after that migration is done, you would commit a new version of your code removing the migration code...

Ah. OK. In my head ( 😄 ) I would have such code already written in a branch on the development systems. So there would be the current production, with the old version of the models, and the development version with the next version of the models.

Having code that handles two versions at once may be useful, but can be confusing for devops, at the very least. It also invites bugs that can inadvertently corrupt the data.

FCO commented on August 28, 2024

Are you suggesting having downtime just to run a migration? We should find a way to run the migration without downtime...

FCO commented on August 28, 2024

another possible thing would be:

  • an is DEPRECATED-since(:$version, :$step) trait

I think it could help to find code that is still using old columns; it should be applicable to methods, and implicit on old-column attributes...

FCO commented on August 28, 2024

It should also accept a migration name instead of a migration version number...

FCO commented on August 28, 2024

The CLI to create a new migration after another one could remove every handle-migration and leave only the new-columns part of each...

jonathanstowe commented on August 28, 2024

Your example has reminded me of something I was going to add as a suggestion 👍

FCO commented on August 28, 2024

And what are you thinking about migrations?

jonathanstowe commented on August 28, 2024

Well I'm sort of neutral on the matter. From my experience I'm more familiar with the kind of process outlined by @moritz :)

FCO commented on August 28, 2024

Maybe

method save-password($new) {
   handle-migration v0.1.2,
      write-new => {
         $!hashed-password = hash-password $new
      },
      write-old => {
         $!plain-password = $new
      }
   ;
   self.^save
}
method auth($login, $pass is copy) {
   handle-migration v0.1.2,
      read-new => {
         $pass = hash-password $pass;
         my $user = ::?CLASS.^all.grep: { .hashed-password eq $pass && .username eq $login };
         die without $user;
         $user
      },
      read-old => {
         ::?CLASS.^all.grep: { .plain-password eq $pass && .username eq $login };
      }
   ;
}
  • before creating the new column:
    • read => read-old
    • write => write-old
  • between creating the new column and the new column being populated and NOT NULL:
    • read => read-new; on exception -> read-old
    • write => write-old, write-new
  • after the new column is populated:
    • read => read-new
    • write => write-new
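
A minimal sketch of a handle-migration implementing the rules above, assuming a %*RED-MIGRATION dynamic hash that maps a migration version to its current step (all names are the hypothetical ones from this thread):

enum MigrationStep <BEFORE-START CREATED-COLUMNS POPULATED-COLUMNS>;

sub handle-migration($version, :&read-new, :&read-old, :&write-new, :&write-old) {
    my $step = %*RED-MIGRATION{$version};
    if $step < CREATED-COLUMNS {                # before the new column exists
        .() with &write-old;
        return read-old() if &read-old;
    }
    elsif $step < POPULATED-COLUMNS {           # in between: double-write
        .() with &write-old;
        .() with &write-new;
        if &read-new {
            my \result = try read-new();        # read the new column first...
            return $! ?? read-old() !! result;  # ...fall back on exception
        }
    }
    else {                                      # new column populated and NOT NULL
        .() with &write-new;
        return read-new() if &read-new;
    }
}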

FCO commented on August 28, 2024
method auth($login, $pass) {
   handle-migration v0.1.2,
      read-new-return-defined => {
         ::?CLASS.^all.grep: { .hashed-password eq hash-password($pass) && .username eq $login };
      },
      read-old => {
         ::?CLASS.^all.grep: { .plain-password eq $pass && .username eq $login };
      }
   ;
}

Run the read-old block if the read-new-return-defined one doesn't return a defined value.
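
In Raku terms that fallback is just the defined-or operator; a sketch:

sub handle-migration($version, :&read-new-return-defined!, :&read-old!) {
    # run read-old only when the first block yields an undefined value
    read-new-return-defined() // read-old()
}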

FCO commented on August 28, 2024

When we have custom column types (#141), maybe we can do something like:

use Password;

model User {
   has Str       $.nick is column;
   has Password  $.hashed-password is column{ :migrated-from<plain-password> };
   has Bool      $.expired is column{ :default{ True } };
}

and the :migrated-from would change the .hashed-password method to something like:

multi method hashed-password(::?CLASS:U:) is rw {
   Proxy.new:
      FETCH => {
         handle-migration v0.1.2,
            read-new-return-defined => {
               ::?CLASS.^attributes.first(*.name.substr(2) eq "hashed-password").column
            },
            read-old => {
               ::?CLASS.^attributes.first(*.name.substr(2) eq "plain-password").column
            }
          ;
      },
      STORE => -> \val { die }
}

multi method hashed-password(::?CLASS:D:) is rw {
   my \attr = self;
   Proxy.new:
      FETCH => {
         handle-migration v0.1.2,
            read-new-return-defined => {
               ::?CLASS.^attributes.first(*.name.substr(2) eq "hashed-password").get_value: attr
            },
            read-old => {
               ::?CLASS.^attributes.first(*.name.substr(2) eq "plain-password").get_value: attr
            }
          ;
      },
      STORE => -> \val {
         handle-migration v0.1.2,
            write-new => {
               ::?CLASS.^attributes.first(*.name.substr(2) eq "hashed-password").set_value: attr, val
            },
            write-old => {
               ::?CLASS.^attributes.first(*.name.substr(2) eq "plain-password").set_value: attr, val
            }
      }
}

FCO commented on August 28, 2024

@jonathanstowe I'm planning to do the schema diff with these classes. Red CLI can already create the objects from a database, I'm just implementing the diff algorithm on it.

FCO commented on August 28, 2024

@Altreus I was thinking about your suggestion of a Schema integrated with migrations...

I'm thinking on something like:

1 - for my new project I create my models and my schema class, just listing my models, something like:

use Model1;
use Model2;
schema Schema1 {
    has Model1 $.model1;
    has Model2 $.model2;
    ...
}

2 - on my next change I'll run something like red create-migration and it will copy all your models' code and your Schema to a directory (for example: ./red-migrations/0.0.1), and when you want to use your models

use Schema1;
Schema.model1

the schema would ask the database what version it is on, get that version from the directory, and return the right model version.

And it can be helpful for multi-step migrations as well.

FCO commented on August 28, 2024

@jonathanstowe :

fernando@MBP-de-Fernando Red % sqlite3 bla.db
SQLite version 3.28.0 2019-04-15 14:49:49
Enter ".help" for usage hints.
sqlite> .d
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE bla(id serial);
INSERT INTO bla VALUES(1);
INSERT INTO bla VALUES(1);
INSERT INTO bla VALUES(1);
CREATE TABLE ble(
id serial primary key,
bla_id integer references bla(id),
bli integer unique
);
INSERT INTO ble VALUES(1,1,NULL);
INSERT INTO ble VALUES(2,1,'qwer');
COMMIT;
sqlite> .q
fernando@MBP-de-Fernando Red % perl6 -Ilib -MRed -e '
my $*RED-DB = database "SQLite", :database<./bla.db>;
model Bla { has Str $.id is column }

for Bla.^diff-from-db -> (:$key, :%value) { say $key; say %value.gist.indent: 4 }

'
col-attr
    id => [type => {+ => serial, - => varchar(255)} nullable => {+ => True, - => False}]

FCO commented on August 28, 2024

> (Quoting the Schema-plus-migrations idea from the earlier comment above.)

Maybe we can use this for different versions, and for a migration's intermediate steps (which should handle both column versions) we could have a role that the schema conditionally applies to the model depending on the schema version. And when copying to create a new migration, this role should not be copied. Maybe this role should have some way to access the original methods of both versions (the previous and the next one).
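
Conditionally mixing in such a role is cheap in Raku. A sketch, where User and hash-pass are from the example earlier in the thread and everything else is hypothetical:

role TransitionalAuth {
    # during the transition, try the new column and fall back to the old one
    method auth($login, $pass) {
        self.^all.grep({ .nick eq $login && .hashed-password eq hash-pass($pass) }).head
            // self.^all.grep({ .nick eq $login && .plain-password eq $pass }).head
    }
}

my $db-version = v1;    # hypothetical: read from the database
my \ActiveUser = $db-version before v2 ?? User but TransitionalAuth !! User;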

FCO commented on August 28, 2024

#7 (comment)
Maybe Schema should be a CompUnit::Repository
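
A minimal sketch of that direction: load a stored model snapshot through a CompUnit::Repository::FileSystem rooted at the migration directory (paths hypothetical):

my $snapshot = CompUnit::Repository::FileSystem.new:
    :prefix<red-migrations/0.0.1>;

my $comp-unit = $snapshot.need:
    CompUnit::DependencySpecification.new: :short-name<Model1>;

# the model type can then be looked up in $comp-unit.handle's symbol tables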

FCO commented on August 28, 2024

This is the first test of how a multi-step migration should work.

fernando@MBP-de-Fernando Red % sqlite3 bla.db 
SQLite version 3.28.0 2019-04-15 14:49:49
Enter ".help" for usage hints.
sqlite> .d
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE bla(id serial);
INSERT INTO bla VALUES(1);
INSERT INTO bla VALUES(1);
INSERT INTO bla VALUES(1);
CREATE TABLE ble(
id serial primary key,
bla_id integer references bla(id),
bli integer unique
);
INSERT INTO ble VALUES(1,1,NULL);
INSERT INTO ble VALUES(2,1,'qwer');
COMMIT;
sqlite> .q
fernando@MBP-de-Fernando Red % cat test/Ble.pm6 
use Red;

#| Table: ble
unit model Ble;

has UInt $.id is id;
has Int $.bla-id is referencing{
    :model<Bla>,
    :column<id>
};
has Int $.bli is column;

has $.bla is relationship(
    { .bla-id },
    :model<Ble>
);


fernando@MBP-de-Fernando Red % perl6 -Itest bin/red migration-plan --model=Ble --driver=SQLite --database=./bla.db
Step 1
    ALTER TABLE ble ADD bli integer
    ALTER TABLE ble ADD id integer
Step 2
    ALTER TABLE ble ALTER COLUMN bli integer NOT NULL
    ALTER TABLE ble ALTER COLUMN id integer NOT NULL PRIMARY KEY
Step 3
    ALTER TABLE ble DROP COLUMN bla_id
    ALTER TABLE ble DROP COLUMN bli

FCO commented on August 28, 2024

If I'm migrating from 2 columns (first-name and last-name) to a single column (complete-name), for example, and we've defined the migration .complete-name = "{ .first-name } { .last-name }", we could use that same code to change the object just before save: while we are still using the old columns, it also saves the new one; and the opposite once we start using the new value but still save the old one...
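
A sketch of that example, in the .^migration style proposed earlier in the thread (Person is a stand-in model name; none of this is a current Red API):

Person.^migration: :from<0.1>, {
    .complete-name = "{ .first-name } { .last-name }";
}

# which could translate to roughly:
#   UPDATE person SET complete_name = first_name || ' ' || last_name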

Altreus commented on August 28, 2024

I think the thing of highest importance in all of this is that the migrations are SQL files, in a directory. However you want to do your version management, that's up to the user; but it's of vital importance that the update is not calculated at runtime, because a bug in this could be insurmountable.

If you use the tool to output SQL, then let the developer edit the SQL if necessary, and then have the deployment system run the SQL, this is basically perfect: the developer has ultimate control over it, but the ORM has the ability to create that SQL in the first place, and in the best case no interference is required afterwards. However, it is important that interference is an option.

I've used DBIx::Class::DeploymentHandler in the past and what I like the most about it is that it just does it.

Here's how it basically works:

  • Read the class definitions to discover the database structure
  • Write out some YAML files that describe the DB in a storable way
  • Turn the YAML into SQL and call it a deployment

Later

  • Read the class definitions to discover the new database structure
  • Write out the YAML files again
  • Compare the structures of the YAMLs to create a diff
  • Output SQL to perform this diff

It stores the current version of the schema in its own table in the database. This does several things that I like:

  • Migration information is not in the application code
  • Multiple databases can be supported because the YAML is an interim serialisation format
  • The migrations are SQL files that you can read and run
  • You can edit or add SQL files for each migration
  • The database knows what version it is on

DBIx::Class::DeploymentHandler does have a few warts and a questionable architecture but that's because it's trying to do too much, in my opinion. It could be simplified, especially since we're talking about having it be part of Red, but I honestly think the most elegant solution is the simplest: put SQL files in your project.

FCO commented on August 28, 2024

I like the option of Red generating the SQL and the developer being able to review/edit it. But what is the advantage of the YAML over saving it directly in Red's syntax?

Altreus commented on August 28, 2024

There is no specific advantage to YAML. They use it because it is convenient. I thought we could avoid this step, but I convinced myself the step is valuable!

The YAML is used in order to accurately diff the two versions of the schema. They convert the schema to an internal format which makes it easier to diff.

The YAML is simply a serialisation of this data structure. This helps, because it means you can jump directly to this format for the previous version, instead of trying to figure it out from the code. Note that the previous version of the schema probably only exists in git. Therefore, they put it in the project directory for later!

Any format will do.

FCO commented on August 28, 2024

In Red we can use objects of these types. They were made with that content in mind.

Altreus commented on August 28, 2024

That would be super.

Here's my proposal then.

  • Version 1 (or zero, doesn't matter to me) is a deployment. We create the SQL file for the whole database and serialise all the "types" that you mentioned
  • Version 2 and onwards is a migration. We read the previous serialised types, compare to the current types, and output SQL and serialise the new types.
  • When a migration is performed:
    • We read the version from a special table. If it's not there we deploy, and put 1 in the version table. If that goes wrong we tell the user off.
    • If it is there we start with that version and run each successive version's SQL until we run out.

How is this sounding so far?
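
A rough sketch of that runner, assuming migrations live in migrations/<n>/up.sql and the version sits in a one-row red_schema_version table that migrations/1/up.sql itself creates and seeds (all names are assumptions):

use DBIish;

my $dbh = DBIish.connect: "SQLite", :database<app.db>;

# version 0 means "never deployed": the version table doesn't exist yet
my $current = do with try $dbh.execute("SELECT version FROM red_schema_version") {
    .row[0] // 0
} else { 0 };

for $current ^.. Inf -> $v {
    my $file = "migrations/$v/up.sql".IO;
    last unless $file.e;                        # ran out of migrations: done
    # naive statement splitting; a real runner needs something smarter
    $dbh.execute($_) for $file.slurp.split(/";" \n/).grep(*.trim);
    $dbh.prepare("UPDATE red_schema_version SET version = ?").execute($v)
        if $v > 1;                              # 1/up.sql seeds the table itself
}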

FCO commented on August 28, 2024

Why store the actual version of the database schema? Why not just compare your classes with the current database schema?

Altreus commented on August 28, 2024

You need to be able to run all the migrations between version X and the latest, because migrations might not simply be to update the schema; they might have data migrations in between as well. Therefore, you need to be able to discover what X is.

You might as well store a serialised version of the data structures used for comparing, essentially as a cache and a historical record.

FCO commented on August 28, 2024

I just thought of a way that could work that I think would be perfect for my way of working. I'm not sure if it would be good for everyone, but I'll describe it here; please let me know what you think. We may generalise it...

The user creates the models the way they like, then runs the command:

red migrate update # may need to define what models to use

it will update the DB to reflect the models, read the DB after the changes, then generate and save Red::Migration::* objects in a migrations.sqlite file.

after that, when the user is sure that's how the DB should look, they can run:

red migrate prepare

it will create 2 new files: migrations/1/up.sql with the SQL used to create the tables, and migrations/1/down.sql with the corresponding drop tables.

then on the target system, the user can run

red migrate apply

it will create the DB using the SQL in migrations/1/up.sql. It will also create a table to store the DB version.

When the user wants to change something, they edit the models and then run:

red migrate update

it will validate that the DB is still in sync with the Red::Migration::* objects stored in migrations.sqlite; if not, it breaks. If everything is OK, it will calculate the diff between the DB and the models, update the DB, then read the DB to generate the Red::Migration::* objects and save them in migrations.sqlite.

The user can make new changes and run red migrate update as many times as needed; it will always validate the database against the last stored version in migrations.sqlite and update it (and the DB itself).

When the user is happy, they can run:

red migrate prepare

that will get the diff between the last version in migrations.sqlite and the previous one, generate the file migrations/2/up.sql with the SQL applying that diff, and migrations/2/down.sql with the SQL applying the reverse diff (from the last version back to the previous one).

Running on the target system:

red migration apply

Will run migrations/2/up.sql and increment the version number; and running:

red migration unapply

Would run migrations/2/down.sql and decrement the version.

it should also be possible to run:

red migrate update -e 'MyModel.^migration: { .new-column = .old-col1 ~ " " ~ .old-col2 }'

that would do the diff and also include on SQL something like:

update my_model set new_column = old_col1 || " " || old_col2; -- probably doing that on batches, but you got it...

it should also be possible to run something like:

red migrate update --sql 'ALTER TABLE my_model ...'

and that would apply the SQL on the local database and then read it to create the Red::Migration::* objects on migrations.sqlite.

After a red migrate update the user can run red migrate downgrade to return the DB to the previous state.

After a red migrate prepare, the user can modify the generated SQL and call red migrate update-from-prepare that will grab the modified SQL, and run red migrate downgrade && red migrate update --sql-file <modified sql>.

The generated SQL files may need a SHA1 checksum to detect whether they were changed (it could be stored in migrations.sqlite).

The dir where we store the SQL should also differ by the driver we are using...

When validating whether the DB is still in sync, if it's not, it could/should be possible to generate an intermediate version (if an option is passed, something like --ignore-async) to make it work.

When running update, prepare and apply, if it detects that it's going to lose data, it should break. To run anyway, it should require a very verbose option like --i-am-ok-on-losing-data.

Please let me know if there are any comments/suggestions...

FCO commented on August 28, 2024

of course all command/sub-command names can be changed...

FCO commented on August 28, 2024

As Voldenet suggested on the #raku IRC channel, this can be used by many different people collaborating through git... maybe, instead of using ints for the migration paths, we could use UUIDs to avoid merge conflicts, but we would then need another way to decide the order.

voldenet commented on August 28, 2024

I think there should be a red migrate prepare command that would generate the commands needed to migrate the current db, in a high-level language:

migration v2 {
   method created-on { "2024-03-07T23:42:03.430480Z" } # used for ordering migrations
   # method using a high-level language of operations to perform
   method update {
       .remove-column(:table<user>, :name<type>);
       .create-table({
            .set-name('admin');
            # Guid will get converted into uniqueidentifier, char(36) or something supported by the current db
            .add-column(:type(Guid), :name<userId>);
       });
   }
   # method describing the changes to the data model; this lets Red assume what this migration does
   # even if the performed operations are different
   method Red-update {
       .remove-column(:table<user>, :name<type>);
       .create-table({
            .set-name('admin');
            .add-column(:type(Guid), :name<userId>);
       });
   }
   method revert { ... }
   method Red-revert { ... }
}

After the migration is generated and applied to the current database, the user can review this high-level-language change, reorder it, and replace the scripting (in the example, users stop having a "type" indicating admins and instead get moved into a table containing admins):

migration v2 {
   method created-on { "2024-03-07T23:42:03.430480Z" } # used for ordering migrations
   method update {
       # user can reorder schema modifications
       .create-table({
            .set-name('admin');
            # user can choose the data type explicitly, which is useful for floats of different precision
            .add-column(:type("uniqueidentifier"), :name<userId>);
       });
       # user can add sql code to move admins to the new table
       .sql('insert into admin(userId) select id from user where type = 1');
       # user can choose to risk not removing the column,
       # in which case Red would assume that this column is considered as removed
       # .remove-column(:table<user>, :name<type>);
   }
   # this mustn't change
   method Red-update {
       .remove-column(:table<user>, :name<type>);
       .create-table({
            .set-name('admin');
            .add-column(:type(Guid), :name<userId>);
       });
   }
   method revert { ... }
   method Red-revert { ... }
}

With the above code, red migrate generate-script should generate the following (example for MSSQL):

CREATE TABLE admin ([userId] uniqueidentifier);
insert into admin(userId) select id from user where type = 1;
INSERT INTO [$migrations]([Name], [Date]) VALUES ('v2', SYSDATETIME());

FCO commented on August 28, 2024

I'm not sure I got the Red-update. We will have the last representation of the models stored in migrations.sqlite and the current one as a .rakumod file (the code itself). Do we also need to store the steps to change the model? Why not just change the file?

voldenet commented on August 28, 2024

A few notes regarding my previous post:

  • the migration should probably be named by the user; then the path could be /2/some-name.migration
  • created-on could be a version number taken from the folder name - it would force users to always resolve conflicts manually (by moving migrations to another path by hand)
  • the revert operation may not always be possible; an is unsupported trait could be useful
  • some migrations may require additional computation:
    • the db has https://rest-api/entities/42 and wants to change the format to an identifier taken from that url for every entity
    • the db stores json in nvarchar(*) fields and the database has no support for json

voldenet commented on August 28, 2024

Regarding Red-*:
This lets you ask for the model at any version, which makes any tooling, bugfixing and analysis easier. I could think of keeping the file with the migration code separate from the calculated model changes (/2/some-name.rakumod, /2/some-name.up, /2/some-name.down), so this doesn't get in the way too much.

The current-model.rakumod approach would work too, but there would be no way to tell model versions in the past (unless all the versions were stored in git). I'd say that instead of sqlite, plaintext could be used to keep merge conflict resolutions easier.

FCO commented on August 28, 2024

if we are going down that path, I was thinking about that "class"... I am thinking of something like this:

migration CreateAdminDropTypeFrmUser:ver<2> {
   method up {
       User.^migration: *.drop-column("type");
       Schema.^migration: {                                         # There is no model for that, so we need to use the schema
            .create-table("admin", { userId => %( :type<Guid> ) });
       };
   }

   method down {
      User.^migration: *.create-column("type", :type<text>);        # maybe, when deleting on the local DB,
                                                                    # we could store the data somewhere in case we
                                                                    # want to undo it (this down method)

      Schema.^migration: *.drop-table: "admin";                     # again, no model, so we need to use the schema
   }
}

FCO commented on August 28, 2024

> I'd say that instead of sqlite, plaintext could be used to keep merge conflict resolutions easier.

That makes sense

voldenet commented on August 28, 2024

The last example is nice; it would require keeping versioned entities (/migrations/entities/User/1/entity.raku), grouping them for a migration:

class Schema:ver<2> does RedSchema { has @.models = ("Model1:ver<1>", "Model2:ver<3>", "Model3:ver<5>") }

and then having use RedSchema inside the target migration. Separate per-entity versioning would reduce the need to duplicate unchanged entities in migrations.

FCO commented on August 28, 2024

Each migration dir may have a Schema.rakumod file, which will have:

use RedSchema;

red-schema "Model1:ver<1>", "Model2:ver<3>", "Model3:ver<5>";

and in the migration file we would use it; it would import those models at those versions, plus a \Schema with the schema object.

FCO commented on August 28, 2024

No, it would need to be something like:

use RedSchema ("Model1:ver<1>", "Model2:ver<3>", "Model3:ver<5>");

FCO commented on August 28, 2024

I've been wondering... when updating the local DB, would it make sense to save a dump of the DB, to be able to restore it in case the user wants to revert?

FCO commented on August 28, 2024

We will probably need our own version of META6's provides. A file can contain multiple models. And maybe we shouldn't use the model version as the path, to avoid merge conflicts...
Also, would it make sense to make it possible to use resources for that, in case the application is installed as a Raku module?

FCO commented on August 28, 2024

Should we have a different module called Red::Migration instead of doing that inside Red?

FCO commented on August 28, 2024

We'll probably need a configuration file... should we use Configuration?

FCO commented on August 28, 2024

Not sure if we are going to use Configuration, but I'm just thinking out loud here about what we will probably need:

class MigrationConfig {
    has UInt  $.current-version = self!find-current-version;
    has UInt  %.model-current-version;
    has IO()  $.base-path where *.add("META6.json") ~~ :f = $*CWD;
    has IO()  $.migration-base-path = $!base-path.add: "migrations";
    has IO()  $.dump-path = $!base-path.add: ".db-dumps";
    has IO()  $.model-storage-path = $!migration-base-path.add: "models";
    has IO()  $.version-storage-path = $!migration-base-path.add: "versions";
    has Str() $.sql-subdir = "sql";
    has Str() @.drivers = <SQLite Pg>;

    has %!versions-cache = self!compute-version-cache;
    has %!models-cache = self!compute-models-cache;


    method !compute-version-cache {...}
    method !compute-models-cache {...}
    method !find-current-version {...}
    method !random-string {...}

    multi method version-path($version) {
        $!version-storage-path.add: ...
    }

    multi method version-path {
        $.version-path: $!current-version
    }

    method new-model-version(Str $model-name) {
        ++%!model-current-version{$model-name}
    }

    multi method model-path(Str() $model-name) {
        $.model-path: $model-name, %!model-current-version{$model-name}
    }

    multi method model-path(Str() $model-name, UInt $version) {
        $.model-version-path: $model-name, $version
    }

    multi method model-version-path(Str() $model-name, UInt() $version) {
        $!model-storage-path.add: %!models-cache{$model-name}{$version}
    }

    multi method model-version-path(Str() $model-name-version) {
        if $model-name-version ~~ /^$<name>=[<[\w:]>*\w] ":" ["ver<" ~ ">" $<ver>=[\d+]|\w+ "<" ~ ">" \w+]* % ":"/ {
            $.model-version-path: $<name>, $<ver> // 0
        }
    }

    multi method migration-sql-path(Str $driver where @!drivers.one) {
        $.version-path.add($!sql-subdir).add: $driver
    }

    multi method migration-sql-path(Str $driver where @!drivers.one, UInt $version) {
        $.version-path($version).add($!sql-subdir).add: $driver
    }
}
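
Hypothetical usage, once the stubbed private methods are filled in (note the base-path constraint requires a META6.json in the project root):

my $config = MigrationConfig.new: :base-path($*CWD);
say $config.version-path;                    # path of the current version
say $config.migration-sql-path: "SQLite";    # where the SQLite SQL files live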

arunvickram commented on August 28, 2024

We could use Configuration for this. Based on what I'm looking at, I like where this is going.

FCO commented on August 28, 2024

I had a bit of time and started playing with that idea... I'm still only planning what to do... but here is how it looks now:

[image]

on branch https://github.com/FCO/Red/tree/red-migration

FCO commented on August 28, 2024
[image] [image]

FCO commented on August 28, 2024

The way I'm seeing it now is:

  • update: updates your local database based on your current models
  • prepare: creates the new SQL needed to make your old schema become your current local DB schema
  • apply: applies those SQL files to your production DB
