
go-sql-spanner's Introduction

go-sql-spanner

go.dev reference

Google Cloud Spanner driver for Go's database/sql package.

import _ "github.com/googleapis/go-sql-spanner"

db, err := sql.Open("spanner", "projects/PROJECT/instances/INSTANCE/databases/DATABASE")
if err != nil {
    log.Fatal(err)
}

// Print tweets with more than 500 likes.
rows, err := db.QueryContext(ctx, "SELECT id, text FROM tweets WHERE likes > @likes", 500)
if err != nil {
    log.Fatal(err)
}
defer rows.Close()

var (
    id   int64
    text string
)
for rows.Next() {
    if err := rows.Scan(&id, &text); err != nil {
        log.Fatal(err)
    }
    fmt.Println(id, text)
}

Statements

Statement parameters follow the official Google Cloud Spanner Go client style (named parameters in the form @name) as well as positional parameters.

Using positional parameters

db.QueryContext(ctx, "SELECT id, text FROM tweets WHERE likes > ?", 500)

db.ExecContext(ctx, "INSERT INTO tweets (id, text, rts) VALUES (?, ?, ?)", id, text, 10000)

Using named parameters

db.ExecContext(ctx, "DELETE FROM tweets WHERE id = @id", 14544498215374)

Transactions

  • Read-write transactions always use the strongest isolation level and ignore any user-specified isolation level.
  • Read-only transactions use strong reads by default. A read-only transaction must be ended by calling either Commit or Rollback; calling either of these methods ends the current read-only transaction and returns the session that was used to the session pool.
tx, err := db.BeginTx(ctx, &sql.TxOptions{}) // Read-write transaction.

tx, err := db.BeginTx(ctx, &sql.TxOptions{
    ReadOnly: true, // Read-only transaction using strong reads.
})

conn, _ := db.Conn(ctx)
_, _ := conn.ExecContext(ctx, "SET READ_ONLY_STALENESS='EXACT_STALENESS 10s'")
tx, err := conn.BeginTx(ctx, &sql.TxOptions{
    ReadOnly: true, // Read-only transaction using a 10 second exact staleness.
})

DDL Statements

DDL statements are not supported in transactions, due to a Cloud Spanner restriction. Instead, run them on a connection without an active transaction:

db.ExecContext(ctx, "CREATE TABLE ...")

Multiple DDL statements can be sent in one batch to Cloud Spanner by defining a DDL batch:

conn, _ := db.Conn(ctx)
_, _ := conn.ExecContext(ctx, "START BATCH DDL")
_, _ = conn.ExecContext(ctx, "CREATE TABLE Singers (SingerId INT64, Name STRING(MAX)) PRIMARY KEY (SingerId)")
_, _ = conn.ExecContext(ctx, "CREATE INDEX Idx_Singers_Name ON Singers (Name)")
// Executing `RUN BATCH` will run the previous DDL statements as one batch.
_, _ := conn.ExecContext(ctx, "RUN BATCH")

See also the batch DDL example.

Examples

The examples directory contains standalone code samples that show how to use common features of Cloud Spanner and/or the database/sql package. Each standalone code sample can be executed without any prior setup, as long as Docker is installed on your local system.

Raw Connection / Specific Cloud Spanner Features

Use the Conn.Raw method to get access to a Cloud Spanner specific connection instance. This instance can be used to access Cloud Spanner specific features and settings, such as mutations, read-only staleness settings and commit timestamps.

conn, _ := db.Conn(ctx)
_ = conn.Raw(func(driverConn interface{}) error {
    spannerConn, ok := driverConn.(spannerdriver.SpannerConn)
    if !ok {
        return fmt.Errorf("unexpected driver connection %v, expected SpannerConn", driverConn)
    }
    // Use the `SpannerConn` interface to set specific Cloud Spanner settings or access
    // specific Cloud Spanner features.

    // Example: Set and get the current read-only staleness of the connection.
    _ = spannerConn.SetReadOnlyStaleness(spanner.ExactStaleness(10 * time.Second))
    _ = spannerConn.ReadOnlyStaleness()

    return nil
})

See also the examples directory for further code samples.

Emulator

See the Google Cloud Spanner Emulator support to learn how to start the emulator. Once the emulator is started and the SPANNER_EMULATOR_HOST environment variable is set, the driver should just work.

$ gcloud beta emulators spanner start
$ export SPANNER_EMULATOR_HOST=localhost:9010
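
The following is a minimal sketch of connecting to the emulator from Go (the project, instance, and database names are placeholders; the environment variable can also simply be exported in the shell as shown above):

// The Spanner client used by the driver picks up SPANNER_EMULATOR_HOST from the environment.
os.Setenv("SPANNER_EMULATOR_HOST", "localhost:9010")
db, err := sql.Open("spanner", "projects/test-project/instances/test-instance/databases/test-db")
if err != nil {
    log.Fatal(err)
}
defer db.Close()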

Spanner PostgreSQL Interface

This driver works specifically with the Spanner GoogleSQL dialect. For the Spanner PostgreSQL dialect, any PostgreSQL driver that implements the database/sql interface can be used in combination with PGAdapter.

For example, the pgx driver can be used in combination with PGAdapter: https://github.com/GoogleCloudPlatform/pgadapter/blob/postgresql-dialect/docs/pgx.md

Troubleshooting

The driver will retry any Aborted error that is returned by Cloud Spanner during a read/write transaction. If the driver detects that the data used by the transaction was changed by another transaction between the initial attempt and the retry attempt, the Aborted error is propagated to the client application as a spannerdriver.ErrAbortedDueToConcurrentModification error.
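
A minimal sketch of handling this error in application code; runBusinessTransaction is a hypothetical placeholder for the application's own transaction logic:

for {
    err := runBusinessTransaction(ctx, db)
    if errors.Is(err, spannerdriver.ErrAbortedDueToConcurrentModification) {
        // The internal retry saw different data than the first attempt;
        // re-run the whole business transaction.
        continue
    }
    if err != nil {
        log.Fatal(err)
    }
    break
}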

Our libraries are compatible with at least the three most recent, major Go releases. They are currently compatible with:

  • Go 1.20
  • Go 1.19
  • Go 1.18
  • Go 1.17

Authorization

By default, each API will use Google Application Default Credentials for authorization credentials used in calling the API endpoints. This will allow your application to run in many environments without requiring explicit configuration.

Contributing

Contributions are welcome. Please see the CONTRIBUTING document for details.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. See Contributor Code of Conduct for more information.

go-sql-spanner's People

Contributors

dependabot[bot], diptanshumittal, google-cloud-policy-bot[bot], hengfengli, larkee, olavloite, rahul2393, rakyll, release-please[bot], renovate-bot, sagebee, simamumu, skuruppu, testwill, tmc, zchee


go-sql-spanner's Issues

How to register Spanner Driver to sqltrace?

Thanks for stopping by to ask us a question! Please make sure to include:

import (
  spanner ""github.com/googleapis/go-sql-spanner""
  sqltrace "gopkg.in/DataDog/dd-trace-go.v1/contrib/database/sql"
  sqlxtrace "gopkg.in/DataDog/dd-trace-go.v1/contrib/jmoiron/sqlx"
)

type DB struct {
...
}

func NewDB(spannerDB string) (DB, error) {
  // i see that connectors must be initialized - https://github.com/googleapis/go-sql-spanner/blob/main/driver.go#L61
  // but i cannot instantiate this because connectors is private to the module 
  sqltrace.Register("spanner", &spanner.Driver{})
  db, err := sqlxtrace.Connect("spanner", spannerDB)
  if err != nil {
    ...
  }
  ...
}
  • Any error messages you're getting
panic: assignment to entry in nil map

goroutine 1 [running]:
github.com/googleapis/go-sql-spanner.newConnector(0xc000256870, {0xc00005a00d, 0x4b})
	external/com_github_googleapis_go_sql_spanner/driver.go:220 +0x9f2
github.com/googleapis/go-sql-spanner.(*Driver).OpenConnector(0x39d948?, {0xc00005a00d?, 0xc000256870?})
	external/com_github_googleapis_go_sql_spanner/driver.go:83 +0x1e
gopkg.in/DataDog/dd-trace-go.v1/contrib/database/sql.Open({0x16b0b06, 0x7}, {0xc00005a00d, 0x4b}, {0x0, 0x0, 0x0})
	external/in_gopkg_datadog_dd_trace_go_v1/contrib/database/sql/sql.go:228 +0xc7
gopkg.in/DataDog/dd-trace-go.v1/contrib/jmoiron/sqlx.Open({0x16b0b06, 0x7}, {0xc00005a00d?, 0xc0001188c0?}, {0x0?, 0xc0001188c0?, 0x138c3e0?})
	external/in_gopkg_datadog_dd_trace_go_v1/contrib/jmoiron/sqlx/sql.go:24 +0x31
gopkg.in/DataDog/dd-trace-go.v1/contrib/jmoiron/sqlx.Connect({0x16b0b06?, 0x7?}, {0xc00005a00d?, 0xc000256870?})
	external/in_gopkg_datadog_dd_trace_go_v1/contrib/jmoiron/sqlx/sql.go:46 +0x2f
...

PLEASE READ: If you have a support contract with Google, please create an issue in the support console instead of filing on GitHub. This will ensure a timely response.

panic on DSN that does not match regexp

If, for example, the DSN has project instead of projects (such as "project/p/instances/i/databases/d") this library panics as it tries to access the matches in the extractConnectorConfig() function.
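
A hypothetical guard that client code could use to validate the DSN shape before calling sql.Open, until the driver returns an error instead of panicking (the regular expression is only an illustration, not the driver's actual pattern, and ignores optional connection properties):

var dsnPattern = regexp.MustCompile(`^projects/[^/]+/instances/[^/]+/databases/[^/]+$`)

func openSpanner(dsn string) (*sql.DB, error) {
    if !dsnPattern.MatchString(dsn) {
        return nil, fmt.Errorf("invalid Spanner DSN: %q", dsn)
    }
    return sql.Open("spanner", dsn)
}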

Dependency Dashboard

This issue provides visibility into Renovate updates and their statuses. Learn more


Convert a value using the `Value()` method of my defined type

Since I don't know whether this is a bug or intended behavior, I am creating this issue as a feature request.

Is your feature request related to a problem? Please describe.

Even though my defined type has a Value() method, I cannot insert data.

Here is a small sample of my problem.

  • I have some time data in my code of type time.Time.
  • However, in Spanner the data is stored as an INT64 unixtime value, so I need to convert it.
  • I have implemented a Value() method, but the current code does not convert the value, so it is not inserted correctly.
type unixtime struct {
	Time time.Time `spanner:"unixtime"`
}

func (t *unixtime) Scan(value interface{}) error {
	sec, ok := value.(int64)
	if !ok {
		return errors.New("Invalid type")
	}
	*t = unixtime{Time: time.Unix(sec, 0)}
	return nil
}

func (t unixtime) Value() (driver.Value, error) {
	return t.Time.Unix(), nil
}

func testMytype(projectId, instanceId, databaseId string) error {
	ctx := context.Background()
	db, err := sql.Open("spanner", fmt.Sprintf("projects/%s/instances/%s/databases/%s", projectId, instanceId, databaseId))
	if err != nil {
		return fmt.Errorf("failed to open database connection: %v\n", err)
	}
	defer db.Close()

	// Insert data
	ut := unixtime{Time: time.Date(2022, 4, 1, 12, 0, 0, 0, time.Local)}
	_, err = db.ExecContext(
		ctx, "INSERT INTO test_table (unixtime) VALUES (?)", t,
	)
	return err
}

The current code returns the following error in this example.

spanner: code = "InvalidArgument", desc = "Value has type STRUCT<unixtime TIMESTAMP> which cannot be inserted into column unixtime, which has type INT64 [at 1:43]\\nINSERT INTO test_table (unixtime) VALUES (@p1)

Such examples are not limited to unixtime; the same can occur with any of my defined types.

Describe the solution you'd like

This can be solved by rewriting as follows

go-sql-spanner/driver.go, lines 676 to 679 at commit 27f647b:

	if valuer, ok := value.Value.(driver.Valuer); ok {
		_, err := valuer.Value()
		return err
	}

to

	if valuer, ok := value.Value.(driver.Valuer); ok {
		sv, err := valuer.Value()
		if err != nil {
			return err
		}
		value.Value = sv
	}

(Implemented in #139)

  • Takes the return value of valuer.Value() and assigns it to value.Value.
  • Does not return early, but continues with the following processing (the type check of value.Value).

If the Value() method is implemented and the return value is of an allowed type, the new code can convert my defined type.

When I try the sample again with the new code, the values are correctly inserted into Spanner.


Describe alternatives you've considered

The unixtime example is simple enough that it could be handled with a conversion in the application code.
However, for more complex cases, the Value() method is the appropriate solution.

Support positional parameters

Cloud Spanner expects parameters to be supplied as named parameters in the format @name. Many examples and frameworks use ? as a parameter placeholder. As the go-sql-spanner driver already parses the SQL strings in search of parameters, it could easily support positional parameters in addition to named parameters by automatically renaming positional parameters to named parameters.
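
A minimal sketch of that idea (not the driver's actual implementation); it naively rewrites every ? into @p1, @p2, ... and does not handle ? characters inside string literals or comments:

func rewritePositionalParams(query string) string {
    var b strings.Builder
    n := 0
    for _, r := range query {
        if r == '?' {
            n++
            fmt.Fprintf(&b, "@p%d", n)
        } else {
            b.WriteRune(r)
        }
    }
    return b.String()
}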

Parallelize integration tests

The integration tests should all run in parallel, unless a specific test for whatever reason cannot run in parallel.
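
A minimal sketch of opting an individual test into parallel execution with the standard testing package (the test name and body are placeholders):

func TestSingersQuery(t *testing.T) {
    t.Parallel() // allow this test to run concurrently with other parallel tests
    // ... test body ...
}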

CheckNamedValue: *int is not target?

  • What you're trying to do

When I developed my application using go-sql-spanner, I found that *int is not included in CheckNamedValue.
I looked at the code of CheckNamedValue (https://github.com/googleapis/go-sql-spanner/blob/v1.0.0/driver.go#L692-L693).
This function checks the type of the SQL args. If an arg has an unsupported type, an error is returned.

	switch t := value.Value.(type) {
	default:
		// Default is to fail, unless it is one of the following supported types.
		return spanner.ToSpannerError(status.Errorf(codes.InvalidArgument, "unsupported value type: %v", t))
	case nil:
	case sql.NullInt64:
	case sql.NullTime:
	case sql.NullString:
	case sql.NullFloat64:
	case sql.NullBool:
	case sql.NullInt32:
	case string:
	case spanner.NullString:
	case []string:
	case []spanner.NullString:
	case *string:
	case []*string:
	case []byte:
	case [][]byte:
	case int:
	case []int:
	case int64:
	case []int64:
	case spanner.NullInt64:
	case []spanner.NullInt64:
	case *int64:
	case []*int64:
	case bool:
	case []bool:
	case spanner.NullBool:
	case []spanner.NullBool:
	case *bool:
	case []*bool:

It looks like both a type and its pointer (e.g. string & *string, int64 & *int64, bool & *bool) are supported.
However, for int only int and []int are supported, not *int.
Is this what you expected? Or is it a bug? I would like to ask about this.
Or is this because Cloud Spanner only supports the following data types?
https://cloud.google.com/spanner/docs/reference/standard-sql/data-types#storage_size_for_data_types
(Spanner supports int64, but does not support int itself.)

If this is a bug, I can send a PR to fix this.

  • What code you've already tried

My test code is as follows.

type User struct {
	Id          string    `json:"id"`
	UserId      string    `json:"user_id"`
	CharacterId int       `json:"character_id"`
}

type userRepo struct {
	UserDB      *sql.DB
	Id          string    `json:"id"`
	UserId      string    `json:"user_id"`
	CharacterId int       `json:"character_id"`
}

func (u *userRepo) InsertUser(ctx context.Context, user *User) error {
	stmt, err := u.UserDB.PrepareContext(ctx, "INSERT INTO users (id, user_id, character_id) VALUES (@id, @user_id, @character_id)")
	if err != nil {
		logger.Error("db.Prepare insert UserCharacter failed", zap.Error(err))
		return err
	}
	defer stmt.Close()
	// here, pass the pointer(*int, not int)
	_, err = stmt.ExecContext(ctx, &user.Id, &user.UserId, &user.CharacterId)
	if err != nil {
		logger.Error("Insert UserCharacter failed", zap.Error(err))
		return err
	}
	return nil
}

If I rewrite the code from stmt.ExecContext(ctx, &user.Id, &user.UserId, &user.CharacterId) to stmt.ExecContext(ctx, user.Id, user.UserId, user.CharacterId), the error doesn't happen.

  • Any error messages you're getting

Insert User failed: sql: converting argument $3 type: spanner: code = "InvalidArgument", desc = "unsupported value type: 0x140005bee60"

0x140005bee60 is the address of the value 1 (= &user.CharacterId).
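
Until *int is supported by the driver (if that is the intended behavior), a hypothetical workaround is to normalize such arguments in application code before passing them to ExecContext; the helper below is only an illustration, not part of the driver:

func normalizeIntPtr(v interface{}) interface{} {
    if p, ok := v.(*int); ok {
        if p == nil {
            return nil
        }
        return int64(*p) // Spanner stores integers as INT64
    }
    return v
}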

Standardize error messages and codes

The sql driver currently returns different types of errors. It would be preferable to return the same type of errors from the sql driver as from the client library.

Discussion: Allow DDL statements in transactions?

The initial (and current) implementation of the go-sql-spanner driver allows DDL statements to be executed using a transaction. That is however slightly confusing, as:

  1. The DDL statement is not really executed as part of the transaction, as Cloud Spanner does not support that. This means that the DDL statement will not be affected by any Commit or Rollback of the transaction, and the changes that are made by a DDL statement in a transaction are directly visible to all other transactions after it has been executed.
  2. This (currently) also means that DDL statements can be executed as part of a read-only transaction.

My personal opinion is that we should disallow both of the above, as the statement is not part of the transaction. Accepting the statement and executing it outside of the transaction will probably cause more confusion than returning an error in such a case.
Note that a user can always execute a DDL statement when a transaction is active, but they will need to do so directly on the database instead of using a transaction.

query := "CREATE TABLE Singers (SingerId INT64, FirstName STRING(100), LastName STRING(100)) PRIMARY KEY (SingerId)"

// Start a (read/write) transaction.
tx, err := db.BeginTx(context.Background(), &sql.TxOptions{})
if err != nil {
  t.Fatal(err)
}

// This currently works but effectively executes the statement outside of the transaction.
// The question is whether this should always return an error.
res, err := tx.ExecContext(context.Background(), query)

// It would always be possible to execute the following statement, also while the above transaction is
// still active. (Note the difference: The above statement uses `tx`, the below statement uses `db`)
res, err = db.ExecContext(context.Background(), query)

What are your thoughts on this?

Benchmarks

The performance of common operations should be benchmarked and checked against the performance of the same operations using the normal Spanner client.

Allow adding session pool configuration to the connection string

It should be possible to set the min/max number of sessions in the pool for a connector via the connection string, so that client applications can create a consistent configuration for both the maximum number of sessions in the Spanner session pool and the maximum number of connections in the database/sql connection pool.
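
A hypothetical example of what such a connection string could look like; the property names minSessions and maxSessions are illustrative and not confirmed by this issue:

db, err := sql.Open("spanner",
    "projects/PROJECT/instances/INSTANCE/databases/DATABASE?minSessions=100;maxSessions=400")
if err != nil {
    log.Fatal(err)
}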

Retry aborted transactions

Aborted read/write transactions should be retried using a checksum-based retry algorithm. A read/write transaction must keep track of all the results it sees during the transaction. That is:

  • The statements that were executed
  • Update counts for DML statements
  • A checksum of all results that have been returned by queries
  • All errors that have occurred during the execution of a statement

If the transaction is aborted by Cloud Spanner, the driver should internally retry the transaction on a new transaction using the same statements. If all statements return exactly the same results, the transaction should be allowed to continue execution. If the results differ from the initial attempt, an AbortedDueToConcurrentModification error should be propagated to the client application.
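
A minimal sketch of the bookkeeping such a retry could keep for each transaction; all names are illustrative and this is not the driver's actual implementation:

type retryState struct {
    statements   []string    // statements executed so far, in order
    updateCounts []int64     // update counts returned by DML statements
    checksum     hash.Hash64 // running checksum over all query results seen
    errs         []error     // errors returned by individual statements
}

func newRetryState() *retryState {
    return &retryState{checksum: fnv.New64a()}
}

func (s *retryState) recordRow(row []byte) {
    // A retry must reproduce exactly the same checksum; otherwise the data
    // seen by the transaction has changed and the retry must fail with
    // ErrAbortedDueToConcurrentModification.
    s.checksum.Write(row)
}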

[Policy Bot] found one or more issues with this repository.


  • Default branch is 'main'
  • Branch protection is enabled
  • Renovate bot is enabled
  • Merge commits disabled
  • There is a CODEOWNERS file
  • There is a valid LICENSE.md
  • There is a CODE_OF_CONDUCT.md
  • There is a CONTRIBUTING.md
  • There is a SECURITY.md

Use StatementBasedTransactions

The Spanner Go client now supports StatementBasedTransactions. This removes the need to pass a callback function for the transaction, which is how the go-sql-spanner driver currently executes read/write transactions.
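
A minimal sketch of what using the statement-based transaction API of the Spanner Go client looks like, assuming an existing *spanner.Client named client; the SQL statement is a placeholder and error handling is abbreviated:

tx, err := spanner.NewReadWriteStmtBasedTransaction(ctx, client)
if err != nil {
    log.Fatal(err)
}
if _, err := tx.Update(ctx, spanner.Statement{SQL: "UPDATE Singers SET Name = 'x' WHERE SingerId = 1"}); err != nil {
    tx.Rollback(ctx)
    log.Fatal(err)
}
if _, err := tx.Commit(ctx); err != nil {
    log.Fatal(err)
}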

TestMinSessions failure

2021/11/02 05:24:47 Integration tests skipped in -short mode.
--- FAIL: TestMinSessions (0.06s)
    driver_with_mockserver_test.go:1890: session creation count mismatch
         Got: 2
        Want: 10
