pgx's Introduction


pgx - PostgreSQL Driver and Toolkit

pgx is a pure Go driver and toolkit for PostgreSQL.

The pgx driver is a low-level, high-performance interface that exposes PostgreSQL-specific features such as LISTEN / NOTIFY and COPY. It also includes an adapter for the standard database/sql interface.

The toolkit component is a related set of packages that implement PostgreSQL functionality such as parsing the wire protocol and type mapping between PostgreSQL and Go. These underlying packages can be used to implement alternative drivers, proxies, load balancers, logical replication clients, etc.

Example Usage

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v5"
)

func main() {
	// urlExample := "postgres://username:password@localhost:5432/database_name"
	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close(context.Background())

	var name string
	var weight int64
	err = conn.QueryRow(context.Background(), "select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
	if err != nil {
		fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err)
		os.Exit(1)
	}

	fmt.Println(name, weight)
}

See the getting started guide for more information.

Features

  • Support for approximately 70 different PostgreSQL types
  • Automatic statement preparation and caching
  • Batch queries
  • Single-round trip query mode
  • Full TLS connection control
  • Binary format support for custom types (allows for much quicker encoding/decoding)
  • COPY protocol support for faster bulk data loads
  • Tracing and logging support
  • Connection pool with after-connect hook for arbitrary connection setup
  • LISTEN / NOTIFY
  • Conversion of PostgreSQL arrays to Go slice mappings for integers, floats, and strings
  • hstore support
  • json and jsonb support
  • Maps inet and cidr PostgreSQL types to netip.Addr and netip.Prefix
  • Large object support
  • NULL mapping to pointer to pointer
  • Supports database/sql.Scanner and database/sql/driver.Valuer interfaces for custom types
  • Notice response handling
  • Simulated nested transactions with savepoints
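As an illustration of the batch and single-round-trip features above, here is a hedged sketch assuming pgx v5, an open *pgx.Conn, a live database, and a hypothetical widgets table:

```go
import (
	"context"

	"github.com/jackc/pgx/v5"
)

func widgetSummary(ctx context.Context, conn *pgx.Conn) (int64, error) {
	// Queue several queries; SendBatch sends them in one network round trip.
	batch := &pgx.Batch{}
	batch.Queue("insert into widgets(name, weight) values($1, $2)", "sprocket", 5)
	batch.Queue("select count(*) from widgets")

	br := conn.SendBatch(ctx, batch)
	defer br.Close()

	// Results are read back in the order the queries were queued.
	if _, err := br.Exec(); err != nil {
		return 0, err
	}
	var count int64
	err := br.QueryRow().Scan(&count)
	return count, err
}
```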

Choosing Between the pgx and database/sql Interfaces

The pgx interface is faster. Many PostgreSQL specific features such as LISTEN / NOTIFY and COPY are not available through the database/sql interface.

The pgx interface is recommended when:

  1. The application only targets PostgreSQL.
  2. No other libraries that require database/sql are in use.

It is also possible to use the database/sql interface and convert a connection to the lower-level pgx interface as needed.
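A sketch of that escape hatch, assuming pgx v5 and a live database (the adapter lives in github.com/jackc/pgx/v5/stdlib):

```go
import (
	"context"
	"database/sql"

	"github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver with database/sql
)

func listenViaStdlib(ctx context.Context, db *sql.DB) error {
	conn, err := db.Conn(ctx)
	if err != nil {
		return err
	}
	defer conn.Close()

	// Raw exposes the driver connection; with the stdlib adapter this is a
	// *stdlib.Conn, from which the underlying *pgx.Conn can be obtained.
	return conn.Raw(func(driverConn any) error {
		pgxConn := driverConn.(*stdlib.Conn).Conn()
		_, err := pgxConn.Exec(ctx, "listen events")
		return err
	})
}
```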

Testing

See CONTRIBUTING.md for setup instructions.

Architecture

See the presentation at Golang Estonia, PGX Top to Bottom for a description of pgx architecture.

Supported Go and PostgreSQL Versions

pgx supports the same versions of Go and PostgreSQL that are supported by their respective teams. For Go that is the two most recent major releases and for PostgreSQL the major releases in the last 5 years. This means pgx supports Go 1.20 and higher and PostgreSQL 12 and higher. pgx is also tested against the latest version of CockroachDB.

Version Policy

pgx follows semantic versioning for the documented public API on stable releases. v5 is the latest stable major version.

PGX Family Libraries

pglogrepl provides functionality to act as a client for PostgreSQL logical replication.

pgmock offers the ability to create a server that mocks the PostgreSQL wire protocol. This is used internally to test pgx by purposely inducing unusual errors. pgproto3 and pgmock together provide most of the foundational tooling required to implement a PostgreSQL proxy or MitM (such as for a custom connection pooler).

tern is a stand-alone SQL migration system.

pgerrcode contains constants for the PostgreSQL error codes.

Adapters for 3rd Party Types

Adapters for 3rd Party Tracers

Adapters for 3rd Party Loggers

These adapters can be used with the tracelog package.

3rd Party Libraries with PGX Support

pgxmock is a mock library implementing pgx interfaces. pgxmock has one and only one purpose: to simulate pgx behavior in tests, without needing a real database connection.

Library for scanning data from a database into Go structs and more.

A carefully designed SQL client for making SQL use easier, more productive, and less error-prone in Go.

Adds GSSAPI / Kerberos authentication support.

Explicit data mapping and scanning library for Go structs and slices.

Type safe and flexible package for scanning database data into Go types. Supports structs, maps, slices, and custom mapping functions.

Code first migration library for native pgx (no database/sql abstraction).

pgx's People

Contributors

aleksi, bakape, blakeembrey, codercms, davidarchibald, drakkan, duohedron, evanj, felixge, flowchartsman, georgysavva, gitstashpop, horgh, jackc, jameshartig, jimtsao, karlseguin, kelseyfrancis, lzakharov, manniwood, michaeldarr, mwchambers, redbaron, regeda, robfordww, sean-, titanous, wgh-, x4m, yungtrizzle


pgx's Issues

SQL injection protection

Query methods should safely accept parameters.

Possible solutions include escaping client side and using prepared statements.

Distinguish sql errors and connection errors

First off, thanks for your efforts on pgx!

I'd like to build some retry logic around the connection pool so queries will be automatically retried a couple of times within a certain time frame. Obviously this should only be done when there is an actual connection problem (for instance, Postgres is down) and not an error with the actual query execution (for instance, a constraint error). Is there a good way to distinguish between these types of errors in pgx?

What are your thoughts on adding a (sub)package to support this retry logic? I'd be more than happy to contribute but would like to hear your thoughts on it first.
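For reference, current pgx (v5) makes this distinction possible: server-reported SQL errors unwrap to *pgconn.PgError. A hedged sketch of a retry predicate:

```go
import (
	"errors"

	"github.com/jackc/pgx/v5/pgconn"
)

// isRetryable reports whether an error looks like a connectivity problem
// rather than a server-side SQL error.
func isRetryable(err error) bool {
	// A *pgconn.PgError means the server processed the statement and
	// rejected it (constraint violation, syntax error, ...): not retryable.
	var pgErr *pgconn.PgError
	if errors.As(err, &pgErr) {
		return false
	}
	// Anything else (dial failure, broken connection, timeout) is likely
	// a connectivity problem and may be worth retrying.
	return err != nil
}
```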

How would I convert numeric(8,2) to Go?

I'm relatively new to Go and need to use a currency column (numeric(8,2)) in my Postgres 9.4 database. What is common pgx practice for this?

Also, I want to later use the new Postgres jsonb datatype (converted to a string in Go is ok). I would appreciate guidance with that too.
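For reference, with current pgx (v5) a numeric column can be scanned without precision loss; a sketch assuming an open conn, a live database, and a hypothetical prices table:

```go
import (
	"context"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgtype"
)

func readPrice(ctx context.Context, conn *pgx.Conn, id int) (pgtype.Numeric, error) {
	// numeric columns scan into pgtype.Numeric without precision loss;
	// for currency, storing integer cents and scanning into an int64 also works.
	var price pgtype.Numeric
	err := conn.QueryRow(ctx, "select price from prices where id=$1", id).Scan(&price)
	return price, err
}
```

As for jsonb, it can be scanned into a string or []byte as-is.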

Making logging injectable

Current implementation has a dependency on a non-standard logging framework.
For my project this causes challenges because I use a modified glog version that allows the service to send logging to our logstash platform.

As an alternative, I'd like to suggest introducing a pgx.Logger interface such as

type Logger interface {
    Info(ctx ...interface{})
    Warn(ctx ...interface{})
    Debug(ctx ...interface{})
    Error(ctx ...interface{})
}

and a pgx.StandardLogger type that implements these functions by delegating them to inconshreveable's log15 (or perhaps the standard Go logger).

When the pgx.ConnConfig field "Logger" uses the pgx.Logger interface as its type, then your package users can inject their own implementation. For my project, I would write one that delegates to glog.

Extended query protocol

This enables several further possibilities.

  • Directly call prepared statements - use true parameter binding instead of client side escaping
  • Binary results - potentially quicker for non-string data

Improve Scan error message

In query.go, function Scan, it has the statement

err = errors.New("Scan received wrong number of arguments")

It could be improved by also stating how many arguments it got and how many it expected, such as

"Scan received wrong number of arguments, got 5 but expected 4"

SelectValue(s) should return proper type

Instead of SelectString, SelectInt16, SelectInt32, etc. have a SelectValue that returns interface{}. The actual value should be correctly typed by using the PostgreSQL type system.

Also, SelectAll* should be changed the same way (probably rename to SelectValues).

Problem with a QueryArgs interface

Hi!

Excuse me for a (probably) stupid question, but I haven't been using Go for long, so I may just be missing something basic...

I've got an issue with QueryArgs. I can't use it as an argument to any Query/QueryRow/Exec call: every time I get the error "Prepared statement "" requires X parameters, but Y were provided". As far as I can see, sendPreparedQuery doesn't expect a []interface{} inside the arguments slice:

if len(ps.ParameterOids) != len(arguments) {
    return fmt.Errorf("Prepared statement \"%v\" requires %d parameters, but %d were provided", ps.Name, len(ps.ParameterOids), len(arguments))
}

If I missed something, I will be very grateful for any advice.

Thanks!

example programs

I've created a small test program using what I could glean from url_shortener/main.go

Any chance of some more examples?

The times that I have seen thus far look encouraging.

If I can get some more examples, I'll likely load something on Github as an example.

I'm interested in transactions with insert, update, delete, among other things.

pgx.NullString adding braces and "true"

I have a nullable varchar(100) database column called "title" that renders fine with a Go type of "string" using {{.title }} in a standard HTML template via the github.com/unrolled/render package. But I need to account for null database values so I thought I should use pgx.NullString (with the native driver) as the model datatype. But then if the data is null, the HTML template renders the null value as "{ false }". If the data is not null (e.g. "my title") it renders "{my title true}" including the braces.

not seeing performance increase over sqlx

I created a new SQL builder library mgutz/dat and created a pgx runner for it. I hoped it would give a boost over sqlx + lib/pq but the benchmarks say they are nearly identical in performance with sqlx usually coming out ahead. The writeup for the benchmark is in the README

The most relevant part is https://github.com/mgutz/dat/blob/pgx/pgx/runner.go which is responsible
for executing queries and scanning results. It's nearly identical to the sqlx file of same name.

Here are the benchmarks for dat + sqlx

BenchmarkTransactedDat2    10000                     111526   ns/op  832   B/op  21  allocs/op
BenchmarkTransactedDat4    10000                     115365   ns/op  1232  B/op  26  allocs/op
BenchmarkTransactedDat8    10000                     145685   ns/op  1480  B/op  33  allocs/op
BenchmarkBuildExecSQLDat2  5000                      214377   ns/op  832   B/op  21  allocs/op
BenchmarkBuildExecSQLDat4  5000                      223585   ns/op  1232  B/op  26  allocs/op
BenchmarkBuildExecSQLDat8  5000                      251885   ns/op  1480  B/op  33  allocs/op
BenchmarkExecSQLDat2       5000                      208791   ns/op  280   B/op  10  allocs/op
BenchmarkExecSQLDat4       5000                      209814   ns/op  296   B/op  10  allocs/op

Here are the benchmarks for dat + pgx

BenchmarkTransactedDat2    10000                     113858   ns/op  1208  B/op  28  allocs/op
BenchmarkTransactedDat4    10000                     117697   ns/op  1608  B/op  33  allocs/op
BenchmarkTransactedDat8    10000                     148228   ns/op  1856  B/op  40  allocs/op
BenchmarkBuildExecSQLDat2  5000                      215328   ns/op  1208  B/op  28  allocs/op
BenchmarkBuildExecSQLDat4  5000                      225971   ns/op  1608  B/op  33  allocs/op
BenchmarkBuildExecSQLDat8  5000                      259791   ns/op  1856  B/op  40  allocs/op
BenchmarkExecSQLDat2       10000                     209085   ns/op  656   B/op  17  allocs/op
BenchmarkExecSQLDat4       5000                      212718   ns/op  672   B/op  17  allocs/op

Any suggestions on improving the pgx runner?

Documentation

  • Add samples to readme
  • Add godoc to public functions

Logging is not used correctly.

I'm getting messages like the following emerging from the framework:

INFO[09-09|13:13:09] Dialing PostgreSQL server at host: : =nil LOG15_ERROR="Normalized odd number of arguments by adding nil"

Is conn.go not conforming to log15?

Query failure with empty arrays

Gets a "read past end of value" error, which is being generated from ReadInt32(). I'm on 9.4... I'll install 9.3 in a bit and see if that's the issue (and let you know either way).

func TestFailure(t *testing.T) {
    conn := mustConnect(t, *defaultConnConfig)
    defer closeConn(t, conn)
    _, err := conn.Exec("drop table if exists x; create table x (id int, a text[], b text[])")
    if err != nil {
        t.Fatalf("conn.Exec failed: %v", err)
    }

    _, err = conn.Exec("insert into x values (1, '{}', '{}')")
    if err != nil {
        t.Fatalf("conn.Exec failed: %v", err)
    }

    var id int32
    var a, b []string
    err = conn.QueryRow("select id, a, b from x where id = 1").Scan(&id, &a, &b)
    if err != nil {
        t.Fatalf("conn.QueryRow failed: %v", err)
    }
}

Misleading error message "is not a core type and it does not implement Encoder"

This error message prints the Go type of the parameter that was passed to the query, not the PostgreSQL type that the query is asking for.

For example, with db.QueryRow("select typname from pg_type where oid = $1", 705), it says int is not supported, but the problem is actually that the query is asking for an Oid, and there isn't a case for serializing an Oid.

Callback per result row

Instead of returning the entire result set as current query functions do, add a function that accepts a callback function that processes a single row at a time. This would substantially reduce memory consumption.
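Current pgx versions address this through Query's row iterator and the ForEachRow helper (v5); a sketch assuming an open conn and a live database:

```go
import (
	"context"

	"github.com/jackc/pgx/v5"
)

// streamRows processes a large result set one row at a time instead of
// materializing the whole set in memory.
func streamRows(ctx context.Context, conn *pgx.Conn, handle func(int64) error) error {
	rows, err := conn.Query(ctx, "select n from generate_series(1, 1000000) n")
	if err != nil {
		return err
	}
	var n int64
	// ForEachRow scans each row into the given targets and invokes the
	// callback; only one row is buffered at a time.
	_, err = pgx.ForEachRow(rows, []any{&n}, func() error {
		return handle(n)
	})
	return err
}
```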

Connection pool

The connection pool should have explicit obtain and release semantics. This would guarantee multiple sequential calls would all go to the same connection.

panic on closing pool to already shutdown service

the following example illustrates this

package main

import (
    "log"
    "time"

    "github.com/jackc/pgx"
)

func main() {
    config := pgx.ConnConfig{
        Host:     "xxxxx",
        User:     "xxxx",
        Password: "xxxx",
        Database: "xxxx",
    }
    poolConfig := pgx.ConnPoolConfig{ConnConfig: config, MaxConnections: 20}

    pool, err := pgx.NewConnPool(poolConfig)
    if err != nil {
        log.Fatalf("Unable to establish connection pool:%v", err)
    }

    log.Println("wait 10 seconds so I can shutdown pg service")
    time.Sleep(10 * time.Second)

    pool.Close()
}

The stack will show

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x4b8 pc=0x45854]

goroutine 16 [running]:
runtime.panic(0x208120, 0x362084)
    /usr/local/go/src/pkg/runtime/panic.c:279 +0xf5
github.com/jackc/pgx.(*Conn).Close(0x0, 0x0, 0x0)
    /Users/emicklei/Workspaces/go/src/github.com/jackc/pgx/conn.go:234 +0x44
github.com/jackc/pgx.(*ConnPool).Close(0xc208056000)
    /Users/emicklei/Workspaces/go/src/github.com/jackc/pgx/conn_pool.go:129 +0x68
main.main()

Maybe a simple nil check is sufficient.

Structure binding

Need some easy to use way of reading data from a query directly into structs.

Possibilities:

  • reflection
  • generated binding classes
  • pass in function as query data reader
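For reference, current pgx (v5) answers this with generics-based helpers; a sketch assuming an open conn, a live database, and a hypothetical widgets table:

```go
import (
	"context"

	"github.com/jackc/pgx/v5"
)

type Widget struct {
	Name   string
	Weight int64
}

func loadWidgets(ctx context.Context, conn *pgx.Conn) ([]Widget, error) {
	rows, err := conn.Query(ctx, "select name, weight from widgets")
	if err != nil {
		return nil, err
	}
	// RowToStructByName matches result columns to struct fields by name.
	return pgx.CollectRows(rows, pgx.RowToStructByName[Widget])
}
```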

Security of SanitizeSql?

Hello,

thanks for what seems like a very nice library.

I'm not a security expert but is escaping the single quote enough to prevent injection? While reading about this on the net [1][2], it seems like this approach is tricky to get right.

Neither github.com/lib/pq or github.com/go-pg/pg seems to have gone this route and instead went for bound parameters. Maybe for a reason?

[1] http://security.stackexchange.com/questions/3611/sql-injection-but-why-isnt-escape-quotes-safe-anymore
[2] http://www.unixwiz.net/techtips/sql-injection.html#miti

Travis CI

Would you be open to running tests on Travis CI? If so, I'm happy to implement the config, you'll just need to toggle the hook on in the Travis settings.

TestQueryRowErrors wrong error returned on postgres 9.1 and 9.0

--- FAIL: TestQueryRowErrors (0.01s)
    query_test.go:478: 4. Expected error to contain Cannot encode int into oid 3926 - int must implement Encoder or be converted to a string, but got ERROR: type "int8range" does not exist (SQLSTATE 42704) (sql -> select $1::int8range, queryArgs -> [705])

Question: parameter limit

Does this have the same parameter limit that you encounter with pq?

pq: got xxxx parameters but PostgreSQL only supports 65535 parameters

Thanks for a great library btw. :)

Better error info

I tried scanning an integer column value into an "int" type.
This failed with the row.Err() reporting "Unknown type".

After inspecting the source code, I realized I must use int32 or int64 instead.
My suggestion is to change the error message by including the name of type that was unknown.

Thank you for making this package.
^ Ernest

Prepared statements

Support creating and calling prepared statements. This should work with the connection pool to create the prepared statement on all connections so it would be available always.

Note that prepared statements could initially be implemented entirely through prepare and execute commands. It doesn't initially need to use the extended interface.

support db null value

When a query returns a null value, rows.Scan returns an error. Please add support for sql.NullString:

func decodeNullText(vr *ValueReader) sql.NullString {
    if vr.Len() < 1 {
        return sql.NullString{String: "", Valid: false}
    }
    return sql.NullString{String: vr.ReadString(vr.Len()), Valid: true}
}

case *sql.NullString:
    *d = decodeNullText(vr)

...and the other nullable types in the sql package.

Thanks.
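Current pgx versions also map NULL directly to pointer types (see the feature list above), and sql.NullString is supported via the database/sql.Scanner interface. A sketch, assuming an open conn, a live database, and a hypothetical posts table:

```go
var title *string // remains nil after Scan when the column is NULL
err := conn.QueryRow(ctx, "select title from posts where id=$1", 1).Scan(&title)
if err != nil {
	return err
}
if title == nil {
	fmt.Println("title is NULL")
} else {
	fmt.Println(*title)
}
```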

Inserting DEFAULT value

Is there an option to mention DEFAULT keyword while inserting records. For Example,

row := connPool.QueryRow("INSERT INTO tabl1(id, name, refno) VALUES($1, $2, $3) RETURNING id",  "DEFAULT", name, refno)

The above does not work, as Postgres expects the DEFAULT keyword without quotes.
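A common workaround is to omit the column entirely so PostgreSQL applies its default, since the keyword cannot be passed as a bound parameter:

```sql
-- id is left out of the column list, so its DEFAULT is used:
INSERT INTO tabl1(name, refno) VALUES($1, $2) RETURNING id;
```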

Select slice of multiple values

Returns a slice of the appropriate type. Possibly a better name can be found.

  • SelectStringSlice
  • SelectInt32Slice
  • SelectInt64Slice
  • etc.

Inserting custom type

I have the following custom type:

type Status int64

const (
   STATUS_ONE Status = 100
   STATUS_TWO        = 200
)

When i'm using it to run insert, like:

sql := `INSERT INTO todo (name, status) VALUES($1, $2)`
params := []interface{}{"Watch movie", Status(100)}
row := conn.QueryRow(sql, params...)

its giving me the following error:

2015/03/16 17:55:28 panic in processor: Expected integer representable in int64, received Status 100:

My todo table's status column is int8 as well, so why can't I insert? I tried adding a Value() (driver.Value, error) method to satisfy the valuer interface but it's still the same.

Any idea?
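A common workaround at the time was to convert the custom type back to its underlying core type before binding it, e.g. conn.QueryRow(sql, name, int64(status)). A minimal, self-contained sketch of the conversion (pure Go, no database needed):

```go
package main

import "fmt"

// Status mirrors the custom type from the report.
type Status int64

const (
	STATUS_ONE Status = 100
	STATUS_TWO Status = 200
)

func main() {
	s := STATUS_ONE
	// Convert to the underlying core type before passing it as a
	// query parameter.
	v := int64(s)
	fmt.Println(v) // prints 100
}
```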

Logging

Connection should accept a logger.

This logger should be able to log (depending on the log level)

  • Queries
  • Low level PostgreSQL messages

Transactions

Should be able to use any isolation level of transaction PostgreSQL supports.

Select single values

Functions that select a single value.

  • SelectString
  • SelectInt32
  • SelectInt64
  • etc.

Question: Listen/Notify demo

Could you please add a simple listen notify demo? It looks like your notify function times out and you have to keep polling the database after each timeout... I'm not sure how that would work.
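For reference, a hedged sketch of how this looks with current pgx (v5), where WaitForNotification blocks on the context rather than a fixed timeout; the channel name is hypothetical and a live database is assumed:

```go
import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func listen(ctx context.Context, conn *pgx.Conn) error {
	if _, err := conn.Exec(ctx, "listen mychannel"); err != nil {
		return err
	}
	for {
		// Blocks until a notification arrives or ctx is cancelled;
		// no timeout-and-poll loop is needed.
		n, err := conn.WaitForNotification(ctx)
		if err != nil {
			return err
		}
		fmt.Println("channel:", n.Channel, "payload:", n.Payload)
	}
}
```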

Execute / Run

Function to execute arbitrary SQL that doesn't return a set like Query does. Execute would seem to be an ideal name, but the execute key word in PostgreSQL refers to prepared statements, so I may want to use a different name to reserve Execute for prepared statements.

Problem with test

I'm not very experienced with Go at this stage, or with running tests; however, running "go test" using Win8 and the latest Go version (go1.1.2 windows/amd64), I get:

github.com/JackC/pgx_test

c:\users\boh\golang\mygo\pkg\src\github.com\JackC\pgx\bench_test.go:605: undefined: defaultConnectionParameters
c:\users\boh\golang\mygo\pkg\src\github.com\JackC\pgx\connection_pool_test.go:13: undefined: defaultConnectionParameters
c:\users\boh\golang\mygo\pkg\src\github.com\JackC\pgx\connection_pool_test.go:28: undefined: defaultConnectionParameters
etc. ...................

Implement optional map[string]string hstore handler.

It would be nice to be able to avoid a lot of the boilerplate Scan code and simply do something like this:

type SomeType map[string]string
func (s SomeType) Scan(vr *pgx.ValueReader) (err error) {
    //can't do this until hstore Oid is standardized
    //if vr.Type().DataType != pointOid {
    //}
    s, err = pgx.hstoreMapHandler(vr.ReadString(vr.Len()))
    return
}

And just end up with a populated map.

I understand the utility of writing your own (t Type) Scan function, and I still do that in a lot of cases, but I think the hstore -> map[string]string is a common enough use case that it merits a useful handler.

Also, currently I can

SELECT hstore_to_array(hstorecol) FROM TABLE;

And just use a []string, but I still think a generic handler would be a nice addition for those that understand the cost.

One shot queries against connection pool

Consider one shot commands issued directly to the connection pool. The connection pool would handle acquiring and releasing the connection. This could be especially interesting with some kind of futures support. You could kick off multiple parallel queries without needing to manage the connections by hand.
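Today's pgxpool package (pgx v5) implements this pattern; a sketch assuming a live database:

```go
import (
	"context"
	"os"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

func poolExample(ctx context.Context) (time.Time, error) {
	pool, err := pgxpool.New(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		return time.Time{}, err
	}
	defer pool.Close()

	// Query/QueryRow/Exec on the pool acquire a connection, run the
	// query, and release the connection automatically.
	var now time.Time
	err = pool.QueryRow(ctx, "select now()").Scan(&now)
	return now, err
}
```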
