
gocqlx's Introduction

Scylla


What is Scylla?

Scylla is the real-time big data database that is API-compatible with Apache Cassandra and Amazon DynamoDB. Scylla embraces a shared-nothing approach that increases throughput and storage capacity to realize order-of-magnitude performance improvements and reduce hardware costs.

For more information, please see the ScyllaDB web site.

Build Prerequisites

Scylla is fairly fussy about its build environment, requiring very recent versions of the C++20 compiler and of many libraries to build. The document HACKING.md includes detailed information on building and developing Scylla, but to get Scylla building quickly on (almost) any build machine, Scylla offers a frozen toolchain. This is a pre-configured Docker image which includes recent versions of all the required compilers, libraries and build tools. Using the frozen toolchain allows you to avoid changing anything on your build machine to meet Scylla's requirements - you just need to meet the frozen toolchain's prerequisites (mostly, Docker or Podman being available).

Building Scylla

Building Scylla with the frozen toolchain dbuild is as easy as:

$ git submodule update --init --force --recursive
$ ./tools/toolchain/dbuild ./configure.py
$ ./tools/toolchain/dbuild ninja build/release/scylla

For further information, please see HACKING.md.

Running Scylla

To start Scylla server, run:

$ ./tools/toolchain/dbuild ./build/release/scylla --workdir tmp --smp 1 --developer-mode 1

This will start a Scylla node with one CPU core allocated to it and data files stored in the tmp directory. The --developer-mode flag is needed to disable the various checks Scylla performs at startup to ensure the machine is configured for maximum performance (not relevant on development workstations). Please note that you need to run Scylla with dbuild if you built it with the frozen toolchain.

For more run options, run:

$ ./tools/toolchain/dbuild ./build/release/scylla --help

Testing

See test.py manual.

Scylla APIs and compatibility

By default, Scylla is compatible with Apache Cassandra and its APIs - CQL and Thrift. There is also support for the API of Amazon DynamoDB™, which needs to be enabled and configured in order to be used. For more information on how to enable the DynamoDB™ API in Scylla, and the current compatibility of this feature as well as Scylla-specific extensions, see Alternator and Getting started with Alternator.

Documentation

Documentation can be found here. Seastar documentation can be found here. User documentation can be found here.

Training

Training material and online courses can be found at Scylla University. The courses are free, self-paced and include hands-on examples. They cover a variety of topics including Scylla data modeling, administration, architecture, basic NoSQL concepts, using drivers for application development, Scylla setup, failover, compactions, multi-datacenters and how Scylla integrates with third-party applications.

Contributing to Scylla

If you want to report a bug or submit a pull request or a patch, please read the contribution guidelines.

If you are a developer working on Scylla, please read the developer guidelines.

Contact

  • The community forum and Slack channel are for users to discuss configuration, management, and operations of the ScyllaDB open source project.
  • The developers mailing list is for developers and people interested in following the development of ScyllaDB to discuss technical topics.

gocqlx's Issues

Add automatic support for UDT

Working with UDTs is kind of a pain in gocql. We make it easier with the mapper and so on, but in general it should be possible to mark a struct as a UDT and let gocqlx handle that efficiently.

From the user's perspective, enabling UDT support should look like this:

type A struct {
	gocqlx.UDT

	B int
	C string `db:"c2"`
	D MyInt
}

Quick sketch for this:

package gocqlx

import (
	"reflect"

	"github.com/gocql/gocql"
)

type UDT struct {
	Ifce map[string]interface{}
}

func (u *UDT) AutoIfce(v interface{}) {
	// Ignore if already set
	if u.Ifce != nil {
		return
	}

	m := DefaultMapper.FieldMap(reflect.ValueOf(v))

	u.Ifce = make(map[string]interface{}, len(m))
	for name, r := range m {
		u.Ifce[name] = r.Interface()
	}
}

func (u UDT) MarshalUDT(name string, info gocql.TypeInfo) ([]byte, error) {
	return gocql.Marshal(info, u.Ifce[name])
}

func (u UDT) UnmarshalUDT(name string, info gocql.TypeInfo, data []byte) error {
	return gocql.Unmarshal(info, data, u.Ifce[name])
}

Then it can be hooked into the Iterx scanAny and scanAll functions, which would call AutoIfce on scannable destinations to initialize the Ifce map. Note that for scanAll this should only be called once (to avoid repeated reflection calls), and we should use reflect Set (or an unsafe pointer directly) to copy the value in the for loop before appending.

Pure go data migrations require empty cql file

CQL files are needed to create migrations. In situations where Go migration code is needed to create a data-only migration, the CQL file becomes superfluous.

The workaround for this is to use an empty file and attach the Go code execution as a callback. But an empty CQL file is not accepted, so the workaround is to put some read-only statement in the CQL file to make sure the Go callback is executed.

This all feels somewhat hackish and should probably be given consideration in the future.

UDTs with gocqlx

Hi,
I'm trying to use UDTs with gocqlx on Cassandra 3.11.0 but I'm having problems.
So, I do the following:

package main

import (
	"github.com/davecgh/go-spew/spew"
	"github.com/gocql/gocql"
	"github.com/scylladb/gocqlx"
	"github.com/scylladb/gocqlx/qb"
	"log"
)

type P struct {
	R  string   `json:"r"`
	Cs []string `json:"cs"`
}

type Css struct {
	T  string `json:"t"`
	Ps []P    `json:"ps"`
}

/*
Remember to run `CREATE KEYSPACE test WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1};` before
*/
func main() {
	// Setup cluster
	cluster := gocql.NewCluster("cassandra")
	cluster.Keyspace = "test"
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}

	// Create the UDT and table in keyspace test
	err = session.Query(`
		CREATE TYPE test.p (
			r text,
			cs list<text>
		);`).Exec()
	if err != nil {
		log.Fatal(err)
	}
	err = session.Query(`
		CREATE TABLE test.css (
			t text PRIMARY KEY,
			ps list<FROZEN <p>>
		);`).Exec()
	if err != nil {
		log.Fatal(err)
	}

	// Write to table
	is := Css{
		T: "TestT",
		Ps: []P{
			P{R: "TestR", Cs: []string{"T1", "T2"}},
		},
	}
	fields := []string{"t", "ps"}
	stmtw, namesw := qb.Insert("css").Columns(fields...).ToCql()
	qw := gocqlx.Query(session.Query(stmtw), namesw).BindStruct(is)
	err = qw.ExecRelease()
	if err != nil {
		log.Fatal(err)
	}

	// Read from table
	stmtr, namesr := qb.Select("css").Where(qb.Eq("t")).ToCql()

	qr := gocqlx.Query(session.Query(stmtr), namesr).BindMap(qb.M{"t": "TestT"})

	var item Css
	err = gocqlx.Get(&item, qr.Query)
	if err != nil {
		log.Fatal(err)
	}
	spew.Dump(item)
}

I was expecting:

(main.Css) {
 T: (string) (len=5) "TestT",
 Ps: ([]main.P) (len=1 cap=1) {
  (main.P) {
   R: (string) (len=5) "TestR",
   Cs: ([]string) (len=2 cap=2) {
    (string) (len=2) "T1",
    (string) (len=2) "T2"
   }
  }
 }
}

Instead I got:

(main.Css) {
 T: (string) (len=5) "TestT",
 Ps: ([]main.P) (len=1 cap=1) {
  (main.P) {
   R: (string) "",
   Cs: ([]string) <nil>
  }
 }
}

Add a code generator that would extract tables and columns from a database

Flags

We need pretty much all the flags from https://github.com/volatiletech/sqlboiler#initial-generation, schema should be replaced with keyspace and models should be written to models/keyspace.

Output

For every table in a given keyspace, generate table metadata.

// TableName is generated metadata for <keyspace>.<table_name>.
type TableName struct {
	Keyspace   string
	Name       string
	PK         []string
	Columns    []string
	ColumnName string
	// ...
}

We shall follow the Go naming conventions: snake_case table / column names should be converted to camel case.

How to improve the performance

We have tried benchmark_test.go on GCP, but the results show a significant difference:
go test -v -bench=. -run=none ./benchmark_test.go --cluster=x.x.x.x:9042,x.x.x.x:9042
goos: linux
goarch: amd64
BenchmarkE2EGocqlInsert-16 2000 960651 ns/op
BenchmarkE2EGocqlxInsert-16 2000 947839 ns/op
BenchmarkE2EGocqlGet-16 2000 919189 ns/op
BenchmarkE2EGocqlxGet-16 2000 1088892 ns/op
BenchmarkE2EGocqlSelect-16 100 12174738 ns/op
BenchmarkE2EGocqlxSelect-16 100 11897347 ns/op
PASS
ok command-line-arguments 24.307s

The ns/op values are ten times higher than your benchmark results. How can we improve this?

ExecRelease() has no "not found" error when it is used with a SELECT statement

Hello. Sorry for bad English.

I think I found a bug related to ExecRelease() method:

When I try to execute a SELECT statement with ExecRelease() I don't receive the expected "not found" error, even if I query an empty table. If I change ExecRelease() to GetRelease(&dest), then I get this error, but in this case I must allocate and use an unneeded variable.

ExecRelease together with a SELECT query is needed at least to check whether a record exists without creating an extra variable.

example of broken code:

// this function checks record existence in a table
// table_name represents an empty table
func IsExists(NonExistentID string) (bool, error) {

	stmt, names := qb.Select("table_name").Where(qb.Eq("id")).ToCql()
	q := gocqlx.Query(session.Query(stmt), names).BindMap(qb.M{
		"id":     NonExistentID,
	})

	err := q.ExecRelease() //expected to be "not found" error

	if err == nil {
		return true, nil
	}

	if err.Error() == "not found"  {
		return false, nil
	}

	return false, err

}
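
A possible workaround today (a minimal sketch, not a gocqlx API) is to skip Exec and scan a single column through the underlying gocql query, which returns gocql.ErrNotFound for an empty result:

// Sketch: existence check without a destination struct; table_name and the id
// column follow the example above.
func IsExistsWorkaround(id string) (bool, error) {
	stmt, names := qb.Select("table_name").Columns("id").Where(qb.Eq("id")).Limit(1).ToCql()
	q := gocqlx.Query(session.Query(stmt), names).BindMap(qb.M{"id": id})
	defer q.Release()

	var found string
	err := q.Query.Scan(&found) // gocql returns ErrNotFound for an empty result
	if err == gocql.ErrNotFound {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}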

Tuple notation in the where clause

Is it possible to create a query like the one below with qb? It seems there is no filtering comparator for the tuple notation.

SELECT * FROM posts
 WHERE userid = 'john doe'
   AND (blog_title, posted_at) > ('John''s Blog', '2012-01-01')
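
As a workaround (a minimal sketch, assuming qb has no tuple comparator in the version in use), the statement can be written by hand and bound positionally through gocql:

// Sketch: hand-written tuple relation from the example above, bound positionally.
func selectAfter(session *gocql.Session) *gocql.Iter {
	stmt := `SELECT * FROM posts WHERE userid = ? AND (blog_title, posted_at) > (?, ?)`
	return session.Query(stmt, "john doe", "John's Blog", "2012-01-01").Iter()
}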

Misleading benchmark results for insert

BenchmarkE2EGocqlInsert in benchmark_test.go is written in a way that leads one to believe that gocqlx doubles insert performance compared to gocql, while the test is actually inserting twice as many rows as BenchmarkE2EGocqlxInsert.

func BenchmarkE2EGocqlInsert(b *testing.B) {
        // redacted
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// prepare <- also insert, because of exec.
		p := people[i%len(people)]
		if err := q.Bind(p.ID, p.FirstName, p.LastName, p.Email, p.Gender, p.IPAddress).Exec(); err != nil {
			b.Fatal(err)
		}
		// insert <- insert a second time
		if err := q.Exec(); err != nil {
			b.Fatal(err)
		}
	}
}

Extend Queryx with Get and Select

For inserts and updates we have the ExecRelease function available directly in Queryx; we could do something similar for selects by introducing:

  • Queryx::Get(dest interface{})
  • Queryx::GetRelease(dest interface{})
  • Queryx::Select(dest interface{})
  • Queryx::SelectRelease(dest interface{})

So instead of

		q := gocqlx.Query(session.Query(stmt), names).BindMap(qb.M{
			"first_name": "Patricia",
		})
		defer q.Release()

		if q.Err() != nil {
			return nil, q.Err()
		}

		var p Person
		if err := gocqlx.Get(&p, q.Query); err != nil {
			t.Fatal(err)
		}

we could have:

		q := gocqlx.Query(session.Query(stmt), names).BindMap(qb.M{
			"first_name": "Patricia",
		})
		var p Person
		if err := q.GetRelease(&p); err != nil {
			t.Fatal(err)
		}

Support for CAS Operation Insight (LWT)

https://docs.datastax.com/en/cql/3.3/cql/cql_using/useInsertLWT.html

I'm using the gocqlx library and performing a conditional update with the query builder. I'm not seeing an obvious way to determine whether a query was applied vs experiencing an error.

In plain gocql library, functions like https://godoc.org/github.com/gocql/gocql#Query.MapScanCAS return an applied bool to check whether the update was successful.

I don't see any similar return for Exec() or ExecRelease() using gocqlx. https://godoc.org/github.com/scylladb/gocqlx#Queryx.Exec Do these functions return an error when the update fails to apply based on the IF conditional?

Here's how I'm using this today: https://play.golang.org/p/WfnIAOzwZtC

But, I've had instances where it seems the update has failed to apply, but returned cleanly and allowed the rest of my code to execute. I'm using this in a job locking mechanism and I've had the same job run twice on different workers.
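
Exec and ExecRelease only return an error and do not report the applied flag. One way to get at it today (a minimal sketch using gocql's MapScanCAS on the underlying query rather than an official gocqlx API, with a hypothetical jobs table) is:

// Sketch: conditional update that reports whether the IF clause held.
func tryTakeOver(session *gocql.Session, jobID, prevOwner, newOwner string) (bool, error) {
	stmt := `UPDATE jobs SET owner = ? WHERE id = ? IF owner = ?`
	names := []string{"new_owner", "id", "prev_owner"}
	q := gocqlx.Query(session.Query(stmt), names).BindMap(qb.M{
		"new_owner":  newOwner,
		"id":         jobID,
		"prev_owner": prevOwner,
	})
	defer q.Release()

	previous := make(map[string]interface{})
	// applied == false means the IF condition did not hold; previous then holds
	// the current column values that caused it to fail.
	applied, err := q.Query.MapScanCAS(previous)
	return applied, err
}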

SELECT queries are not delivered to the replica and shard to which the token belongs

HEAD: ?
scylla version: 2018.1.6

Cluster: 3 DC, 7 nodes in each DC.
Each node has 38 shards
RF: One replica in each DC.
Client load balancing configuration: TokenAwareHostPolicy with failover to DCAwareRoundRobinPolicy

Description
The customer runs a Go program that issues single read requests; however, some shards are being loaded more than others as coordinators, while the load is nicely balanced on the replica side.

Below is the screenshot of a "Per Server" dashboard in a per-shard mode on one of the nodes:

Yellow shard is shard 1.

We recorded CQL Tracing during the same time frame using the probabilistic tracing enabled on the same node.

Here is a raw tracing data:
events.txt
sessions.txt

First of all, analyzing this data we see that there are 189 SELECTs out of 517 total requests.
Out of these 189, 50 were handled by shard 1 as a coordinator.
And all SELECTs handled by shard 1 were selecting a distinct token.

Here are some filterings of the tracing data provided above:
selects_events.txt
selects_session_ids.txt
selects_shard1_events.txt
selects_shard1_session_ids.txt
selects_shard1_sessions.txt
selects_shard1_tokens.txt
selects_shard1_targets_different_from_coordinator.txt

As you can see from the last filtering, there are 46 SELECTs whose natural endpoints are different from the coordinator (10.127.248.7).

Here are trace events of one such SELECT query:
913b6c10-23ee-11e9-a140-000000000025.txt

Queryx GetRelease() with interface{}

I've written a database package for interfacing with Cassandra, and I'm attempting to replace gocql with gocqlx inside that package.

My package has a Table type that has various methods on the Table, like SelectOne, SelectAll, InsertOne, etc.

In a leaf package, I'm calling my library like this:

var uuid gocql.UUID
var f MyCustomStruct
// SelectOnex func(value interface{}, dest interface{}) error
err := MyTable.SelectOnex(uuid, f)
if err != nil {
	return err
}

Inside my database package this function looks like this:

func (t Table) SelectOnex(value interface{}, dest interface{}) error {

	// build our statement
	stmt, names := qb.Select(t.FullName()).Where(qb.Eq(t.PrimaryKey)).ToCql()

	// build our query object
	q := gocqlx.Query(Session.Query(stmt), names).BindMap(qb.M{t.PrimaryKey: value})

	// execute the query
	if err := q.GetRelease(dest); err != nil {
		return err
	}

	return nil
}

Unfortunately, it seems that q.GetRelease(dest) doesn't like the interface{} type coming from my wrapped function: scannable dest type interface with >1 columns (7) in result.

Is there a way to pass my MyCustomStruct type through so that it is properly decoded into?

Replace gocql Iter Scan with scanning in gocqlx

Gocql scan (and bind) work with interface{} slices and use huge type and kind (reflect) switches; this affects performance, as shown by @martin-sucha with https://github.com/kiwicom/easycql. Gocql uses unmarshal functions with the following signature: unmarshalX(info TypeInfo, data []byte, value interface{}) error

We are in a very good position to optimise that in Iterx without any code generation.
Since we are unmarshaling the same structs over and over again in a loop (Select or even ScanStruct with the same struct passed as a parameter) we can figure out exactly what unmarshalling logic is needed once for any number of invocations.

The main idea here is to create a helper type that, for a given struct pointer and column spec, would provide a mapping from column name to an unmarshalling function func(data []byte) error.

This idea can be expanded to UDTs as well. In #120 I suggested introducing a name-to-value (interface{}) mapping, but it can be changed to a name-to-function mapping in exactly the same way.
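
A rough sketch of such a helper (assumptions: field resolution is simplified here with a direct db-tag lookup; a real implementation would reuse the mapper already used by Iterx):

package gocqlx

import (
	"reflect"
	"strings"

	"github.com/gocql/gocql"
)

// unmarshaller decodes one column's raw bytes straight into a pre-resolved field.
type unmarshaller func(data []byte) error

// fieldForColumn finds the struct field whose db tag (or lower-cased name) matches
// the column name. Simplified; the real mapper handles nesting, embedding, etc.
func fieldForColumn(v reflect.Value, column string) reflect.Value {
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		if f.Tag.Get("db") == column || strings.ToLower(f.Name) == column {
			return v.Field(i)
		}
	}
	return reflect.Value{}
}

// newUnmarshallers builds, once per destination struct, the column-name to
// unmarshal-function map; the closures can then be reused for every row.
func newUnmarshallers(dest interface{}, columns []gocql.ColumnInfo) map[string]unmarshaller {
	v := reflect.ValueOf(dest).Elem()
	m := make(map[string]unmarshaller, len(columns))
	for _, col := range columns {
		f := fieldForColumn(v, col.Name)
		if !f.IsValid() {
			continue // no matching field for this column
		}
		info := col.TypeInfo
		m[col.Name] = func(data []byte) error {
			return gocql.Unmarshal(info, data, f.Addr().Interface())
		}
	}
	return m
}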

batch insert questions

I wrote a Batch function for testing batch inserts.
However, the values are not as expected: I expect [0 1 2 1 2 3], but I get [1 2 3 1 2 3].

qb feature: Add a 'Clone' method to all query builder types

Because query builders are mutable, they can be tricky to re-use safely.

It would be useful to be able to define some common 'base' query builder, and re-use it multiple times to build derivative queries.

You can sort of accomplish this now by defining a function to assemble the base builder and calling it repeatedly (as sketched below), but it would be more natural and convenient (and probably more performant) to be able to keep a base builder in a variable and clone it for use in constructing variants.
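
For reference, the function-based workaround looks roughly like this (a sketch with a hypothetical posts table; every call to the function returns a fresh builder, so the derived queries do not mutate each other):

// basePosts assembles the shared part of the query.
func basePosts() *qb.SelectBuilder {
	return qb.Select("posts").Where(qb.Eq("userid"))
}

// Two derivatives built from the same base.
func buildQueries() {
	titlesStmt, titlesNames := basePosts().Columns("blog_title").ToCql()
	pageStmt, pageNames := basePosts().Limit(10).ToCql()
	fmt.Println(titlesStmt, titlesNames)
	fmt.Println(pageStmt, pageNames)
}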

Malformed query when there is only a GroupBy

For example, if we build a query with only a GroupBy clause:

package main

import (
	"github.com/sagarafr/gocqlx/qb"
	"fmt"
)

func main() {
	s, _ := qb.Select("cycling.cyclist_name").GroupBy("id").ToCql()
	fmt.Println(s)
}

we get this generated statement:

SELECT id, FROM cycling.cyclist_name GROUP BY id

and this error during execution, because of the dangling comma before FROM:

no viable alternative at input 'FROM'

Cannot use same column for relational queries in select

Since Select in qb binds values through a map[string]interface{}, I cannot express a range (between) condition on a single column.

For example, I have this query: date >= '2019-07-30' AND date <= '2019-08-15'. Binding it to qb.Cond and q.Map gives {"date": "2019-07-30", "date": "2019-08-15"}, which is invalid for a map[string]interface{} (duplicate keys).

How would you bind this?
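
One way to express this (a sketch, assuming the Named comparator variants such as GtOrEqNamed and LtOrEqNamed are available in your qb version) is to bind the two bounds under distinct names:

// Sketch: hypothetical events table; date_from / date_to are synthetic bind
// names for the two bounds on the same date column.
func selectDateRange(session *gocql.Session, id string) *gocqlx.Queryx {
	stmt, names := qb.Select("events").Where(
		qb.Eq("id"),
		qb.GtOrEqNamed("date", "date_from"),
		qb.LtOrEqNamed("date", "date_to"),
	).ToCql()

	return gocqlx.Query(session.Query(stmt), names).BindMap(qb.M{
		"id":        id,
		"date_from": "2019-07-30",
		"date_to":   "2019-08-15",
	})
}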

Get/Select SHOULD NOT release queries

Currently, the Get and Select iterx methods call iter.ReleaseQuery(). This is a highly dangerous "convenience" which can lead to data races under common usage patterns.

The Iter constructor and the Get and Select functions accept a query as a parameter, and so from an API design perspective it makes the most sense to let clients (who created the query) manage the query lifecycle.

E.g. it's a pretty common pattern to do

q := createQuery()
defer q.Release()
// do stuff with query, err returns, etc
return result, nil

But if "doing stuff" includes a gocqlx.Select, this pattern will lead to Release being called twice. That means the query object now has two entries in the query pool, which will distribute it to two separate goroutines, and racy concurrent read/writes ensue. We had a very hard-to-track-down bug in our service caused by this.

In this case, the cost of not calling Release is minimal; the query will just be GC'd instead of recycled. It's safer and better to let clients do releasing themselves.

UPDATE like `field` + var

Hello.

Is it possible to do updates like the following using gocqlx?

UPDATE table SET field = field + "some string"....
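
Note that CQL itself only supports col = col + ? for counters and collection columns, not for text concatenation. For a collection column, a hand-written statement bound through gocqlx works (a sketch with a hypothetical users table and a tags set<text> column):

// Sketch: append a value to a set<text> column.
func appendTag(session *gocql.Session, userID, tag string) error {
	stmt := `UPDATE users SET tags = tags + ? WHERE id = ?`
	names := []string{"tags", "id"}
	return gocqlx.Query(session.Query(stmt), names).BindMap(qb.M{
		"tags": []string{tag},
		"id":   userID,
	}).ExecRelease()
}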

how to marshal/unmarshal custom types

Is there any interface we could implement for our types in order to achieve custom bindings?

//"CREATE TABLE testKS.Foos (id uuid PRIMARY KEY, user_id text, bars map<text,int>)"
var tableName = "testKS.Foos"

type Foo struct {
	ID        gocql.UUID `db:"id"`
	UserID    string     `db:"user_id"`
	Bars []Bar      `db:"bars"`
}

type Bar struct {
	Key   string `db:"key"`
	Value int    `db:"value"`
}

func FetchFoo(session *gocql.Session) []*Foo {
	qry, _ := qb.Select(tableName).ToCql()
	var items []*Foo
	err := gocqlx.Select(&items, session.Query(qry))
	if err != nil {
		log.Fatal(err) // can not marshal []Bar into map(varchar, int)
	}
	return items
}

func InsertFoo(session *gocql.Session, foo Foo) {
	qry, names := qb.Insert(tableName).
		Columns("id", "user_id", "bars").
		ToCql()
	err := gocqlx.Query(session.Query(qry), names).BindStruct(&foo).ExecRelease()
	if err != nil {
		log.Fatal(err) // can not unmarshal map(varchar, int) into *[]Bar
	}
}
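
One option (a sketch, not something documented by gocqlx) is to give the collection its own named type and implement gocql's Marshaler and Unmarshaler interfaces on it, converting between the Go slice and the CQL map<text, int>; with Foo.Bars declared as this type, BindStruct and Select then go through these methods:

// Bars converts between the Go representation ([]Bar) and the map<text,int> column.
type Bars []Bar

// MarshalCQL implements gocql.Marshaler.
func (b Bars) MarshalCQL(info gocql.TypeInfo) ([]byte, error) {
	m := make(map[string]int, len(b))
	for _, bar := range b {
		m[bar.Key] = bar.Value
	}
	return gocql.Marshal(info, m)
}

// UnmarshalCQL implements gocql.Unmarshaler.
func (b *Bars) UnmarshalCQL(info gocql.TypeInfo, data []byte) error {
	var m map[string]int
	if err := gocql.Unmarshal(info, data, &m); err != nil {
		return err
	}
	*b = (*b)[:0]
	for k, v := range m {
		*b = append(*b, Bar{Key: k, Value: v})
	}
	return nil
}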

cql where expression

hello,

is there any way to use qb to generate an expr() clause?

example:

	SELECT * FROM users WHERE expr(users_index, '{
		filter: {
		   type: "boolean",
		   must: [
			  {type: "wildcard", field: "name", value: "*a"}
		   ]
		}
	 }')

tokenbuilder for token(column) <= ?

Hi,

I'd like to create the queries used in https://www.scylladb.com/2017/03/28/parallel-efficient-full-table-scan-scylla/ (source code: https://github.com/scylladb/scylla-code-samples/tree/master/efficient_full_table_scan_example_code) via the gocqlx query builder.

As far as I understand, neither the TokenBuilder nor the Cmp type supports creating a query like
SELECT ... WHERE token(column) <= ? and later on binding an int64 token value to the placeholder.

Did I miss something? If not, would you be interested in a PR?
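
Until then, a workaround (a minimal sketch with a made-up example.t table) is to write the token() bounds by hand and still bind them by name through gocqlx:

// Sketch: token-range scan, bound by name.
func tokenRange(session *gocql.Session, start, end int64) *gocqlx.Queryx {
	stmt := `SELECT id FROM example.t WHERE token(id) > ? AND token(id) <= ?`
	names := []string{"start", "end"}
	return gocqlx.Query(session.Query(stmt), names).BindMap(qb.M{
		"start": start,
		"end":   end,
	})
}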

qb feature request: Literal value specification in "set", "where" and "if"

In cases where queries are setting or comparing against some constant value (e.g. NULL, 0, "", a particular value of an enum-style text field), it would improve usability and readability to specify that constant value as a literal during query construction rather than making it a query parameter and then binding it later.

A natural API might be to add "Lit" variants in the same way there are currently "Named" variants:
qb.EqLit(column string, value string)
qb.GtLit(column string, value string)
qb.InLit(column string, valueSet string...)
...
and probably a .SetLit(column string, value string)

I've made the inputs strings here to avoid the question of value marshaling - if the input really is constant or constant-like, writing out the string should not be too burdensome for clients.

Or, if that adds unacceptable bloat, there could be some
qb.CmpLit(expression string)
where the caller is responsible for including the operator. That reduces API size and may be the most flexible, but it opens a fairly inviting hole for misuse in an otherwise pretty prescriptive API.

I'm happy to send a PR implementing this (and almost just sent one), but figured it would be good to discuss the desired API first.

Support nested structs

What is the best way to have a property that is defined at the root level of a Cassandra table embedded inside a nested struct in Go?

For example

type Foo struct {
	CompanyUUID      gocql.UUID `json:"companyUUID" cql:"company_uuid"`
	Bar          *Bar   `json:"bar"`
	CreatedAt        time.Time  `cql:"created_at"`
	UpdatedAt        time.Time  `cql:"updated_at"`
}

type Bar struct {
	BazBah    bool     `json:"bazBah" cql:"baz_bah"`
}

CREATE TABLE IF NOT EXISTS {{.Keyspace}}.foo (
    company_uuid uuid,
    baz_bah boolean,
    created_at timestamp,
    updated_at timestamp,
    PRIMARY KEY (company_uuid)
);

A GetRelease on a Foo struct here fails with missing destination name "baz_bah" in *package.Foo

Supporting Counters

I could not find in the tests or documentation how to update a counter.
Is there support for this yet?

In the meantime, I can write queries using the underlying gocql.

If this feature doesn't exist, I would be interested in adding it. Counter statements can get quite long when written by hand, and leveraging struct binding would be useful.
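
For reference, the hand-written approach still composes with gocqlx struct binding (a sketch with a hypothetical page_views counter table):

// Sketch: page_views table with id text PRIMARY KEY and views counter.
type viewsDelta struct {
	Views int64  `db:"views"` // increment to apply
	ID    string `db:"id"`
}

func bumpViews(session *gocql.Session, id string, by int64) error {
	stmt := `UPDATE page_views SET views = views + ? WHERE id = ?`
	names := []string{"views", "id"}
	return gocqlx.Query(session.Query(stmt), names).
		BindStruct(viewsDelta{Views: by, ID: id}).
		ExecRelease()
}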

ToCql() doesn't quote reserved names

cql

CREATE TABLE IF NOT EXISTS "asgard"."session" (
  "workspace"         ASCII,
  "partition"         INT,
  "token"             BIGINT,
  "date"              TIMESTAMP,

  PRIMARY KEY (("workspace", "partition"), "token")
) WITH CLUSTERING ORDER BY ("token" DESC);

gocqlx

type session struct {
	Workspace   string     `cql:"workspace"`
	Partition   int32      `cql:"partition"`
	Token       int64      `cql:"token"`
	Date        time.Time  `cql:"date"`
}

obj := session{
	Workspace: "asgard",
	Partition: 5633,
	Token:     3156394525148,
	Date:      time.Now(),
}

stmt, names :=
	qb.Insert(`"asgard"."session"`).
		Columns("workspace", "partition", "token","date").
		ToCql()

q := gocqlx.Query(db.client.Query(stmt), names).
	BindStruct(obj)

Executing the program gives the error message:

line 1:52 no viable alternative at input 'token' (... "asgard"."session" (workspace,partition,[token]...)

Printing stmt gives this:

INSERT INTO "asgard"."session" (workspace,partition,token,date) ....

It should be

INSERT INTO "asgard"."session" (workspace,partition,"token",date) ....

No Not Equal (!=) Comparators

While implementing some SELECT queries that required a not equals comparator, I noticed that qb.Cmp lacks any Ne* funcs such as:

  • Ne(column string)
  • NeFunc(column string, fn *Func)
  • NeLit(column, literal string) e.g. column!=literal
  • NeNamed(column, name string) e.g. column!=?

Use case: using a query with a GtOrEqLit comparator while wanting to exclude the current row / item you're referencing, for example to make sure you're not returning duplicate articles, messages, etc. posted at or after a certain timestamp. Instead, you currently need to do this check after you get your results.

I believe a Ne operator would be in spec with https://docs.scylladb.com/getting-started/dml/#select-statement per the operators list:

operator         ::=  '=' | '<' | '>' | '<=' | '>=' | '!=' | IN | CONTAINS | CONTAINS KEY

Implement TTL() for UpdateBuilder

Add a func (b *InsertBuilder) LitTTL(ttl uint32) *InsertBuilder method.

According to https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlCreateTable.html#tabProp__cqlTableDefaultTTL, the maximum value for TTL is 630720000 (20 years), so uint32 will be enough.

Because right now, to set a TTL we need a lot of code to convert structs into a map, set a hardcoded _ttl value, and use BindMap() instead of BindStruct():

        ...

	values["_ttl"] = someVar
	stmt, names := qb.Insert(GetTableName()).Columns(fields...).TTL().ToCql()
	query := gocqlx.Query(GetQuery(), names).BindMap(values)

        ...

and we don't want to add a TTL field to our struct.

How to do pagination?

How do I achieve pagination of results? As of now, my workaround is to fetch everything up to a limit and manually manage the offset and pagination of the response in a Go controller.

What is the proper way of handling this with a gocqlx query?
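
One approach (a sketch using gocql's manual paging via PageSize and PageState together with gocqlx's Iterx; the items table and Item struct are made up) is to return the page state to the caller and feed it back on the next request:

// Item is a made-up row type for the example.
type Item struct {
	ID   string `db:"id"`
	Name string `db:"name"`
}

// fetchPage returns one page of rows plus the state needed to fetch the next one.
func fetchPage(session *gocql.Session, pageState []byte) ([]Item, []byte, error) {
	stmt, names := qb.Select("items").Columns("id", "name").Where(qb.Eq("bucket")).ToCql()
	q := gocqlx.Query(session.Query(stmt).PageSize(20).PageState(pageState), names).
		BindMap(qb.M{"bucket": 1})

	iter := gocqlx.Iter(q.Query)
	nextState := iter.PageState() // opaque token to hand back to the client

	var page []Item
	var it Item
	for iter.StructScan(&it) {
		page = append(page, it)
		if iter.WillSwitchPage() {
			break // stop at the page boundary instead of auto-fetching the next page
		}
	}
	return page, nextState, iter.Close()
}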

Await schema agreement post migration

This requires a patch to gocql. Gocql has a function awaitSchemaAgreement to wait for the schema to be propagated to all the nodes. We should export that on Session. Once that is done, we should call it after each migration.


Support for collections with frozen types.

CREATE TYPE address(id text, street text, landmark text);
CREATE TABLE person(id text, name text, address list<frozen<address>>);

type Person struct {
	Id      string    `db:"id"`
	Name    string    `db:"name"`
	Address []Address `db:"address"`
}

type Address struct {
	Id       string `db:"id"`
	Street   string `db:"street"`
	Landmark string `db:"landmark"`
}

Insert fails for Address using BindStruct, while BindMap is successful if the struct is converted to map[string]interface{} at all nested levels.

Add go.mod file

To make it usable with vgo, as described in golang/go#24301.
This is a relatively contained project that has the potential to work nicely without much work.
