
Scala ORM to query SQL databases from Scala via concise, type-safe, and familiar case classes and collection operations. Connects to Postgres, MySql, H2, and Sqlite out of the box


ScalaSql

ScalaSql is a Scala ORM library that allows type-safe, low-boilerplate querying of SQL databases, using "standard" Scala collection operations running against typed Table descriptions.

import scalasql._, SqliteDialect._

// Define your table model classes
case class City[T[_]](
    id: T[Int],
    name: T[String],
    countryCode: T[String],
    district: T[String],
    population: T[Long]
)
object City extends Table[City]

// Connect to your database (example uses SQLite via org.xerial:sqlite-jdbc:3.43.0.0)
val dataSource = new org.sqlite.SQLiteDataSource()
dataSource.setUrl("jdbc:sqlite:file.db")
lazy val dbClient = new scalasql.DbClient.DataSource(
  dataSource,
  config = new scalasql.Config {
    override def nameMapper(v: String) = v.toLowerCase() // Override default snake_case mapper
    override def logSql(sql: String, file: String, line: Int) = println(s"$file:$line $sql")
  }
)

dbClient.transaction { db =>

  // Initialize database table schema and data
  db.updateRaw(os.read(os.Path("scalasql/test/resources/world-schema.sql", os.pwd)))
  db.updateRaw(os.read(os.Path("scalasql/test/resources/world-data.sql", os.pwd)))

  // Adding up population of all cities in China
  val citiesPop = db.run(City.select.filter(_.countryCode === "CHN").map(_.population).sum)
  // SELECT SUM(city0.population) AS res FROM city city0 WHERE city0.countrycode = ?

  println(citiesPop)
  // 175953614

  // Finding the 6th-8th largest cities by population
  val fewLargestCities = db.run(
    City.select
        .sortBy(_.population).desc
        .drop(5).take(3)
        .map(c => (c.name, c.population))
  )
  // SELECT city0.name AS res__0, city0.population AS res__1
  // FROM city city0 ORDER BY res__1 DESC LIMIT ? OFFSET ?

  println(fewLargestCities)
  // Seq((Karachi, 9269265), (Istanbul, 8787958), (Ciudad de México, 8591309))
}
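The queries above deliberately mirror plain Scala collection operations. The same logic can be run against an in-memory Seq — a standalone sketch with hypothetical sample data, not the real world-database values:

```scala
// Plain-Scala analogue of the ScalaSql queries above, run against an in-memory Seq.
case class CityRow(name: String, countryCode: String, population: Long)

val cities = Seq(
  CityRow("Shanghai", "CHN", 9696300L),
  CityRow("Peking", "CHN", 7472000L),
  CityRow("Karachi", "PAK", 9269265L),
  CityRow("Istanbul", "TUR", 8787958L)
)

// Same shape as City.select.filter(_.countryCode === "CHN").map(_.population).sum
val chinaPop = cities.filter(_.countryCode == "CHN").map(_.population).sum
println(chinaPop) // 17168300

// Same shape as .sortBy(_.population).desc.drop(1).take(2)
val ranked = cities.sortBy(-_.population).drop(1).take(2).map(c => (c.name, c.population))
println(ranked) // List((Karachi,9269265), (Istanbul,8787958))
```

The ScalaSql versions differ only in that the operations are recorded and rendered to SQL rather than executed eagerly in memory.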

ScalaSql supports database connections to PostgreSQL, MySQL, Sqlite, and H2 databases. Support for additional databases can be easily added.

ScalaSql is a relatively new library, so please try it out, but be aware you may hit bugs or missing features! Please open Discussions for any questions, file Issues for any bugs you hit, or send Pull Requests if you are able to investigate and fix them!

Getting Started

To get started with ScalaSql, add it to your build.sc file as follows:

ivy"com.lihaoyi::scalasql:0.1.7"

ScalaSql supports Scala 2.13.x and Scala 3 (3.4.2 or later).
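If you use sbt rather than Mill, the equivalent dependency is (a sketch; adjust the version to the latest release):

```scala
// sbt equivalent of the Mill ivy"..." coordinate above
libraryDependencies += "com.lihaoyi" %% "scalasql" % "0.1.7"
```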

Documentation

  • ScalaSql Quickstart Examples: self-contained files showing how to set up ScalaSql to connect your Scala code to a variety of supported databases and perform simple DDL and SELECT/INSERT/UPDATE/DELETE operations.

  • ScalaSql Tutorial: a structured walkthrough of how to use ScalaSql, connecting to a database and writing queries to SELECT/INSERT/UPDATE/DELETE against it to perform useful work. Ideal for newcomers to work through from top to bottom when getting started with the library.

  • ScalaSql Cheat Sheet: a compact summary of the main features of ScalaSql and the syntax to make use of them.

  • ScalaSql Reference: a detailed listing of ScalaSql functionality, comprehensively covering everything that ScalaSql supports, in a single easily searchable place. Ideal for looking up exactly which methods/operators ScalaSql supports, how ScalaSql code translates to SQL, or how to express a given piece of SQL syntax using ScalaSql.

  • ScalaSql Design: discusses the design of the ScalaSql library, why it is built the way it is, what tradeoffs it makes, and how it compares to other common Scala database query libraries. Ideal for contributors who want to understand the structure of the ScalaSql codebase, or for advanced users who may need to understand enough to extend ScalaSql with custom functionality.

  • Developer Docs: things you should read if you want to make changes to the com-lihaoyi/scalasql codebase

Changelog

0.1.7

  • Add support for columns of type java.util.Date #24

0.1.6

  • Add support for non-default database schemas in Postgres #23

0.1.5

  • Properly pass ON CONFLICT column names through columnNameMapper #19

0.1.4

  • Second attempt at fixing invalid version of scala-reflect dependency

0.1.3

  • Support for Scala 3.4.2 #11

0.1.2

  • Support .getGeneratedKeys[R] #9

0.1.1

  • Fix invalid version of scala-reflect dependency

0.1.0

  • First release!

TODO

  • JSON columns
  • Add datetime functions
  • Make implicit ctx => for defining sql"..." snippets optional

scalasql's People

Contributors

aboisvert, haskiindahouse, jlvertol, lihaoyi, marcvk, mrdziuban


scalasql's Issues

Huge memory increase in `renderSql` with scala 2.13.13 (OOM)

I wanted to try this library and immediately got an OutOfMemoryError for a trivial select statement of a single row on a single table. Reducing the field count in the table definition allowed me to run the query, but it still took several seconds to generate the query, so something was clearly wrong. Reverting to Scala 2.13.12 (from 2.13.13) fixed the issue; apparently the last two commits to ArrayBuilder broke something: https://github.com/scala/scala/commits/5aa3dc5ea77800ddd2a3bdf9e7feeea57484421c/src/library/scala/collection/mutable/ArrayBuilder.scala

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at java.base/java.lang.reflect.Array.newArray(Native Method)
        at java.base/java.lang.reflect.Array.newInstance(Array.java:78)
        at java.base/java.util.Arrays.copyOf(Arrays.java:3514)
        at java.base/java.util.Arrays.copyOf(Arrays.java:3482)
        at scala.collection.mutable.ArrayBuilder$ofRef.mkArray(ArrayBuilder.scala:110)
        at scala.collection.mutable.ArrayBuilder$ofRef.resize(ArrayBuilder.scala:114)
        at scala.collection.mutable.ArrayBuilder.ensureSize(ArrayBuilder.scala:39)
        at scala.collection.mutable.ArrayBuilder.addAll(ArrayBuilder.scala:53)
        at scala.collection.mutable.ArrayBuilder.addAll(ArrayBuilder.scala:49)
        at scalasql.core.SqlStr$.rec$1(SqlStr.scala:102)
        at scalasql.core.SqlStr$.rec$1(SqlStr.scala:120)
        at scalasql.core.SqlStr$.rec$1(SqlStr.scala:120)
        at scalasql.core.SqlStr$.flatten(SqlStr.scala:134)
        at scalasql.query.SimpleSelect$Renderer.render(SimpleSelect.scala:271)
        at scalasql.query.CompoundSelect$Renderer.render(CompoundSelect.scala:119)
        at scalasql.query.Select.renderSql(Select.scala:193)
        at scalasql.query.Select.renderSql$(Select.scala:190)
        at scalasql.query.CompoundSelect.renderSql(CompoundSelect.scala:10)
        at scalasql.core.SqlStr$Renderable$.renderSql(SqlStr.scala:218)
        at scalasql.query.Query$Single.renderSql(Query.scala:83)
        at scalasql.query.Query$QueryQueryable.renderSql(Query.scala:72)
        at scalasql.query.Query$QueryQueryable.renderSql(Query.scala:66)
        at scalasql.core.DbApi$.unpackQueryable(DbApi.scala:112)
        at scalasql.core.DbApi$.renderSql(DbApi.scala:119)
        at scalasql.core.DbApi$Impl.renderSql(DbApi.scala:424)

Nested/Embedded datastructures possible?

Thank you for creating this!

Would it be possible to support nested or "embedded" case classes within a case class representation of a table?

Non-compiling example:

case class Customer[T[_]](
    name: T[String],
    address: T[Address[T[_]]]
)

object Customer extends Table[Customer]

case class Address[T[_]](
    street: T[String],
    city: T[String],
    zipCode: T[String],
    country: T[String]
)

The expected outcome in this case would be that the address value is treated as if its fields were flat in the customer table. This would allow sharing or reusing data structures in different contexts.

A probably better example would be a MonetaryAmount with a value and currency component; it is used in many different places once an app commits to storing values with currency.

Maybe there is some way to hook into something like a CompositeTypeMapper, where one can supply custom givens analogous to the TypeMapper typeclass.

For reference, I asked a somewhat similar question in another repo: AugustNagro/magnum#24. Obviously, it's tough for me to let go of this feature after years of using Java's Hibernate :)
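A common workaround today (a sketch of user-side code, not a ScalaSql feature) is to flatten the embedded fields into the table class by hand, keeping the reusable type only at the edges of the application:

```scala
// Hand-flattened version of the Customer/Address example above.
// `Id` is a hypothetical local stand-in for an identity type constructor,
// used here only so the example runs without a database.
type Id[A] = A

case class Customer[T[_]](
    name: T[String],
    addressStreet: T[String],
    addressCity: T[String],
    addressZipCode: T[String],
    addressCountry: T[String]
)

val c = Customer[Id]("Acme Corp", "1 Main St", "Springfield", "12345", "US")
println(c.addressCity) // Springfield
```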

Using the H2Dialect, how does one retrieve auto-generated keys?

Discussed in #7

Originally posted by aboisvert April 10, 2024
The H2Dialect doesn't support the 'returning' clause, so I'm wondering what's the recommended way to retrieve auto-generated keys? (e.g. upon insert, it's typical to retrieve a row's generated primary key such as id)

This is typically done at a low-level through JdbcStatement.getGeneratedKeys but I don't see this capability available through the scalasql abstractions.
https://h2database.com/javadoc/org/h2/jdbc/JdbcStatement.html#getGeneratedKeys--

Invalid published scalasql-core POM file (Error resolving scalasql_2.13 0.1.0)

I added the scalasql dependency to my sbt build:

    libraryDependencies ++= Seq(
      "com.lihaoyi" % "scalasql_2.13" % "0.1.0",
      ...
    )

And when SBT tries to resolve it, I get the following error:

sbt:webapp> compile
[info] Updating 
https://repo1.maven.org/maven2/com/lihaoyi/scalasql_2.13/0.1.0/scalasql_2.13-0.1.0.pom
  100.0% [##########] 2.5 KiB (8.5 KiB / s)
https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-slf4j-impl/2.20.0/log4j-slf4j-impl-2.20.0.pom
  100.0% [##########] 7.5 KiB (78.9 KiB / s)
https://repo1.maven.org/maven2/com/lihaoyi/scalasql-operations_2.13/0.1.0/scalasql-operations_2.13-0.1.0.pom
  100.0% [##########] 1.6 KiB (14.2 KiB / s)
https://repo1.maven.org/maven2/com/lihaoyi/scalasql-query_2.13/0.1.0/scalasql-query_2.13-0.1.0.pom
  100.0% [##########] 1.6 KiB (6.3 KiB / s)
https://repo1.maven.org/maven2/com/lihaoyi/scalasql-core_2.13/0.1.0/scalasql-core_2.13-0.1.0.pom
  100.0% [##########] 2.2 KiB (17.5 KiB / s)
[info] Resolved  dependencies
[warn] 
[warn] 	Note: Unresolved dependencies path:
[error] stack trace is suppressed; run last update for the full output
[error] (update) sbt.librarymanagement.ResolveException: Error downloading org.scala-lang:scala-reflect:scalasql[2.13.12].scalaVersion
[error]   Not found
[error]   Not found
[error]   not found: /home/boisvert/.ivy2/local/org.scala-lang/scala-reflect/scalasql[2.13.12].scalaVersion/ivys/ivy.xml
[error]   not found: https://repo1.maven.org/maven2/org/scala-lang/scala-reflect/scalasql[2.13.12].scalaVersion/scala-reflect-scalasql[2.13.12].scalaVersion.pom
[error] Total time: 1 s, completed Apr 5, 2024, 6:36:08 PM

Indeed when inspecting the scalasql-core POM file at https://repo1.maven.org/maven2/com/lihaoyi/scalasql-core_2.13/0.1.0/scalasql-core_2.13-0.1.0.pom it looks like it contains the following ill-formed dependency:

        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-reflect</artifactId>
            <version>scalasql[2.13.12].core.scalaVersion</version>
        </dependency>

Scala reflect error

I tried importing the dependency using Maven and got this error: "Could not find artifact org.scala-lang:scala-reflect:pom:scalasql[2.13.12].scalaVersion in central (https://repo.maven.apache.org/maven2)". I tried different 2.13.x versions and all the different versions of scalasql. Can I get some insights on how to fix this?

Add support for Microsoft SQL server (500USD Bounty)


From the maintainer Li Haoyi: I'm putting a 500USD bounty on this issue, payable by bank transfer on a merged PR implementing this.


This should be relatively straightforward to add as a new dialect next to Postgres, MySQL, H2, and Sqlite. It should cover as much of the ScalaSql API as possible, with exceptions or unsupported functionality marked with explanation comments in https://github.com/com-lihaoyi/scalasql/blob/main/scalasql/test/src/ConcreteTestSuites.scala. Tests should follow the existing pattern, and I believe Testcontainers, which we use for Postgres and MySQL, should already have SQL Server containers ready to go.

Compilation fails with Scala 3 -Xcheck-macros

When I compile my project with Scala 3.4.2 and the -Xcheck-macros compiler option, I get an error for every Table in my domain model:

[error] -- Error: /home/boisvert/fanstake/git/monorepo/webapp/src/main/scala/fanstake/model/views/RecruitsCurrentSeason.scala:24:42
[error] 24 | extends Table[RecruitCurrentSeasonRow]
[error] | ^
[error] |Exception occurred while executing macro expansion.
[error] |scala.quoted.runtime.impl.ScopeException: Expression created in a splice was used outside of that splice.
[error] |Created in: scalasql/query/src-3/TableMacro.scala:111 at column 15
[error] |Used in: scalasql/query/src-3/TableMacro.scala:160 at column 75
[error] |Type: [T] =>> T
[error] |
[error] |
[error] |Creation stack:
[error] | scalasql/query/src-3/TableMacro.scala:111 at column 15
[error] | scalasql/query/src-3/TableMacro.scala:160 at column 75
[error] |
[error] |
[error] |Use stack:
[error] | scalasql/query/src-3/TableMacro.scala:160 at column 75
[error] |
[error] |
[error] |Hint: A common reason for this to happen is when a def that creates a '{...}
[error] | captures an outer instance of Quotes. If this def is called in a splice
[error] | it will not track the Quotes provided by that particular splice.
[error] | To fix it add a given Quotes to this def.
[error] |
[error] | at scala.quoted.runtime.impl.ScopeException$.checkInCorrectScope(ScopeException.scala:35)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$.quotedTypeToTree(PickledQuotes.scala:48)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$.$anonfun$1(PickledQuotes.scala:178)
[error] | at scala.collection.Iterator$$anon$9.next(Iterator.scala:584)
[error] | at scala.collection.mutable.Growable.addAll(Growable.scala:62)
[error] | at scala.collection.mutable.Growable.addAll$(Growable.scala:57)
[error] | at scala.collection.immutable.MapBuilderImpl.addAll(Map.scala:710)
[error] | at scala.collection.immutable.Map$.from(Map.scala:661)
[error] | at scala.collection.IterableOnceOps.toMap(IterableOnce.scala:1320)
[error] | at scala.collection.IterableOnceOps.toMap$(IterableOnce.scala:1319)
[error] | at scala.collection.AbstractIterator.toMap(Iterator.scala:1300)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$.spliceTypes(PickledQuotes.scala:180)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$.unpickleTypeTree(PickledQuotes.scala:97)
[error] | at scala.quoted.runtime.impl.QuotesImpl.unpickleTypeV2(QuotesImpl.scala:3228)
[error] | at scalasql.query.TableMacros$.constructV$1(TableMacro.scala:42)
[error] | at scalasql.query.TableMacros$.$anonfun$3$$anonfun$2(TableMacro.scala:113)
[error] | at scalasql.query.TableMacros$.$anonfun$3(TableMacro.scala:107)
[error] | at scalasql.query.TableMacros$.$anonfun$adapted$3(TableMacro.scala:123)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$$anon$1.transform(PickledQuotes.scala:111)
[error] | at dotty.tools.dotc.ast.Trees$Instance$TreeMap.transform(Trees.scala:1568)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$$anon$1.transform(PickledQuotes.scala:136)
[error] | at dotty.tools.dotc.ast.Trees$Instance$TreeMap.transform(Trees.scala:1600)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$$anon$1.transform(PickledQuotes.scala:136)
[error] | at dotty.tools.dotc.ast.tpd$TreeMapWithPreciseStatContexts.loop$2(tpd.scala:1254)
[error] | at dotty.tools.dotc.ast.tpd$TreeMapWithPreciseStatContexts.transformStats(tpd.scala:1254)
[error] | at dotty.tools.dotc.ast.tpd$TreeMapWithPreciseStatContexts.transformBlock(tpd.scala:1259)
[error] | at dotty.tools.dotc.ast.Trees$Instance$TreeMap.transform(Trees.scala:1548)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$$anon$1.transform(PickledQuotes.scala:136)
[error] | at dotty.tools.dotc.ast.Trees$Instance$TreeMap.transform(Trees.scala:1544)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$$anon$1.transform(PickledQuotes.scala:136)
[error] | at dotty.tools.dotc.ast.Trees$Instance$TreeMap.transform$$anonfun$1(Trees.scala:1640)
[error] | at scala.collection.immutable.List.mapConserve(List.scala:472)
[error] | at dotty.tools.dotc.ast.Trees$Instance$TreeMap.transform(Trees.scala:1640)
[error] | at dotty.tools.dotc.ast.Trees$Instance$TreeMap.transform(Trees.scala:1534)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$$anon$1.transform(PickledQuotes.scala:136)
[error] | at dotty.tools.dotc.ast.tpd$TreeMapWithPreciseStatContexts.transformBlock$$anonfun$1$$anonfun$1(tpd.scala:1259)
[error] | at dotty.tools.dotc.ast.tpd$TreeMapWithPreciseStatContexts.loop$2(tpd.scala:1241)
[error] | at dotty.tools.dotc.ast.tpd$TreeMapWithPreciseStatContexts.transformStats(tpd.scala:1254)
[error] | at dotty.tools.dotc.ast.tpd$TreeMapWithPreciseStatContexts.transformBlock(tpd.scala:1259)
[error] | at dotty.tools.dotc.ast.Trees$Instance$TreeMap.transform(Trees.scala:1548)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$$anon$1.transform(PickledQuotes.scala:136)
[error] | at dotty.tools.dotc.ast.Trees$Instance$TreeMap.transform(Trees.scala:1600)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$$anon$1.transform(PickledQuotes.scala:136)
[error] | at dotty.tools.dotc.ast.tpd$TreeMapWithPreciseStatContexts.loop$2(tpd.scala:1254)
[error] | at dotty.tools.dotc.ast.tpd$TreeMapWithPreciseStatContexts.transformStats(tpd.scala:1254)
[error] | at dotty.tools.dotc.ast.tpd$TreeMapWithPreciseStatContexts.transformBlock(tpd.scala:1259)
[error] | at dotty.tools.dotc.ast.Trees$Instance$TreeMap.transform(Trees.scala:1548)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$$anon$1.transform(PickledQuotes.scala:136)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$.spliceTerms(PickledQuotes.scala:153)
[error] | at dotty.tools.dotc.quoted.PickledQuotes$.unpickleTerm(PickledQuotes.scala:89)
[error] | at scala.quoted.runtime.impl.QuotesImpl.unpickleExprV2(QuotesImpl.scala:3220)
[error] | at scalasql.query.TableMacros$.applyImpl(TableMacro.scala:123)
[error] |
[error] |---------------------------------------------------------------------------
[error] |Inline stack trace
[error] |- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
[error] |This location contains code that was inlined from TableMacro.scala:160
[error] ---------------------------------------------------------------------------

Without this compiler option, everything works fine. However, I want to compile with -Xcheck-macros to debug some other macro issue (unrelated to scalasql).

Imported custom TypeMapper doesn't work in Scala 2.13.14

Example code below. The problem goes away with Scala 3.4.2.

lihaoyi scalasql-test$ cat ./project/build.properties

sbt.version=1.9.9
lihaoyi scalasql-test$ cat ./build.sbt

val scalasqlTest = (project in file("."))
  .settings(
    scalaVersion := "2.13.14",
    libraryDependencies ++= Seq(
      "com.lihaoyi" %% "scalasql" % "0.1.4",
    )
  )
lihaoyi scalasql-test$ cat ./src/main/scala/test.scala

import scalasql.{Expr, Sc, Table}
import java.sql.{JDBCType, PreparedStatement, ResultSet}
import scalasql.core.{Queryable, SqlStr, TypeMapper}
import java.util.Date

object UtilDate {

  implicit val UtilDateMapper: TypeMapper[Date] = new UtilDateType
  class UtilDateType extends TypeMapper[Date] {
    def jdbcType = JDBCType.TIMESTAMP
    def get(r: ResultSet, idx: Int) = new Date(r.getDate(idx).getTime)
    def put(r: PreparedStatement, idx: Int, v: Date) =
      r.setDate(idx, new java.sql.Date(v.getTime))
  }
}
import UtilDate._

case class TimeInterval[F[_]](
    beginTime: F[Date],
)
object TimeInterval extends Table[TimeInterval]

object Main extends App {
  println("hallo")
}
lihaoyi scalasql-test$ sbt compile

[info] welcome to sbt 1.9.9 (Homebrew Java 22.0.1)
[info] loading project definition from /Users/lihaoyi/Downloads/scalasql-test/project
[info] loading settings for project scalasqlTest from build.sbt ...
[info] set current project to scalasqlTest (in build file:/Users/lihaoyi/Downloads/scalasql-test/)
[info] Executing in batch mode. For better performance use sbt's shell
[info] compiling 1 Scala source to /Users/lihaoyi/Downloads/scalasql-test/target/scala-2.13/classes ...
[error] /Users/lihaoyi/Downloads/scalasql-test/src/main/scala/test.scala:21:29: could not find implicit value for parameter e: scalasql.Queryable.Row[scalasql.core.Expr[java.util.Date],java.util.Date]
[error] object TimeInterval extends Table[TimeInterval]
[error]                             ^
[error] one error found
[error] (Compile / compileIncremental) Compilation failed
[error] Total time: 1 s, completed 11 Aug 2024, 9:36:21 am

Scala 3 Support (1000USD Bounty)

Currently ScalaSql only supports Scala 2.13. Getting it working on Scala 3 would involve porting the table macros, which is one ~100 line file https://github.com/com-lihaoyi/scalasql/blob/main/scalasql/query/src-2/TableMacro.scala. I don't have the knowledge of Scala 3 macros to do it myself, but it shouldn't be hard to port, and the rest of the ScalaSql library should "just work".

To incentivize contribution, I'm putting a 1000USD bounty on resolving this ticket to the first person who can get all the ScalaSql tests passing on Scala 3. This is payable via bank transfer, and at my discretion in case of ambiguity.

onConflictUpdate() does not apply database-level column naming convention

Here is a relevant stacktrace:

[error] org.postgresql.util.PSQLException: ERROR: column "athleteid" does not exist
[error]   Hint: Perhaps you meant to reference the column "recruit_rosters.athlete_id" or the column "excluded.athlete_id".
[error]   Position: 158
[error] 	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2713)
[error] 	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2401)
[error] 	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:368)
[error] 	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:498)
[error] 	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:415)
[error] 	at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190)
[error] 	at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:152)
[error] 	at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
[error] 	at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
[error] 	at scalasql.core.DbApi$.scalasql$core$DbApi$Impl$$_$runRawUpdate0$$anonfun$3(DbApi.scala:456)
[error] 	at scalasql.core.DbApi$Impl.configureRunCloseStatement(DbApi.scala:504)
[error] 	at scalasql.core.DbApi$Impl.runRawUpdate0(DbApi.scala:456)
[error] 	at scalasql.core.DbApi$Impl.updateSql(DbApi.scala:284)
[error] 	at scalasql.core.DbApi$Impl.run(DbApi.scala:203)
[error] 	at fanstake.datasync.RecruitsLoader$.insertOrUpdateRecruitRosters$$anonfun$1(RecruitsLoader.scala:307)

Note that the column athleteid should actually be athlete_id. The field athleteId (as it is named in the case class) is properly converted to snake_case in other queries, just not in the ON CONFLICT clause.

Here's the relevant part of the code:

        RecruitRosters.insert
          .columns(
            _.athleteId := r.athleteId,
            _.seasonId := r.seasonId,
            _.teamId := r.teamId,
            _.recruitingStatus := r.recruitingStatus,
            _.recruitingStatusDate := r.recruitingStatusDate,
            _.details := r.details
          )
          .onConflictUpdate(_.athleteId, _.seasonId)(
            _.teamId := r.teamId,
            _.recruitingStatus := r.recruitingStatus,
            _.recruitingStatusDate := r.recruitingStatusDate,
            _.details := r.details
          )
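The camelCase-to-snake_case conversion that should also apply inside the ON CONFLICT clause can be sketched as a small standalone function (an illustration of the naming convention, not ScalaSql's actual columnNameMapper implementation):

```scala
// Convert a camelCase identifier to snake_case, the convention applied
// to field names like `athleteId` when mapping them to column names.
def snakeCase(name: String): String =
  name.flatMap(c => if (c.isUpper) "_" + c.toLower else c.toString)

println(snakeCase("athleteId"))            // athlete_id
println(snakeCase("recruitingStatusDate")) // recruiting_status_date
```

The bug is that this mapping is applied to the insert columns but skipped for the column names listed in onConflictUpdate.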

flatMap is unsafe

I did a lot of thinking recently in my own library, and came to the conclusion that I could not implement flatMap safely for all SQL dialects. I wondered whether ScalaSql was susceptible to the same problem, so I checked, and it is. Here is an example of a query which will not run because of invalid syntax:

dbClient.renderSql(City.select.flatMap(c => City.select.sortBy(_.name).map(_ => c.id).crossJoin().map(r => r)))
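Part of what makes this surprising is that the equivalent plain-collections program is perfectly well defined; it is only the rendering of a sorted subquery inside a cross join as valid SQL that breaks. A standalone collections analogue, with hypothetical data:

```scala
// The same flatMap shape over in-memory data works fine; the problem is
// purely in translating it to valid SQL in every dialect.
case class C(id: Int, name: String)
val rows = Seq(C(1, "b"), C(2, "a"))

val ids = rows.flatMap(c => rows.sortBy(_.name).map(_ => c.id))
println(ids) // List(1, 1, 2, 2)
```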
