
Starburst Metabase driver

License: Apache License 2.0

Makefile 4.45% Clojure 94.26% Dockerfile 0.81% Shell 0.48%
metabase starburst jdbc trino business-intelligence

metabase-driver's Introduction

Metabase Starburst Driver

Driver Plugin Installation

The Starburst Driver .jar binary can be downloaded from our releases on GitHub. Once downloaded, it must be placed in the plugins directory within the Metabase working directory. See: https://www.metabase.com/docs/latest/developers-guide-drivers.html#how-to-use-a-third-party-driver
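For example, on a typical installation the last step looks roughly like this (a minimal sketch; the plugins path and the release version depend on your setup):

    # after downloading the release .jar from the GitHub releases page
    cp starburst-4.1.0.metabase-driver.jar /path/to/metabase/plugins/
    # restart Metabase so the plugin is picked up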

App Supported Versions

Please see supported versions for this release.

Developers Guide Drivers

Our build scripts will git clone the Metabase repo to ./metabase and copy all Starburst driver files into the expected folder: ./metabase/modules/drivers/starburst. From here, we leverage the Metabase build scripts to build our .jar file into the correct dir, run tests, and start the local server.
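After make build, the working tree looks roughly like this (an illustrative layout based on the paths mentioned above):

    ./metabase/                               # cloned Metabase repo
    ./metabase/modules/drivers/starburst/     # Starburst driver sources copied in by the build scripts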

Prerequisites

Quick Start

Run make build test to build and run tests locally. If everything passes, you're good to go!

Building

Run make build to clone the Metabase repo and build all required components. This command should not need to be run again unless you manually git pull new changes into your local Metabase repo.

Force Updating Metabase Repo

You can manually run git commands to update the cloned Metabase repo, but if you want to remove the local Metabase repo and re-clone it with the latest changes, run make clean.

Starting Local Server

Once you have built all required resources with the make build command, run make server. This command starts a local Metabase server on port 3000. If you want to build the driver and then start a server, run make driver server.

Testing Driver

Once you have built all required resources with the make build command, run make test. This command builds your local driver changes and then starts Starburst driver tests.

Starting a Trino Server in Docker

Running make test will start a Trino server for you on port 8082 when needed, but if you want to start one manually, you can run make start_trino_if_missing. Run make testOptimized to test Metabase with the "Optimized prepared statements" flag on.

Note: Running make test will populate the Trino catalogs with mock data that is used for testing. You can then connect Metabase to this Trino server to view that data. This is useful for manual testing.
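For example, a typical manual-testing loop might look like this (a sketch using only the make targets described in this README; ports 8082 and 3000 are the defaults mentioned above):

    make build                     # one-time: clone Metabase and build required components
    make start_trino_if_missing    # start a local Trino server on port 8082
    make test                      # build the driver, populate the Trino catalogs with mock data, run driver tests
    make driver server             # rebuild the driver and start a local Metabase server on port 3000
    # then add the local Trino server as a database in Metabase to browse the mock data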

Executing Specific Tests

You can cd into the metabase repo and run commands like:

DRIVERS=starburst clojure -X:dev:drivers:drivers-dev:test :only metabase.query-processor-test.timezones-test/filter-test

DRIVERS=starburst clojure -J-DexplicitPrepare=false -X:dev:drivers:drivers-dev:test :only metabase.query-processor-test.timezones-test/filter-test

or even

DRIVERS=starburst clojure -X:dev:drivers:drivers-dev:test :only metabase.query-processor-test.timezones-test

Releasing Driver

To create a release from the main branch, follow the steps below.

1.) Create a Release Branch

Head to Actions and run the Create Release Branch workflow, entering the correct semantic version, and submit the workflow. This will create a new PR that contains a promoted CHANGELOG.md and updated version info. Review the PR and merge once checks pass.

2.) Create the Release Artifact

Head to Actions and run the Release workflow, entering the same semantic version, and submit the release workflow. When this workflow completes, it will create a new tagged GitHub release with the provided version and upload the .jar binary for clients to consume.

Update Metabase Version

If needed, make checkout_latest_metabase_tag will update Metabase to its latest tagged release.

CAUTION: the Metabase test file metabase/test/metabase/driver_test.clj is overridden by a modified version in the root directory (see the Makefile). This is because two tests (can-connect-with-destroy-db-test and check-can-connect-before-sync-test) do not work with the Starburst driver, as they test what happens when a database is deleted (which cannot happen with Starburst). Rather than adding code to can-connect? for the sole purpose of satisfying those tests, it was preferable to simply remove them.

Whenever upgrading the Metabase version, ./driver_test.clj should be regenerated from metabase/test/metabase/driver_test.clj with the two offending tests removed (unless they pass or there is a clean way around them).
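A rough sketch of that refresh, assuming the override file lives at the repo root as described above (the exact Makefile mechanics may differ):

    # regenerate the root override from the newly checked-out Metabase version
    cp metabase/test/metabase/driver_test.clj ./driver_test.clj
    # then delete the two offending deftest forms from ./driver_test.clj:
    #   can-connect-with-destroy-db-test
    #   check-can-connect-before-sync-test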

References

Contact

For feature requests and bugs, please create an issue in this repo. For further support, see the following resources:

metabase-driver's People

Contributors

andrewdibiasio6, jeancmsantana, lpoulain, maiquu, web-flow


metabase-driver's Issues

Date Difference is not working

When we use the newly added date diff custom expression, we get a Trino error: the generated query tries to use AT TIME ZONE on a DATE data type, which is not supported.

Here is the query sent to Trino:

-- Metabase:: userID: 1 queryType: MBQL queryHash: 4243acab790001d0d88dc444c8ae5a88f665cdd88d1157ea3836b820a8407964
SELECT
    "source"."orderkey" AS "orderkey",
    "source"."custkey" AS "custkey",
    "source"."orderstatus" AS "orderstatus",
    "source"."totalprice" AS "totalprice",
    "source"."orderdate" AS "orderdate",
    "source"."orderpriority" AS "orderpriority",
    "source"."clerk" AS "clerk",
    "source"."shippriority" AS "shippriority",
    "source"."comment" AS "comment",
    "source"."datetimeDiff" AS "datetimeDiff"
FROM
    (
        SELECT
            "sf1"."orders"."orderkey" AS "orderkey",
            "sf1"."orders"."custkey" AS "custkey",
            "sf1"."orders"."orderstatus" AS "orderstatus",
            "sf1"."orders"."totalprice" AS "totalprice",
            "sf1"."orders"."orderdate" AS "orderdate",
            "sf1"."orders"."orderpriority" AS "orderpriority",
            "sf1"."orders"."clerk" AS "clerk",
            "sf1"."orders"."shippriority" AS "shippriority",
            "sf1"."orders"."comment" AS "comment",
            date_diff(
                'day',
                date("sf1"."orders"."orderdate" AT TIME ZONE 'GMT'),
                date("sf1"."orders"."orderdate" AT TIME ZONE 'GMT')
            ) AS "datetimeDiff"
        FROM
            "sf1"."orders"
    ) "source"
LIMIT
    2000

"sf1"."orders"."orderdate" is a DATE column.

Error:

io.trino.spi.TrinoException: line 1:809: Type of value must be a time or timestamp with or without time zone (actual varchar(20))
	at io.trino.sql.analyzer.SemanticExceptions.semanticException(SemanticExceptions.java:48)
	at io.trino.sql.analyzer.SemanticExceptions.semanticException(SemanticExceptions.java:43)
	at io.trino.sql.analyzer.ExpressionAnalyzer$Visitor.visitAtTimeZone(ExpressionAnalyzer.java:1968)
	at io.trino.sql.analyzer.ExpressionAnalyzer$Visitor.visitAtTimeZone(ExpressionAnalyzer.java:581)
	at io.trino.sql.tree.AtTimeZone.accept(AtTimeZone.java:62)
	...

I think we can drop the casting here

See Trino's date interval functions: https://trino.io/docs/current/functions/datetime.html#interval-functions

Valid queries in Trino:

SELECT date_diff('hour', DATE '2020-02-29', TIMESTAMP '2020-03-01 02:00');

SELECT date_diff('hour', TIMESTAMP '2020-03-01', TIMESTAMP '2020-03-01 02:00');

SELECT date_diff('day', DATE '2020-02-29', DATE '2020-03-01');
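In other words, for DATE columns the generated expression could skip the AT TIME ZONE cast entirely, along the lines of (a sketch of the proposed fix, not the driver's current output):

    date_diff(
        'day',
        "sf1"."orders"."orderdate",
        "sf1"."orders"."orderdate"
    ) AS "datetimeDiff"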

Success Criteria

  • A new unit test is added to capture this issue for date
  • Tests pass and manual testing passes as well.

Upgrade to Metabase 1.46.4

Now that Metabase 1.46.4 (which disables the problematic test for Starburst) has been released, upgrade to that version

Trying to cast timestamp when report timezone is set

Timestamps always show up in the configured report timezone, even when cast using AT TIME ZONE.

The Metabase report timezone is set to US/Pacific, and results are correct for US/Pacific. However, they do not show up correctly when I cast them back to UTC or any other time zone. The same thing happens if the at_timezone function is used.

Expected behavior:
_col1 and _col2 to show up as '2012-10-31 01:00 UTC' and '2022-01-01 00:00:00 UTC' respectively
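For reference, the kind of query being run looks roughly like this (a hypothetical reconstruction based on the description above; the exact SQL was not included in the report):

    SELECT
        TIMESTAMP '2012-10-31 01:00 UTC' AT TIME ZONE 'UTC'      AS _col1,  -- expected to render as 2012-10-31 01:00 UTC
        at_timezone(TIMESTAMP '2022-01-01 00:00:00 UTC', 'UTC')  AS _col2   -- expected to render as 2022-01-01 00:00:00 UTC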

Screen Shot 2022-11-22 at 11 22 54 PM

Metabase can't sync tables

(As reported in metabase/metabase#23665)

Describe the bug
Right after connecting Trino to Metabase, the "Syncing tables..." indicator has been stuck in the bottom-right corner for days. When browsing data, "This database doesn't have any tables" is shown. However, queries work fine.

Logs

2022-07-01 11:25:00,017 INFO sync.util :: STARTING: Sync metadata for starburst Banco de dados 6 'Trino'
2022-07-01 11:25:00,018 INFO driver.impl :: Inicializando driver :starburst...
2022-07-01 11:25:00,019 INFO plugins.classloader :: Added URL file:/plugins/starburst-1.0.4.metabase-driver.jar to classpath
2022-07-01 11:25:00,021 DEBUG plugins.init-steps :: Carregando namespace de plugin metabase.driver.starburst...
2022-07-01 11:25:00,035 INFO driver.impl :: Driver abstrato registrado :metabase.driver.sql-jdbc.execute.legacy-impl/use-legacy-classes-for-read-and-set
2022-07-01 11:25:00,043 INFO driver.impl :: Driver registrado :starburst (pais: [:metabase.driver.sql-jdbc.execute.legacy-impl/use-legacy-classes-for-read-and-set])
2022-07-01 11:25:00,176 DEBUG plugins.jdbc-proxy :: Registrando driver de proxy JDBC para io.trino.jdbc.TrinoDriver...
2022-07-01 11:25:00,180 INFO metabase.util :: Carregando Metabase driver :starburst took 159.7 ms
2022-07-01 11:25:00,181 INFO sync.util :: STARTING: passo 'sync-timezone' para starburst Banco de dados 6 'Trino'
2022-07-01 11:25:00,181 INFO driver.impl :: Inicializando driver :metabase.driver.sql-jdbc.execute.legacy-impl/use-legacy-classes-for-read-and-set...
2022-07-01 11:25:00,922 INFO sync.util :: FINISHED: passo 'sync-timezone' para starburst Banco de dados 6 'Trino' (740.8 ms)
2022-07-01 11:25:00,923 INFO sync.util :: STARTING: passo 'sync-tables' para starburst Banco de dados 6 'Trino'
2022-07-01 11:25:00,938 INFO sync.util :: FINISHED: passo 'sync-tables' para starburst Banco de dados 6 'Trino' (14.6 ms)
2022-07-01 11:25:00,939 INFO sync.util :: STARTING: passo 'sync-fields' para starburst Banco de dados 6 'Trino'
2022-07-01 11:25:00,941 INFO sync.util :: FINISHED: passo 'sync-fields' para starburst Banco de dados 6 'Trino' (1.7 ms)
2022-07-01 11:25:00,941 INFO sync.util :: STARTING: passo 'sync-fks' para starburst Banco de dados 6 'Trino'
2022-07-01 11:25:00,942 INFO sync.util :: FINISHED: passo 'sync-fks' para starburst Banco de dados 6 'Trino' (1.2 ms)
2022-07-01 11:25:00,943 INFO sync.util :: STARTING: passo 'sync-metabase-metadata' para starburst Banco de dados 6 'Trino'
2022-07-01 11:25:00,961 WARN sync.util :: Error syncing _metabase_metadata table for starburst Banco de dados 6 'Trino'
java.sql.SQLException: Query failed (#20220701_142500_00059_xr5qr): line 1:43: mismatched input '<EOF>'. Expecting: <identifier>
	at io.trino.jdbc.AbstractTrinoResultSet.resultsException(AbstractTrinoResultSet.java:1937)
	at io.trino.jdbc.TrinoResultSet.getColumns(TrinoResultSet.java:285)
	at io.trino.jdbc.TrinoResultSet.create(TrinoResultSet.java:61)
	at io.trino.jdbc.TrinoStatement.internalExecute(TrinoStatement.java:262)
	at io.trino.jdbc.TrinoStatement.execute(TrinoStatement.java:240)
	at io.trino.jdbc.TrinoPreparedStatement.<init>(TrinoPreparedStatement.java:119)
	at io.trino.jdbc.TrinoConnection.prepareStatement(TrinoConnection.java:159)
	at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:567)
	at clojure.java.jdbc$prepare_statement.invokeStatic(jdbc.clj:679)
	at clojure.java.jdbc$prepare_statement.invoke(jdbc.clj:626)
	at clojure.java.jdbc$db_query_with_resultset_STAR_.invokeStatic(jdbc.clj:1105)
	at clojure.java.jdbc$db_query_with_resultset_STAR_.invoke(jdbc.clj:1093)
	at clojure.java.jdbc$reducible_query$reify__28497.reduce(jdbc.clj:1357)
	at clojure.core$transduce.invokeStatic(core.clj:6885)
	at clojure.core$into.invokeStatic(core.clj:6901)
	at clojure.core$into.invoke(core.clj:6889)
	at metabase.driver.implementation.sync$all_schemas.invokeStatic(sync.clj:115)
	at metabase.driver.implementation.sync$all_schemas.invoke(sync.clj:110)
	at metabase.driver.implementation.sync$fn__79308.invokeStatic(sync.clj:126)
	at metabase.driver.implementation.sync$fn__79308.invoke(sync.clj:121)
	at clojure.lang.MultiFn.invoke(MultiFn.java:234)
	at metabase.sync.fetch_metadata$fn__64168$db_metadata__64173$fn__64174.invoke(fetch_metadata.clj:14)
	at metabase.sync.fetch_metadata$fn__64168$db_metadata__64173.invoke(fetch_metadata.clj:11)
	at metabase.sync.sync_metadata.metabase_metadata$fn__65145$sync_metabase_metadata_BANG___65150$fn__65151$fn__65152.invoke(metabase_metadata.clj:100)
	at metabase.sync.util$do_with_error_handling.invokeStatic(util.clj:158)
	at metabase.sync.util$do_with_error_handling.invoke(util.clj:151)
	at metabase.sync.sync_metadata.metabase_metadata$fn__65145$sync_metabase_metadata_BANG___65150$fn__65151.invoke(metabase_metadata.clj:93)
	at metabase.sync.sync_metadata.metabase_metadata$fn__65145$sync_metabase_metadata_BANG___65150.invoke(metabase_metadata.clj:88)
	at clojure.lang.AFn.applyToHelper(AFn.java:154)
	at clojure.lang.AFn.applyTo(AFn.java:144)
	at clojure.core$apply.invokeStatic(core.clj:669)
	at clojure.core$apply.invoke(core.clj:662)
	at metabase.sync.util$fn__41548$run_step_with_metadata__41553$fn__41557$fn__41559.doInvoke(util.clj:388)
	at clojure.lang.RestFn.invoke(RestFn.java:397)
	at metabase.sync.util$with_start_and_finish_logging_STAR_.invokeStatic(util.clj:100)
	at metabase.sync.util$with_start_and_finish_logging_STAR_.invoke(util.clj:94)
	at metabase.sync.util$with_start_and_finish_debug_logging.invokeStatic(util.clj:117)
	at metabase.sync.util$with_start_and_finish_debug_logging.invoke(util.clj:114)
	at metabase.sync.util$fn__41548$run_step_with_metadata__41553$fn__41557.invoke(util.clj:383)
	at metabase.sync.util$fn__41548$run_step_with_metadata__41553.invoke(util.clj:378)
	at metabase.sync.util$fn__41764$run_sync_operation__41769$fn__41770$fn__41778.invoke(util.clj:489)
	at metabase.sync.util$fn__41764$run_sync_operation__41769$fn__41770.invoke(util.clj:487)
	at metabase.sync.util$fn__41764$run_sync_operation__41769.invoke(util.clj:481)
	at metabase.sync.sync_metadata$fn__66501$sync_db_metadata_BANG___66506$fn__66507$fn__66508.invoke(sync_metadata.clj:50)
	at metabase.sync.util$do_with_error_handling.invokeStatic(util.clj:158)
	at metabase.sync.util$do_with_error_handling.invoke(util.clj:151)
	at clojure.core$partial$fn__5859.invoke(core.clj:2634)
	at metabase.driver$fn__32609.invokeStatic(driver.clj:592)
	at metabase.driver$fn__32609.invoke(driver.clj:592)
	at clojure.lang.MultiFn.invoke(MultiFn.java:239)
	at metabase.sync.util$sync_in_context$fn__41457.invoke(util.clj:136)
	at metabase.sync.util$with_db_logging_disabled$fn__41454.invoke(util.clj:127)
	at metabase.sync.util$with_start_and_finish_logging_STAR_.invokeStatic(util.clj:100)
	at metabase.sync.util$with_start_and_finish_logging_STAR_.invoke(util.clj:94)
	at metabase.sync.util$with_start_and_finish_logging$fn__41443.invoke(util.clj:112)
	at metabase.sync.util$with_sync_events$fn__41438.invoke(util.clj:86)
	at metabase.sync.util$with_duplicate_ops_prevented$fn__41429.invoke(util.clj:65)
	at metabase.sync.util$do_sync_operation.invokeStatic(util.clj:179)
	at metabase.sync.util$do_sync_operation.invoke(util.clj:176)
	at metabase.sync.sync_metadata$fn__66501$sync_db_metadata_BANG___66506$fn__66507.invoke(sync_metadata.clj:49)
	at metabase.sync.sync_metadata$fn__66501$sync_db_metadata_BANG___66506.invoke(sync_metadata.clj:46)
	at metabase.task.sync_databases$sync_and_analyze_database_BANG_.invokeStatic(sync_databases.clj:68)
	at metabase.task.sync_databases$sync_and_analyze_database_BANG_.invoke(sync_databases.clj:59)
	at metabase.task.sync_databases.SyncAndAnalyzeDatabase.execute(sync_databases.clj:76)
	at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
	at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: io.trino.sql.parser.ParsingException: line 1:43: mismatched input '<EOF>'. Expecting: <identifier>
	at io.trino.sql.parser.ErrorHandler.syntaxError(ErrorHandler.java:109)
	at org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)
	at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:544)
	at org.antlr.v4.runtime.DefaultErrorStrategy.reportInputMismatch(DefaultErrorStrategy.java:327)
	at org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:139)
	at io.trino.sql.parser.SqlBaseParser.identifier(SqlBaseParser.java:19791)
	at io.trino.sql.parser.SqlBaseParser.statement(SqlBaseParser.java:4348)
	at io.trino.sql.parser.SqlBaseParser.statement(SqlBaseParser.java:4769)
	at io.trino.sql.parser.SqlBaseParser.singleStatement(SqlBaseParser.java:320)
	at io.trino.sql.parser.SqlParser.invokeParser(SqlParser.java:145)
	at io.trino.sql.parser.SqlParser.createStatement(SqlParser.java:85)
	at io.trino.execution.QueryPreparer.prepareQuery(QueryPreparer.java:55)
	at io.trino.dispatcher.DispatchManager.createQueryInternal(DispatchManager.java:180)
	at io.trino.dispatcher.DispatchManager.lambda$createQuery$0(DispatchManager.java:149)
	at io.trino.$gen.Trino_387____20220629_024228_2.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.antlr.v4.runtime.InputMismatchException
	at org.antlr.v4.runtime.DefaultErrorStrategy.sync(DefaultErrorStrategy.java:270)
	at io.trino.sql.parser.SqlBaseParser.identifier(SqlBaseParser.java:19553)
	... 12 more
2022-07-01 11:25:00,970 INFO sync.util :: FINISHED: passo 'sync-metabase-metadata' para starburst Banco de dados 6 'Trino' (27.2 ms)
2022-07-01 11:25:00,971 WARN sync.util :: Erro no passo de sincronização: Sync metadata for starburst Banco de dados 6 'Trino'
java.lang.ClassCastException: class java.sql.SQLException cannot be cast to class clojure.lang.Associative (java.sql.SQLException is in module java.sql of loader 'platform'; clojure.lang.Associative is in unnamed module of loader 'app')
	at clojure.lang.RT.assoc(RT.java:827)
	at clojure.core$assoc__5433.invokeStatic(core.clj:195)
	at clojure.core$assoc__5433.doInvoke(core.clj:192)
	at clojure.lang.RestFn.invoke(RestFn.java:573)
	at metabase.sync.util$fn__41548$run_step_with_metadata__41553$fn__41557.invoke(util.clj:394)
	at metabase.sync.util$fn__41548$run_step_with_metadata__41553.invoke(util.clj:378)
	at metabase.sync.util$fn__41764$run_sync_operation__41769$fn__41770$fn__41778.invoke(util.clj:489)
	at metabase.sync.util$fn__41764$run_sync_operation__41769$fn__41770.invoke(util.clj:487)
	at metabase.sync.util$fn__41764$run_sync_operation__41769.invoke(util.clj:481)
	at metabase.sync.sync_metadata$fn__66501$sync_db_metadata_BANG___66506$fn__66507$fn__66508.invoke(sync_metadata.clj:50)
	at metabase.sync.util$do_with_error_handling.invokeStatic(util.clj:158)
	at metabase.sync.util$do_with_error_handling.invoke(util.clj:151)
	at clojure.core$partial$fn__5859.invoke(core.clj:2634)
	at metabase.driver$fn__32609.invokeStatic(driver.clj:592)
	at metabase.driver$fn__32609.invoke(driver.clj:592)
	at clojure.lang.MultiFn.invoke(MultiFn.java:239)
	at metabase.sync.util$sync_in_context$fn__41457.invoke(util.clj:136)
	at metabase.sync.util$with_db_logging_disabled$fn__41454.invoke(util.clj:127)
	at metabase.sync.util$with_start_and_finish_logging_STAR_.invokeStatic(util.clj:100)
	at metabase.sync.util$with_start_and_finish_logging_STAR_.invoke(util.clj:94)
	at metabase.sync.util$with_start_and_finish_logging$fn__41443.invoke(util.clj:112)
	at metabase.sync.util$with_sync_events$fn__41438.invoke(util.clj:86)
	at metabase.sync.util$with_duplicate_ops_prevented$fn__41429.invoke(util.clj:65)
	at metabase.sync.util$do_sync_operation.invokeStatic(util.clj:179)
	at metabase.sync.util$do_sync_operation.invoke(util.clj:176)
	at metabase.sync.sync_metadata$fn__66501$sync_db_metadata_BANG___66506$fn__66507.invoke(sync_metadata.clj:49)
	at metabase.sync.sync_metadata$fn__66501$sync_db_metadata_BANG___66506.invoke(sync_metadata.clj:46)
	at metabase.task.sync_databases$sync_and_analyze_database_BANG_.invokeStatic(sync_databases.clj:68)
	at metabase.task.sync_databases$sync_and_analyze_database_BANG_.invoke(sync_databases.clj:59)
	at metabase.task.sync_databases.SyncAndAnalyzeDatabase.execute(sync_databases.clj:76)
	at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
	at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
2022-07-01 11:25:00,971 INFO sync.util :: FINISHED: Sync metadata for starburst Banco de dados 6 'Trino' (954.0 ms)
2022-07-01 11:25:00,972 INFO sync.util :: STARTING: Analyze data for starburst Banco de dados 6 'Trino'

To Reproduce
Steps to reproduce the behavior:

  1. Install Trino plugin from Starburst
  2. Add a Trino data source
  3. Notice the Syncing warning in the bottom-right corner that never ends
  4. Click in "Browse data"
  5. Select the Trino database
  6. No tables are displayed.

Expected behavior
Database sync process finishes and tables can be browsed.


Information about your Metabase Installation:

You can get this information by going to Admin -> Troubleshooting.

  • Your browser and the version: Firefox 102.0
  • Your operating system: Lubuntu 22.04
  • Your databases: MySQL, Postgres
  • Metabase version: 0.43.4
  • Metabase hosting environment: Kubernetes
  • Metabase internal database: Postgres
  • Starburst Trino plugin: 1.0.4

Severity
Only native SQL queries are working. Thus, the majority of our users can't use Metabase.

Upgrade to Metabase 1.48

Make sure the driver works with Metabase 1.48.0, and that all deprecated methods have been replaced

[[and {{ds}}]] not supported

{:sql
"-- Metabase:: userID: 153 queryType: native queryHash: 8a52ef18dd2c9c71fb33736dfb5dd43b9da3492fedf415eec5b90f8cd556f4df\nselect \n-- count(1) \n* \nfrom mysql_bi.luoge_bi_dim.ds_info_3 where 1=1 and from_iso8601_timestamp(?)",
:params ["2022-11-15T00:00:00+08:00"],
:type :invalid-query}}],
:error_type :invalid-query,
:json_query
{:type "native",
:native
{:query "select \n-- count(1) \n* \nfrom mysql_bi.luoge_bi_dim.ds_info_3 where 1=1 [[and {{ds}}]]",
:template-tags
{:ds
{:id "f7c63f44-4196-57c4-7026-a26bbc006ede",
:name "ds",
:display-name "Ds",
:type "date",
:default "2022-11-15",
:required true}}},
:database 89,
:parameters [],
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true}},
:status :failed,
:class io.trino.jdbc.$internal.client.FailureInfo$FailureException,
:stacktrace
["io.trino.sql.analyzer.SemanticExceptions.semanticException(SemanticExceptions.java:48)"
"io.trino.sql.analyzer.SemanticExceptions.semanticException(SemanticExceptions.java:43)"
"io.trino.sql.analyzer.ExpressionAnalyzer$Visitor.coerceType(ExpressionAnalyzer.java:3187)"
"io.trino.sql.analyzer.ExpressionAnalyzer$Visitor.coerceType(ExpressionAnalyzer.java:3196)"
"io.trino.sql.analyzer.ExpressionAnalyzer$Visitor.visitLogicalExpression(ExpressionAnalyzer.java:787)"
"io.trino.sql.analyzer.ExpressionAnalyzer$Visitor.visitLogicalExpression(ExpressionAnalyzer.java:581)"
"io.trino.sql.tree.LogicalExpression.accept(LogicalExpression.java:80)"
"io.trino.sql.tree.StackableAstVisitor.process(StackableAstVisitor.java:27)"
"io.trino.sql.analyzer.ExpressionAnalyzer$Visitor.process(ExpressionAnalyzer.java:604)"
"io.trino.sql.analyzer.ExpressionAnalyzer.analyze(ExpressionAnalyzer.java:485)"
"io.trino.sql.analyzer.ExpressionAnalyzer.analyzeExpression(ExpressionAnalyzer.java:3497)"
"io.trino.sql.analyzer.StatementAnalyzer$Visitor.analyzeExpression(StatementAnalyzer.java:4110)"
"io.trino.sql.analyzer.StatementAnalyzer$Visitor.analyzeWhere(StatementAnalyzer.java:3941)"
"io.trino.sql.analyzer.StatementAnalyzer$Visitor.lambda$visitQuerySpecification$38(StatementAnalyzer.java:2459)"
"java.base/java.util.Optional.ifPresent(Optional.java:178)"
"io.trino.sql.analyzer.StatementAnalyzer$Visitor.visitQuerySpecification(StatementAnalyzer.java:2459)"
"io.trino.sql.analyzer.StatementAnalyzer$Visitor.visitQuerySpecification(StatementAnalyzer.java:456)"
"io.trino.sql.tree.QuerySpecification.accept(QuerySpecification.java:155)"
"io.trino.sql.tree.AstVisitor.process(AstVisitor.java:27)"
"io.trino.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:473)"
"io.trino.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:481)"
"io.trino.sql.analyzer.StatementAnalyzer$Visitor.visitQuery(StatementAnalyzer.java:1405)"
"io.trino.sql.analyzer.StatementAnalyzer$Visitor.visitQuery(StatementAnalyzer.java:456)"
"io.trino.sql.tree.Query.accept(Query.java:107)"
"io.trino.sql.tree.AstVisitor.process(AstVisitor.java:27)"
"io.trino.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:473)"
"io.trino.sql.analyzer.StatementAnalyzer.analyze(StatementAnalyzer.java:435)"
"io.trino.sql.analyzer.Analyzer.analyze(Analyzer.java:79)"
"io.trino.sql.analyzer.Analyzer.analyze(Analyzer.java:71)"
"io.trino.execution.SqlQueryExecution.analyze(SqlQueryExecution.java:261)"
"io.trino.execution.SqlQueryExecution.(SqlQueryExecution.java:199)"
"io.trino.execution.SqlQueryExecution$SqlQueryExecutionFactory.createQueryExecution(SqlQueryExecution.java:825)"
"io.trino.dispatcher.LocalDispatchQueryFactory.lambda$createDispatchQuery$0(LocalDispatchQueryFactory.java:135)"
"io.trino.$gen.Trino_394____20221027_023829_2.call(Unknown Source)"
"com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)"
"com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:74)"
"com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)"
"java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)"
"java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)"
"java.base/java.lang.Thread.run(Thread.java:833)"
nil],
:context :ad-hoc,
:error
"Query failed (#20221116_114500_00891_fkerh): line 4:20: Logical expression term must evaluate to a boolean (actual: timestamp(3) with time zone)",
:row_count 0,
:running_time 0,
:data {:rows [], :cols []}}

Driver fails with java.lang.NullPointerException on timestamp field with nulls

Hi, thank you for providing the driver to the community

I was trying it with Metabase v0.45.3 and Trino version 407, and I'm getting a NullPointerException when querying a table with an asterisk (SELECT *):

{:type "native", :native {:query "select * from iceberg.raw.model_events\nwhere model_id = '556e8534-c549-4af8-843a-81bc9bc67802'\nand created_date >= date '2022-04-01'\nand created_date < date '2022-04-03'", :template-tags {}}, :database 33, :parameters [], :middleware {:js-int-to-string? true, :add-default-userland-constraints? true}}, :status :failed, :class java.lang.NullPointerException, :stacktrace ["clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:97)" "--> driver.implementation.execute$fn__81803$fn__81804.invoke(execute.clj:128)" "driver.sql_jdbc.execute$row_thunk$row_thunk_STAR___56080.invoke(execute.clj:456)" "query_processor.reducible$reducible_rows$reify__45528.reduce(reducible.clj:149)" "query_processor.context.default$default_reducef$fn__39686.invoke(default.clj:57)" "query_processor.context.default$default_reducef.invokeStatic(default.clj:56)" "query_processor.context.default$default_reducef.invoke(default.clj:48)" "query_processor.context$reducef.invokeStatic(context.clj:69)" "query_processor.context$reducef.invoke(context.clj:62)" "query_processor.context.default$default_runf$respond_STAR___39690.invoke(default.clj:68)" "driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:506)" "driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:485)" "driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:493)" "driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:485)" "driver.sql_jdbc$fn__88601.invokeStatic(sql_jdbc.clj:58)" "driver.sql_jdbc$fn__88601.invoke(sql_jdbc.clj:56)" "query_processor.context$executef.invokeStatic(context.clj:59)" "query_processor.context$executef.invoke(context.clj:48)" "query_processor.context.default$default_runf.invokeStatic(default.clj:67)" "query_processor.context.default$default_runf.invoke(default.clj:65)" "query_processor.context$runf.invokeStatic(context.clj:45)" "query_processor.context$runf.invoke(context.clj:39)" "query_processor.reducible$identity_qp.invokeStatic(reducible.clj:12)" "query_processor.reducible$identity_qp.invoke(reducible.clj:9)" "query_processor.middleware.cache$maybe_return_cached_results$maybe_return_cached_results_STAR___53600.invoke(cache.clj:220)" "query_processor.middleware.permissions$check_query_permissions$fn__49259.invoke(permissions.clj:109)" "query_processor.middleware.mbql_to_native$mbql__GT_native$fn__52545.invoke(mbql_to_native.clj:23)" "query_processor$fn__55076$combined_post_process__55081$combined_post_process_STAR___55082.invoke(query_processor.clj:212)" "query_processor$fn__55076$combined_pre_process__55077$combined_pre_process_STAR___55078.invoke(query_processor.clj:209)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__53521$fn__53526.invoke(resolve_database_and_driver.clj:35)" "driver$do_with_driver.invokeStatic(driver.clj:76)" "driver$do_with_driver.invoke(driver.clj:72)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__53521.invoke(resolve_database_and_driver.clj:34)" "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__49525.invoke(fetch_source_query.clj:314)" "query_processor.middleware.store$initialize_store$fn__49715$fn__49716.invoke(store.clj:11)" "query_processor.store$do_with_store.invokeStatic(store.clj:45)" "query_processor.store$do_with_store.invoke(store.clj:39)" "query_processor.middleware.store$initialize_store$fn__49715.invoke(store.clj:10)" 
"query_processor.middleware.normalize_query$normalize$fn__53793.invoke(normalize_query.clj:22)" "query_processor.middleware.constraints$add_default_userland_constraints$fn__50803.invoke(constraints.clj:53)" "query_processor.middleware.process_userland_query$process_userland_query$fn__53732.invoke(process_userland_query.clj:145)" "query_processor.middleware.catch_exceptions$catch_exceptions$fn__54104.invoke(catch_exceptions.clj:167)" "query_processor.reducible$async_qp$qp_STAR___45514$thunk__45516.invoke(reducible.clj:100)" "query_processor.reducible$async_qp$qp_STAR___45514.invoke(reducible.clj:106)" "query_processor.reducible$async_qp$qp_STAR___45514.invoke(reducible.clj:91)" "query_processor.reducible$sync_qp$qp_STAR___45525.doInvoke(reducible.clj:126)" "query_processor$process_userland_query.invokeStatic(query_processor.clj:331)" "query_processor$process_userland_query.doInvoke(query_processor.clj:327)" "query_processor$fn__55123$process_query_and_save_execution_BANG___55132$fn__55135.invoke(query_processor.clj:342)" "query_processor$fn__55123$process_query_and_save_execution_BANG___55132.invoke(query_processor.clj:335)" "query_processor$fn__55167$process_query_and_save_with_max_results_constraints_BANG___55176$fn__55179.invoke(query_processor.clj:354)" "query_processor$fn__55167$process_query_and_save_with_max_results_constraints_BANG___55176.invoke(query_processor.clj:347)" "api.dataset$run_query_async$fn__70529.invoke(dataset.clj:69)" "query_processor.streaming$streaming_response_STAR_$fn__40610$fn__40611.invoke(streaming.clj:162)" "query_processor.streaming$streaming_response_STAR_$fn__40610.invoke(streaming.clj:161)" "async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:65)" "async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:63)" "async.streaming_response$do_f_async$task__20021.invoke(streaming_response.clj:84)"], :card_id nil, :context :ad-hoc, :error nil, :row_count 0, :running_time 0, :data {:rows [], :cols []}}

It works fine if the columns are specified explicitly.

Re-enable :foreign-keys feature flag

Even though Starburst doesn't support foreign keys, re-enable the :foreign-keys feature flag so that users can set up relationships in Metabase. Use the same tricks the Presto driver (which doesn't support foreign keys either) uses to pass tests.
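A minimal sketch of what re-enabling the flag could look like in the driver namespace (illustrative only; the exact multimethod differs between Metabase versions, e.g. driver/supports? vs. driver/database-supports?, and the Presto driver's workaround may involve more than this):

    ;; Advertise :foreign-keys support even though Starburst exposes no real FK
    ;; metadata, mirroring the approach used by the Presto driver.
    (defmethod driver/database-supports? [:starburst :foreign-keys]
      [_driver _feature _database]
      true)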

Driver fires queries twice on save

To reproduce:

  1. Connect Metabase to a Postgres DB twice: first directly (without Trino in the middle), then via Trino (as a catalog)
  2. Create a SQL question (e.g. with the sample dataset): select * from "public"."people" and save it without running
  3. Now edit the SQL question (e.g. select "public"."people".* from "public"."people") and see the difference:

If Metabase is connected to Postgres directly, it opens 2 cursors and fires the query once. This is the log:

2023-02-14 12:59:01.149 UTC [35816] LOG:  execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED
2023-02-14 12:59:41.637 UTC [35816] LOG:  execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
2023-02-14 12:59:41.637 UTC [35816] LOG:  execute <unnamed>: BEGIN READ ONLY
2023-02-14 12:59:41.637 UTC [35816] LOG:  execute <unnamed>/C_11: -- Metabase:: userID: 1 queryType: native queryHash: 509edd13745b85340d4631ddc5825aedc2c622f3844690176945c47bb8b2eff8
	select people.* from people
2023-02-14 12:59:41.641 UTC [35816] LOG:  execute S_2: ROLLBACK
2023-02-14 12:59:41.641 UTC [35816] LOG:  execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED
2023-02-14 12:59:41.846 UTC [35816] LOG:  execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
2023-02-14 12:59:41.846 UTC [35816] LOG:  execute <unnamed>: BEGIN READ ONLY
2023-02-14 12:59:41.847 UTC [35816] LOG:  execute <unnamed>/C_12: -- Metabase:: userID: 1 queryType: native queryHash: d499de6f09fb9e4aab3045369dca49286961b375cf905f1c69d7ef0a5624fd72
	select people.* from people
2023-02-14 12:59:41.963 UTC [35816] LOG:  execute fetch from <unnamed>/C_12: -- Metabase:: userID: 1 queryType: native queryHash: d499de6f09fb9e4aab3045369dca49286961b375cf905f1c69d7ef0a5624fd72
	select people.* from people
2023-02-14 12:59:42.066 UTC [35816] LOG:  execute fetch from <unnamed>/C_12: -- Metabase:: userID: 1 queryType: native queryHash: d499de6f09fb9e4aab3045369dca49286961b375cf905f1c69d7ef0a5624fd72
	select people.* from people
2023-02-14 12:59:42.167 UTC [35816] LOG:  execute fetch from <unnamed>/C_12: -- Metabase:: userID: 1 queryType: native queryHash: d499de6f09fb9e4aab3045369dca49286961b375cf905f1c69d7ef0a5624fd72
	select people.* from people
2023-02-14 12:59:42.297 UTC [35816] LOG:  execute S_2: ROLLBACK
2023-02-14 12:59:42.297 UTC [35816] LOG:  execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED

But if you do the same via Trino, you'll see the following log lines:
Step 2: Trino syncs the table schema, creates the cursor AND runs the question, even though in Metabase the question was only saved, not run.

2023-02-14 13:12:34.051 UTC [37165] LOG:  execute <unnamed>: SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, c.relname AS TABLE_NAME,  CASE n.nspname ~ '^pg_' OR n.nspname = 'information_schema'  WHEN true THEN CASE  WHEN n.nspname = 'pg_catalog' OR n.nspname = 'information_schema' THEN CASE c.relkind   WHEN 'r' THEN 'SYSTEM TABLE'   WHEN 'v' THEN 'SYSTEM VIEW'   WHEN 'i' THEN 'SYSTEM INDEX'   ELSE NULL   END  WHEN n.nspname = 'pg_toast' THEN CASE c.relkind   WHEN 'r' THEN 'SYSTEM TOAST TABLE'   WHEN 'i' THEN 'SYSTEM TOAST INDEX'   ELSE NULL   END  ELSE CASE c.relkind   WHEN 'r' THEN 'TEMPORARY TABLE'   WHEN 'p' THEN 'TEMPORARY TABLE'   WHEN 'i' THEN 'TEMPORARY INDEX'   WHEN 'S' THEN 'TEMPORARY SEQUENCE'   WHEN 'v' THEN 'TEMPORARY VIEW'   ELSE NULL   END  END  WHEN false THEN CASE c.relkind  WHEN 'r' THEN 'TABLE'  WHEN 'p' THEN 'PARTITIONED TABLE'  WHEN 'i' THEN 'INDEX'  WHEN 'P' then 'PARTITIONED INDEX'  WHEN 'S' THEN 'SEQUENCE'  WHEN 'v' THEN 'VIEW'  WHEN 'c' THEN 'TYPE'  WHEN 'f' THEN 'FOREIGN TABLE'  WHEN 'm' THEN 'MATERIALIZED VIEW'  ELSE NULL  END  ELSE NULL  END  AS TABLE_TYPE, d.description AS REMARKS,  '' as TYPE_CAT, '' as TYPE_SCHEM, '' as TYPE_NAME, '' AS SELF_REFERENCING_COL_NAME, '' AS REF_GENERATION  FROM pg_catalog.pg_namespace n, pg_catalog.pg_class c  LEFT JOIN pg_catalog.pg_description d ON (c.oid = d.objoid AND d.objsubid = 0  and d.classoid = 'pg_class'::regclass)  WHERE c.relnamespace = n.oid  AND n.nspname LIKE 'public' AND c.relname LIKE 'people' AND (false  OR ( c.relkind = 'r' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema' )  OR ( c.relkind = 'p' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema' )  OR ( c.relkind = 'v' AND n.nspname <> 'pg_catalog' AND n.nspname <> 'information_schema' )  OR ( c.relkind = 'm' )  OR ( c.relkind = 'f' ) )  ORDER BY TABLE_TYPE,TABLE_SCHEM,TABLE_NAME 
2023-02-14 13:12:34.054 UTC [37165] LOG:  execute <unnamed>: SELECT * FROM (SELECT n.nspname,c.relname,a.attname,a.atttypid,a.attnotnull OR (t.typtype = 'd' AND t.typnotnull) AS attnotnull,a.atttypmod,a.attlen,t.typtypmod,row_number() OVER (PARTITION BY a.attrelid ORDER BY a.attnum) AS attnum, nullif(a.attidentity, '') as attidentity,nullif(a.attgenerated, '') as attgenerated,pg_catalog.pg_get_expr(def.adbin, def.adrelid) AS adsrc,dsc.description,t.typbasetype,t.typtype  FROM pg_catalog.pg_namespace n  JOIN pg_catalog.pg_class c ON (c.relnamespace = n.oid)  JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid)  JOIN pg_catalog.pg_type t ON (a.atttypid = t.oid)  LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND a.attnum = def.adnum)  LEFT JOIN pg_catalog.pg_description dsc ON (c.oid=dsc.objoid AND a.attnum = dsc.objsubid)  LEFT JOIN pg_catalog.pg_class dc ON (dc.oid=dsc.classoid AND dc.relname='pg_class')  LEFT JOIN pg_catalog.pg_namespace dn ON (dc.relnamespace=dn.oid AND dn.nspname='pg_catalog')  WHERE c.relkind in ('r','p','v','f','m') and a.attnum > 0 AND NOT a.attisdropped  AND n.nspname LIKE 'public' AND c.relname LIKE 'people') c WHERE true  ORDER BY nspname,c.relname,attnum 
2023-02-14 13:12:34.062 UTC [37165] LOG:  execute <unnamed>: BEGIN READ ONLY
2023-02-14 13:12:34.062 UTC [37165] LOG:  execute <unnamed>/C_1: SELECT "id", "address", "email", "password", "name", "city", "longitude", "state", "source", "birth_date", "zip", "latitude", "created_at" FROM "public"."people"
2023-02-14 13:12:34.067 UTC [37165] LOG:  execute fetch from <unnamed>/C_1: SELECT "id", "address", "email", "password", "name", "city", "longitude", "state", "source", "birth_date", "zip", "latitude", "created_at" FROM "public"."people"
2023-02-14 13:12:34.072 UTC [37165] LOG:  execute fetch from <unnamed>/C_1: SELECT "id", "address", "email", "password", "name", "city", "longitude", "state", "source", "birth_date", "zip", "latitude", "created_at" FROM "public"."people"

Step 3:
Trino syncs the table schema again, now twice, creates 2 cursors, and also fires the query twice:

2023-02-14 13:18:21.065 UTC [37412] LOG:  execute <unnamed>: SET extra_float_digits = 3
2023-02-14 13:18:21.065 UTC [37412] LOG:  execute <unnamed>: SET application_name = 'PostgreSQL JDBC Driver'
2023-02-14 13:18:21.067 UTC [37412] LOG:  execute <unnamed>: SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, c.relname AS TABLE_NAME,  CASE n.nspname ~ '^pg_' OR n.nspname = 'information_schema'  WHEN true THEN CASE  WHEN n.nspname = 'pg_catalog' OR n.nspname = 'information_schema' THEN CASE c.relkind   WHEN 'r' THEN 'SYSTEM TABLE'   WHEN 'v' THEN 'SYSTEM VIEW'   WHEN 'i' THEN 'SYSTEM INDEX'   ELSE NULL   END  WHEN n.nspname = 'pg_toast' THEN CASE c.relkind   WHEN 'r' THEN 'SYSTEM TOAST TABLE'   WHEN 'i' THEN 'SYSTEM TOAST INDEX'   ELSE NULL   END  ELSE CASE c.relkind   WHEN 'r' THEN 'TEMPORARY TABLE'   WHEN 'p' THEN 'TEMPORARY TABLE'   WHEN 'i' THEN 'TEMPORARY INDEX'   WHEN 'S' THEN 'TEMPORARY SEQUENCE'   WHEN 'v' THEN 'TEMPORARY VIEW'   ELSE NULL   END  END  WHEN false THEN CASE c.relkind  WHEN 'r' THEN 'TABLE'  WHEN 'p' THEN 'PARTITIONED TABLE'  WHEN 'i' THEN 'INDEX'  WHEN 'P' then 'PARTITIONED INDEX'  WHEN 'S' THEN 'SEQUENCE'  WHEN 'v' THEN 'VIEW'  WHEN 'c' THEN 'TYPE'  WHEN 'f' THEN 'FOREIGN TABLE'  WHEN 'm' THEN 'MATERIALIZED VIEW'  ELSE NULL  END  ELSE NULL  END  AS TABLE_TYPE, d.description AS REMARKS,  '' as TYPE_CAT, '' as TYPE_SCHEM, '' as TYPE_NAME, '' AS SELF_REFERENCING_COL_NAME, '' AS REF_GENERATION  FROM pg_catalog.pg_namespace n, pg_catalog.pg_class c  LEFT JOIN pg_catalog.pg_description d ON (c.oid = d.objoid AND d.objsubid = 0  and d.classoid = 'pg_class'::regclass)  WHERE c.relnamespace = n.oid  AND n.nspname LIKE 'public' AND c.relname LIKE 'people' AND (false  OR ( c.relkind = 'r' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema' )  OR ( c.relkind = 'p' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema' )  OR ( c.relkind = 'v' AND n.nspname <> 'pg_catalog' AND n.nspname <> 'information_schema' )  OR ( c.relkind = 'm' )  OR ( c.relkind = 'f' ) )  ORDER BY TABLE_TYPE,TABLE_SCHEM,TABLE_NAME 
2023-02-14 13:18:21.069 UTC [37412] LOG:  execute <unnamed>: SELECT * FROM (SELECT n.nspname,c.relname,a.attname,a.atttypid,a.attnotnull OR (t.typtype = 'd' AND t.typnotnull) AS attnotnull,a.atttypmod,a.attlen,t.typtypmod,row_number() OVER (PARTITION BY a.attrelid ORDER BY a.attnum) AS attnum, nullif(a.attidentity, '') as attidentity,nullif(a.attgenerated, '') as attgenerated,pg_catalog.pg_get_expr(def.adbin, def.adrelid) AS adsrc,dsc.description,t.typbasetype,t.typtype  FROM pg_catalog.pg_namespace n  JOIN pg_catalog.pg_class c ON (c.relnamespace = n.oid)  JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid)  JOIN pg_catalog.pg_type t ON (a.atttypid = t.oid)  LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND a.attnum = def.adnum)  LEFT JOIN pg_catalog.pg_description dsc ON (c.oid=dsc.objoid AND a.attnum = dsc.objsubid)  LEFT JOIN pg_catalog.pg_class dc ON (dc.oid=dsc.classoid AND dc.relname='pg_class')  LEFT JOIN pg_catalog.pg_namespace dn ON (dc.relnamespace=dn.oid AND dn.nspname='pg_catalog')  WHERE c.relkind in ('r','p','v','f','m') and a.attnum > 0 AND NOT a.attisdropped  AND n.nspname LIKE 'public' AND c.relname LIKE 'people') c WHERE true  ORDER BY nspname,c.relname,attnum 
2023-02-14 13:18:21.090 UTC [37412] LOG:  execute <unnamed>: BEGIN READ ONLY
2023-02-14 13:18:21.091 UTC [37412] LOG:  execute <unnamed>/C_1: SELECT "id", "address", "email", "password", "name", "city", "longitude", "state", "source", "birth_date", "zip", "latitude", "created_at" FROM "public"."people"
2023-02-14 13:18:21.096 UTC [37412] LOG:  execute fetch from <unnamed>/C_1: SELECT "id", "address", "email", "password", "name", "city", "longitude", "state", "source", "birth_date", "zip", "latitude", "created_at" FROM "public"."people"
2023-02-14 13:18:21.101 UTC [37412] LOG:  execute fetch from <unnamed>/C_1: SELECT "id", "address", "email", "password", "name", "city", "longitude", "state", "source", "birth_date", "zip", "latitude", "created_at" FROM "public"."people"
2023-02-14 13:18:21.382 UTC [37413] LOG:  execute <unnamed>: SET extra_float_digits = 3
2023-02-14 13:18:21.382 UTC [37413] LOG:  execute <unnamed>: SET application_name = 'PostgreSQL JDBC Driver'
2023-02-14 13:18:21.385 UTC [37413] LOG:  execute <unnamed>: SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, c.relname AS TABLE_NAME,  CASE n.nspname ~ '^pg_' OR n.nspname = 'information_schema'  WHEN true THEN CASE  WHEN n.nspname = 'pg_catalog' OR n.nspname = 'information_schema' THEN CASE c.relkind   WHEN 'r' THEN 'SYSTEM TABLE'   WHEN 'v' THEN 'SYSTEM VIEW'   WHEN 'i' THEN 'SYSTEM INDEX'   ELSE NULL   END  WHEN n.nspname = 'pg_toast' THEN CASE c.relkind   WHEN 'r' THEN 'SYSTEM TOAST TABLE'   WHEN 'i' THEN 'SYSTEM TOAST INDEX'   ELSE NULL   END  ELSE CASE c.relkind   WHEN 'r' THEN 'TEMPORARY TABLE'   WHEN 'p' THEN 'TEMPORARY TABLE'   WHEN 'i' THEN 'TEMPORARY INDEX'   WHEN 'S' THEN 'TEMPORARY SEQUENCE'   WHEN 'v' THEN 'TEMPORARY VIEW'   ELSE NULL   END  END  WHEN false THEN CASE c.relkind  WHEN 'r' THEN 'TABLE'  WHEN 'p' THEN 'PARTITIONED TABLE'  WHEN 'i' THEN 'INDEX'  WHEN 'P' then 'PARTITIONED INDEX'  WHEN 'S' THEN 'SEQUENCE'  WHEN 'v' THEN 'VIEW'  WHEN 'c' THEN 'TYPE'  WHEN 'f' THEN 'FOREIGN TABLE'  WHEN 'm' THEN 'MATERIALIZED VIEW'  ELSE NULL  END  ELSE NULL  END  AS TABLE_TYPE, d.description AS REMARKS,  '' as TYPE_CAT, '' as TYPE_SCHEM, '' as TYPE_NAME, '' AS SELF_REFERENCING_COL_NAME, '' AS REF_GENERATION  FROM pg_catalog.pg_namespace n, pg_catalog.pg_class c  LEFT JOIN pg_catalog.pg_description d ON (c.oid = d.objoid AND d.objsubid = 0  and d.classoid = 'pg_class'::regclass)  WHERE c.relnamespace = n.oid  AND n.nspname LIKE 'public' AND c.relname LIKE 'people' AND (false  OR ( c.relkind = 'r' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema' )  OR ( c.relkind = 'p' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema' )  OR ( c.relkind = 'v' AND n.nspname <> 'pg_catalog' AND n.nspname <> 'information_schema' )  OR ( c.relkind = 'm' )  OR ( c.relkind = 'f' ) )  ORDER BY TABLE_TYPE,TABLE_SCHEM,TABLE_NAME 
2023-02-14 13:18:21.387 UTC [37413] LOG:  execute <unnamed>: SELECT * FROM (SELECT n.nspname,c.relname,a.attname,a.atttypid,a.attnotnull OR (t.typtype = 'd' AND t.typnotnull) AS attnotnull,a.atttypmod,a.attlen,t.typtypmod,row_number() OVER (PARTITION BY a.attrelid ORDER BY a.attnum) AS attnum, nullif(a.attidentity, '') as attidentity,nullif(a.attgenerated, '') as attgenerated,pg_catalog.pg_get_expr(def.adbin, def.adrelid) AS adsrc,dsc.description,t.typbasetype,t.typtype  FROM pg_catalog.pg_namespace n  JOIN pg_catalog.pg_class c ON (c.relnamespace = n.oid)  JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid)  JOIN pg_catalog.pg_type t ON (a.atttypid = t.oid)  LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid=def.adrelid AND a.attnum = def.adnum)  LEFT JOIN pg_catalog.pg_description dsc ON (c.oid=dsc.objoid AND a.attnum = dsc.objsubid)  LEFT JOIN pg_catalog.pg_class dc ON (dc.oid=dsc.classoid AND dc.relname='pg_class')  LEFT JOIN pg_catalog.pg_namespace dn ON (dc.relnamespace=dn.oid AND dn.nspname='pg_catalog')  WHERE c.relkind in ('r','p','v','f','m') and a.attnum > 0 AND NOT a.attisdropped  AND n.nspname LIKE 'public' AND c.relname LIKE 'people') c WHERE true  ORDER BY nspname,c.relname,attnum 
2023-02-14 13:18:21.394 UTC [37413] LOG:  execute <unnamed>: BEGIN READ ONLY
2023-02-14 13:18:21.394 UTC [37413] LOG:  execute <unnamed>/C_1: SELECT "id", "address", "email", "password", "name", "city", "longitude", "state", "source", "birth_date", "zip", "latitude", "created_at" FROM "public"."people"
2023-02-14 13:18:21.399 UTC [37413] LOG:  execute fetch from <unnamed>/C_1: SELECT "id", "address", "email", "password", "name", "city", "longitude", "state", "source", "birth_date", "zip", "latitude", "created_at" FROM "public"."people"
2023-02-14 13:18:21.404 UTC [37413] LOG:  execute fetch from <unnamed>/C_1: SELECT "id", "address", "email", "password", "name", "city", "longitude", "state", "source", "birth_date", "zip", "latitude", "created_at" FROM "public"."people"

Problem when adding a new DB

When I add a new DB to Metabase (v0.45.2.1), I get the error "No method in multimethod 'connection-details->spec' for dispatch value: :starburst".

metabase-driver v3.0.1

Bundling the driver in Metabase OSS fails

There's an error when you bundle the latest driver in the OSS version of the product:

ERROR driver.impl :: Error loading driver namespace clojure.lang.ExceptionInfo: Could not locate metabase/driver/starburst__init.class, metabase/driver/starburst.clj or metabase/driver/starburst.cljc on classpath. {:classloader #object[clojure.lang.DynamicClassLoader 0x498f5728 "clojure.lang.DynamicClassLoader@498f5728"], :classpath-urls (), :system-classpath ("/app/metabase.jar")}

If you bundle the driver in the EE version of the product, everything works.

Starburst json type is erroneously mapped to :type/Text

In the following code
https://github.com/starburstdata/metabase-driver/blob/f4fe016552eed2567addb5c5f9af6edaa5322a59/drivers/starburst/src/metabase/driver/implementation/sync.clj#L45C40-L45C50
the Starburst json type is being mapped to :type/Text. This makes Metabase erroneously treat some columns as varchar and send queries like
... SUBSTRING("some_json_column", 1, 1234) "substring2494"...
which throws an exception like:
io.trino.spi.TrinoException: line 1:3439: Unexpected parameters (json, integer, integer) for function substring. Expected: substring(char(x), bigint), substring(char(x), bigint, bigint), substring(varchar(x), bigint), substring(varchar(x), bigint, bigint)

If one looks at the Postgres driver, lines 605 and 606, one can see https://github.com/metabase/metabase/blob/6fe41dd4379f4476b3eb78215b600393dd34474d/src/metabase/driver/postgres.clj#L605
and https://github.com/metabase/metabase/blob/6fe41dd4379f4476b3eb78215b600393dd34474d/src/metabase/driver/postgres.clj#L606

:json :type/JSON
:jsonb :type/JSON

Could you please map json fields to :type/JSON?
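A minimal sketch of the requested change in the driver's type mapping (illustrative; the actual mapping in implementation/sync.clj is structured differently and covers many more types):

    ;; Map Trino's json type to :type/JSON instead of :type/Text,
    ;; mirroring the Postgres driver's :json / :jsonb mappings quoted above.
    {:varchar :type/Text
     :json    :type/JSON}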

Driver scans table schema on every query run

Splitting #71 into 2 issues for convenience.

As the logs show, the driver is syncing the table schema on every query run. This doesn't happen when connecting to the DB directly, without Trino.

Add support for Trino's EXECUTE IMMEDIATE

Starting with version 418, Trino introduced a new EXECUTE IMMEDIATE statement, which allows a prepared statement to be executed in a single Trino call.

Allow that capability to be used in Metabase when defining a database.
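For reference, the statement looks like this in Trino (example adapted from the Trino documentation; the catalog and table names are illustrative):

    EXECUTE IMMEDIATE 'SELECT name FROM tpch.tiny.nation WHERE regionkey = ?'
    USING 1;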

Problem with Ubuntu

When I create a Docker Compose file like this:

version: '3'
services:
  metabase:
    image: metabase/metabase:latest
    ports:
      - "3000:3000"
    volumes:
      - data:/metabase
      - ./metabase/plugins:/plugins
volumes:
  data:

It works on Windows, but I cannot see Starburst in the database list when I run it on Ubuntu.
How can I solve this problem?

Cannot query if a parameter is referenced in a commented-out part of the query

When running the following query:

--     SELECT *    
--     FROM    
--         table_a
--     WHERE
--        date = {{period}}

SELECT *    
FROM    
table_b
WHERE
date = {{period}}

I got an error saying:

line 1:1: Incorrect number of parameters: expected 1 but found 3

However, when we run it using the Presto driver, the query runs successfully.

If we delete the line:

--        date = {{period}}

from the query, it will run successfully with the Starburst driver. Another case is when adding parameters in the commented-out section of the query, such as:

--     SELECT *    
--     FROM    
--         table_a
--     WHERE
--        date = {{period}}
--        and id = {{ID}}

SELECT *    
FROM    
table_b
WHERE
date = {{period}}

it returns the error:
line 1:1: Incorrect number of parameters: expected 1 but found 5

It seems that the driver counts each parameter in the comment section as 2 parameters.

We use Starburst driver version 2.0.1 and Metabase version v0.45.2.1. Is this expected, or is there a release that fixes this issue?
Thanks in advance!

Cannot connect to PrestoDB

I'm using "starburst-4.1.0.metabase-driver.jar" and want to connect to PrestoDB (0.283), but the connection is not successful. The following error is displayed in the UI:

"Error executing query: Error starting query at http://xx.xx.xx.xx:9188/v1/statement returned an invalid response: JsonResponse{statusCode=400, headers={content-length=[16], content-type=[text/plain], date=[Tue, 26 Mar 2024 04:41:14 GMT]}, hasValue=false} [Error: User must be set]"

For the error in Metabase (v0.48.2), please refer to the attached file.

From the release documents (https://github.com/starburstdata/metabase-driver/releases), I only see builds for Enterprise. Could you please give more information about Metabase open source, and about my bug?

Thanks,
Phuong
metabase.log

Connect to Trino via certificates

Right now you can't connect to Trino with certificates. A customer is asking to connect to their Trino cluster with certificates that need to be saved in the local store.

Add a feature to see all Trino catalogs in the Metabase database connection

When adding a Trino cluster connection to Metabase with the Starburst driver, you have to choose a single catalog. This limits Trino's functionality: dashboards and queries are restricted to that one catalog, and you can't use data from another catalog (joined in the query with the default catalog) to filter and choose variables.

Error compiling :field clause: metabase/util/honeysql_extensions/TypedHoneySQL - Starburst driver

Describe the bug

  • When using the Starburst driver, a simple group by with a sum aggregate produces an error:
    Error compiling :field clause: metabase/util/honeysql_extensions/TypedHoneySQL


To Reproduce

  1. Clone this repo

  2. Edit docker-compose.yaml to replace image: paoliniluis/qa-databases:mongo-sample-5.0 with metabase/qa-databases:mongo-sample-5

  3. docker compose up -d

  4. Login to the Metabase instance (username: [email protected], password: metabot1)

  5. Create a new question from the trino-server-postgres database on the Reviews table as Sum of Rating grouped by Created at: Month

  6. See long-form error message in Admin > Troubleshooting > Logs

Expected behavior

  • Expect query to aggregate and group without errors

Logs

[035cc64c-dc9a-47f3-9ca4-f60d44bf96d2] 2023-05-10T13:17:34-04:00 WARN metabase.driver.sql.query-processor.deprecated Warning: Driver :starburst is using Honey SQL 1. This method was deprecated in 0.46.0 and will be removed in a future release.
[035cc64c-dc9a-47f3-9ca4-f60d44bf96d2] 2023-05-10T13:17:34-04:00 ERROR metabase.server.middleware.log POST /api/dataset/native 500 39.8 ms (17 DB calls) 
{:via
 [{:type clojure.lang.ExceptionInfo,
   :message "Error compiling :field clause: metabase/util/honeysql_extensions/TypedHoneySQL",
   :data
   {:clause
    [:field
     307
     {:temporal-unit :month,
      :metabase.query-processor.util.add-alias-info/source-table 29,
      :metabase.query-processor.util.add-alias-info/source-alias "created_at",
      :metabase.query-processor.util.add-alias-info/desired-alias "created_at",
      :metabase.query-processor.util.add-alias-info/position 0}]},
   :at [metabase.driver.sql.query_processor$fn__62821 invokeStatic "query_processor.clj" 659]}
  {:type java.lang.NoClassDefFoundError,
   :message "metabase/util/honeysql_extensions/TypedHoneySQL",
   :at [metabase.driver.implementation.query_processor$in_report_zone invokeStatic "query_processor.clj" 97]}],
 :trace
 [[metabase.driver.implementation.query_processor$in_report_zone invokeStatic "query_processor.clj" 97]
  [metabase.driver.implementation.query_processor$in_report_zone invoke "query_processor.clj" 91]
  [metabase.driver.implementation.query_processor$fn__81969 invokeStatic "query_processor.clj" 161]
  [metabase.driver.implementation.query_processor$fn__81969 invoke "query_processor.clj" 159]
  [clojure.lang.MultiFn invoke "MultiFn.java" 239]
  [metabase.driver.sql.query_processor$apply_temporal_bucketing invokeStatic "query_processor.clj" 582]
  [metabase.driver.sql.query_processor$apply_temporal_bucketing invoke "query_processor.clj" 578]
  [metabase.driver.sql.query_processor$fn__62821 invokeStatic "query_processor.clj" 653]
  [metabase.driver.sql.query_processor$fn__62821 invoke "query_processor.clj" 627]
  [clojure.lang.MultiFn invoke "MultiFn.java" 234]
  [metabase.driver.sql.query_processor$as invokeStatic "query_processor.clj" 984]
  [metabase.driver.sql.query_processor$as doInvoke "query_processor.clj" 953]
  [clojure.lang.RestFn invoke "RestFn.java" 425]
  [metabase.driver.sql.query_processor$fn__63254$fn__63256 invoke "query_processor.clj" 1046]
  [clojure.core$mapv$fn__8535 invoke "core.clj" 6979]
  [clojure.lang.ArrayChunk reduce "ArrayChunk.java" 58]
  [clojure.core.protocols$fn__8244 invokeStatic "protocols.clj" 136]
  [clojure.core.protocols$fn__8244 invoke "protocols.clj" 124]
  [clojure.core.protocols$fn__8204$G__8199__8213 invoke "protocols.clj" 19]
  [clojure.core.protocols$seq_reduce invokeStatic "protocols.clj" 31]
  [clojure.core.protocols$fn__8236 invokeStatic "protocols.clj" 75]
  [clojure.core.protocols$fn__8236 invoke "protocols.clj" 75]
  [clojure.core.protocols$fn__8178$G__8173__8191 invoke "protocols.clj" 13]
  [clojure.core$reduce invokeStatic "core.clj" 6886]
  [clojure.core$mapv invokeStatic "core.clj" 6970]
  [clojure.core$mapv invoke "core.clj" 6970]
  [metabase.driver.sql.query_processor$fn__63254 invokeStatic "query_processor.clj" 1045]
  [metabase.driver.sql.query_processor$fn__63254 invoke "query_processor.clj" 1040]
  [clojure.lang.MultiFn invoke "MultiFn.java" 244]
  [metabase.driver.sql.query_processor$apply_top_level_clauses$fn__63538 invoke "query_processor.clj" 1372]
  [clojure.lang.ArraySeq reduce "ArraySeq.java" 119]
  [clojure.core$reduce invokeStatic "core.clj" 6885]
  [clojure.core$reduce invoke "core.clj" 6868]
  [metabase.driver.sql.query_processor$apply_top_level_clauses invokeStatic "query_processor.clj" 1370]
  [metabase.driver.sql.query_processor$apply_top_level_clauses invoke "query_processor.clj" 1366]
  [metabase.driver.sql.query_processor$apply_clauses invokeStatic "query_processor.clj" 1410]
  [metabase.driver.sql.query_processor$apply_clauses invoke "query_processor.clj" 1400]
  [metabase.driver.sql.query_processor$mbql__GT_honeysql invokeStatic "query_processor.clj" 1433]
  [metabase.driver.sql.query_processor$mbql__GT_honeysql invoke "query_processor.clj" 1424]
  [metabase.driver.sql.query_processor$mbql__GT_native invokeStatic "query_processor.clj" 1442]
  [metabase.driver.sql.query_processor$mbql__GT_native invoke "query_processor.clj" 1438]
  [metabase.driver.sql$fn__103884 invokeStatic "sql.clj" 42]
  [metabase.driver.sql$fn__103884 invoke "sql.clj" 40]
  [clojure.lang.MultiFn invoke "MultiFn.java" 234]
  [metabase.query_processor.middleware.mbql_to_native$query__GT_native_form invokeStatic "mbql_to_native.clj" 14]
  [metabase.query_processor.middleware.mbql_to_native$query__GT_native_form invoke "mbql_to_native.clj" 9]
  [metabase.query_processor$compile$fn__70719 invoke "query_processor.clj" 316]
  [metabase.query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__69083$fn__69088
   invoke
   "resolve_database_and_driver.clj"
   36]
  [metabase.driver$do_with_driver invokeStatic "driver.clj" 90]
  [metabase.driver$do_with_driver invoke "driver.clj" 86]
  [metabase.query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__69083
   invoke
   "resolve_database_and_driver.clj"
   35]
  [metabase.query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__64953
   invoke
   "fetch_source_query.clj"
   310]
  [metabase.query_processor.middleware.store$initialize_store$fn__65131$fn__65132 invoke "store.clj" 12]
  [metabase.query_processor.store$do_with_store invokeStatic "store.clj" 47]
  [metabase.query_processor.store$do_with_store invoke "store.clj" 41]
  [metabase.query_processor.middleware.store$initialize_store$fn__65131 invoke "store.clj" 11]
  [metabase.query_processor.middleware.normalize_query$normalize$fn__69372 invoke "normalize_query.clj" 25]
  [metabase_enterprise.audit_app.query_processor.middleware.handle_audit_queries$handle_internal_queries$fn__75977
   invoke
   "handle_audit_queries.clj"
   131]
  [metabase.query_processor.middleware.prevent_infinite_recursive_preprocesses$prevent_infinite_recursive_preprocesses$fn__66703
   invoke
   "prevent_infinite_recursive_preprocesses.clj"
   23]
  [metabase.query_processor$compile invokeStatic "query_processor.clj" 317]
  [metabase.query_processor$compile invoke "query_processor.clj" 308]
  [metabase.query_processor$compile_and_splice_parameters invokeStatic "query_processor.clj" 329]
  [metabase.query_processor$compile_and_splice_parameters invoke "query_processor.clj" 319]
  [metabase.api.dataset$fn__87382$fn__87384 invoke "dataset.clj" 163]
  [metabase.api.dataset$fn__87382 invokeStatic "dataset.clj" 161]
  [metabase.api.dataset$fn__87382 invoke "dataset.clj" 158]
  [compojure.core$wrap_response$fn__42375 invoke "core.clj" 160]
  [compojure.core$wrap_route_middleware$fn__42359 invoke "core.clj" 132]
  [compojure.core$wrap_route_info$fn__42364 invoke "core.clj" 139]
  [compojure.core$wrap_route_matches$fn__42368 invoke "core.clj" 151]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$wrap_route_matches$fn__42368 invoke "core.clj" 152]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$wrap_route_matches$fn__42368 invoke "core.clj" 152]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$wrap_route_matches$fn__42368 invoke "core.clj" 152]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$wrap_route_matches$fn__42368 invoke "core.clj" 152]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387 invoke "core.clj" 200]
  [metabase.server.middleware.auth$enforce_authentication$fn__87427 invoke "auth.clj" 17]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387 invoke "core.clj" 200]
  [compojure.core$make_context$handler__42415 invoke "core.clj" 290]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 300]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 199]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 199]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387 invoke "core.clj" 200]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 199]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387 invoke "core.clj" 200]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387 invoke "core.clj" 200]
  [metabase.api.routes$fn__99241$fn__99242 invoke "routes.clj" 62]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387 invoke "core.clj" 200]
  [clojure.lang.AFn applyToHelper "AFn.java" 160]
  [clojure.lang.AFn applyTo "AFn.java" 144]
  [clojure.core$apply invokeStatic "core.clj" 667]
  [clojure.core$apply invoke "core.clj" 662]
  [metabase.server.routes$fn__99402$fn__99403 doInvoke "routes.clj" 70]
  [clojure.lang.RestFn invoke "RestFn.java" 436]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387 invoke "core.clj" 200]
  [compojure.core$make_context$handler__42415 invoke "core.clj" 290]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 300]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$wrap_route_matches$fn__42368 invoke "core.clj" 153]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$wrap_route_matches$fn__42368 invoke "core.clj" 153]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$wrap_route_matches$fn__42368 invoke "core.clj" 153]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 199]
  [compojure.core$routes$fn__42387$f__42388$respond_SINGLEQUOTE___42389 invoke "core.clj" 197]
  [compojure.core$make_context$fn__42419 invoke "core.clj" 301]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387 invoke "core.clj" 200]
  [compojure.core$routes$fn__42387$f__42388 invoke "core.clj" 198]
  [compojure.core$routes$fn__42387 invoke "core.clj" 200]
  [metabase.server.middleware.exceptions$catch_uncaught_exceptions$fn__95938 invoke "exceptions.clj" 103]
  [metabase.server.middleware.exceptions$catch_api_exceptions$fn__95935 invoke "exceptions.clj" 91]
  [metabase.server.middleware.log$log_api_call$fn__102699$fn__102700$fn__102701 invoke "log.clj" 216]
  [metabase.driver.sql_jdbc.execute.diagnostic$do_with_diagnostic_info invokeStatic "diagnostic.clj" 18]
  [metabase.driver.sql_jdbc.execute.diagnostic$do_with_diagnostic_info invoke "diagnostic.clj" 12]
  [metabase.server.middleware.log$log_api_call$fn__102699$fn__102700 invoke "log.clj" 208]
  [toucan2.execute$do_with_call_counts invokeStatic "execute.clj" 112]
  [toucan2.execute$do_with_call_counts invoke "execute.clj" 103]
  [metabase.server.middleware.log$log_api_call$fn__102699 invoke "log.clj" 207]
  [metabase.server.middleware.browser_cookie$ensure_browser_id_cookie$fn__109422 invoke "browser_cookie.clj" 42]
  [metabase.server.middleware.security$add_security_headers$fn__80256 invoke "security.clj" 153]
  [metabase.server.middleware.json$wrap_json_body$fn__106674 invoke "json.clj" 67]
  [metabase.server.middleware.offset_paging$handle_paging$fn__80280 invoke "offset_paging.clj" 45]
  [metabase.server.middleware.json$wrap_streamed_json_response$fn__106692 invoke "json.clj" 103]
  [ring.middleware.keyword_params$wrap_keyword_params$fn__109689 invoke "keyword_params.clj" 55]
  [ring.middleware.params$wrap_params$fn__109708 invoke "params.clj" 77]
  [metabase.server.middleware.misc$maybe_set_site_url$fn__50479 invoke "misc.clj" 61]
  [metabase.server.middleware.session$reset_session_timeout$fn__63770 invoke "session.clj" 430]
  [metabase.server.middleware.session$bind_current_user$fn__63740$fn__63741 invoke "session.clj" 336]
  [metabase.server.middleware.session$do_with_current_user invokeStatic "session.clj" 315]
  [metabase.server.middleware.session$do_with_current_user invoke "session.clj" 304]
  [metabase.server.middleware.session$bind_current_user$fn__63740 invoke "session.clj" 335]
  [metabase.server.middleware.session$wrap_current_user_info$fn__63722 invoke "session.clj" 285]
  [metabase.server.middleware.session$wrap_session_id$fn__63705 invoke "session.clj" 217]
  [metabase.server.middleware.auth$wrap_api_key$fn__87435 invoke "auth.clj" 30]
  [ring.middleware.cookies$wrap_cookies$fn__109609 invoke "cookies.clj" 216]
  [metabase.server.middleware.misc$add_content_type$fn__50461 invoke "misc.clj" 29]
  [metabase.server.middleware.misc$disable_streaming_buffering$fn__50487 invoke "misc.clj" 78]
  [ring.middleware.gzip$wrap_gzip$fn__109651 invoke "gzip.clj" 86]
  [metabase.server.middleware.misc$bind_request$fn__50490 invoke "misc.clj" 95]
  [metabase.server.middleware.ssl$redirect_to_https_middleware$fn__109438 invoke "ssl.clj" 41]
  [metabase.server$async_proxy_handler$fn__101465 invoke "server.clj" 77]
  [metabase.server.proxy$org.eclipse.jetty.server.handler.AbstractHandler$ff19274a handle nil -1]
  [org.eclipse.jetty.server.handler.StatisticsHandler handle "StatisticsHandler.java" 173]
  [org.eclipse.jetty.server.handler.HandlerWrapper handle "HandlerWrapper.java" 122]
  [org.eclipse.jetty.server.Server handle "Server.java" 563]
  [org.eclipse.jetty.server.HttpChannel lambda$handle$0 "HttpChannel.java" 505]
  [org.eclipse.jetty.server.HttpChannel dispatch "HttpChannel.java" 762]
  [org.eclipse.jetty.server.HttpChannel handle "HttpChannel.java" 497]
  [org.eclipse.jetty.server.HttpConnection onFillable "HttpConnection.java" 282]
  [org.eclipse.jetty.io.AbstractConnection$ReadCallback succeeded "AbstractConnection.java" 314]
  [org.eclipse.jetty.io.FillInterest fillable "FillInterest.java" 100]
  [org.eclipse.jetty.io.SelectableChannelEndPoint$1 run "SelectableChannelEndPoint.java" 53]
  [org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy runTask "AdaptiveExecutionStrategy.java" 416]
  [org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy consumeTask "AdaptiveExecutionStrategy.java" 385]
  [org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy tryProduce "AdaptiveExecutionStrategy.java" 272]
  [org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy lambda$new$0 "AdaptiveExecutionStrategy.java" 140]
  [org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread run "ReservedThreadExecutor.java" 411]
  [org.eclipse.jetty.util.thread.QueuedThreadPool runJob "QueuedThreadPool.java" 936]
  [org.eclipse.jetty.util.thread.QueuedThreadPool$Runner run "QueuedThreadPool.java" 1080]
  [java.lang.Thread run nil -1]],
 :cause "metabase/util/honeysql_extensions/TypedHoneySQL",
 :message "Error compiling :field clause: metabase/util/honeysql_extensions/TypedHoneySQL",
 :clause
 [:field
  307
  {:temporal-unit :month,
   :metabase.query-processor.util.add-alias-info/source-table 29,
   :metabase.query-processor.util.add-alias-info/source-alias "created_at",
   :metabase.query-processor.util.add-alias-info/desired-alias "created_at",
   :metabase.query-processor.util.add-alias-info/position 0}]}

Information about your Metabase installation

  • Metabase version: v1.46.2
  • Database: Trino Server with Postgres (Starburst driver 2.0.1)

Additional context

Nested fields support

I am using driver version 1.0.3 together with Trino 385, Metabase 0.43, and Delta Lake. In Delta we have a table with a fairly nested schema; for example, there's a column "device" with an "os_type" field inside. In Metabase, these columns end up with non-nested types (see the sketch at the end of this report).

The highlighted columns in the screenshots are both complex structures, not booleans or ints.

[screenshots omitted]

Is this something to be addressed on the driver side? From the Metabase issues, such as metabase/metabase#3173, it appears that Metabase does not have full support for nested schemas, but it has at least some support for Mongo, so the first step would be driver support.
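
For reference, a minimal sketch (not driver code; the catalog, schema, and table names are illustrative assumptions) showing that the nested field described above can already be addressed directly in Trino SQL with dot notation over plain JDBC:

(import '(java.sql DriverManager)
        '(java.util Properties))

(def props (doto (Properties.) (.setProperty "user" "metabase")))

;; Selects the nested "device"."os_type" field with dot notation.
(with-open [conn (DriverManager/getConnection "jdbc:trino://trino.example.com:8080/delta" props)
            stmt (.createStatement conn)
            rs   (.executeQuery stmt "SELECT device.os_type FROM default.events LIMIT 10")]
  (loop [acc []]
    (if (.next rs)
      (recur (conj acc (.getString rs 1)))
      acc)))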

Starburst connector impersonation based on a field other than the email field

Hello 🙂

We opened an issue on the Metabase GitHub, but perhaps this is a better place to find some help!

Here is our problem:

Is your feature request related to a problem? Please describe.
We are benchmarking Metabase as our next BI tool and running tests with Metabase and the Starburst Trino connector. As our authentication system we use LDAP. Currently, when we impersonate queries, Metabase uses by default the user email field filled by LDAP when a new user signs on.

We took a look inside the Metabase code and found the function that confirms the email user field is used by default (here):

;; Current driver behavior: when impersonation is enabled on the connection,
;; the Trino session user is set to the current Metabase user's email.
(defn impersonate-user
  [conn]
  (if (clojure.string/includes? (.getProperty (.getClientInfo conn) "ClientInfo" "") "impersonate:true")
    (let [email (get (deref api/*current-user*) :email)]
      (.setSessionUser (.unwrap conn TrinoConnection) email))
    nil))

Describe the solution you'd like
We think it would be really nice to be able to choose which user field to use for impersonation (first name, last name, or email).
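
A minimal sketch of the requested behavior, assuming a hypothetical :impersonation-attribute setting and a hypothetical driver-setting lookup helper; this is not the driver's code, only an illustration of the idea applied to the snippet above:

(require '[clojure.string :as str])

(defn impersonate-user
  [conn]
  (when (str/includes? (.getProperty (.getClientInfo conn) "ClientInfo" "") "impersonate:true")
    ;; `driver-setting` and :impersonation-attribute are assumed, illustrative names.
    (let [attr  (or (driver-setting :impersonation-attribute) :email)
          value (get (deref api/*current-user*) attr)]
      (.setSessionUser (.unwrap conn TrinoConnection) value))))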

Describe alternatives you've considered
We don't really have an alternative, as our Trino authentication is based on a field other than email. For the moment we are not using impersonation, which is a problem for us because we can't manage data access properly using Trino user rules.

How important is this feature to you?
To be honest, this feature is really important for us: without it, we can't go further with Metabase in production, even though we think it's a great tool for our needs.

Additional context
Metabase version: v0.49.3
Starburst driver version: 4.1.0
Trino version: 439

Thanks for your help 🙂

It might be possible to propagate UserId in the EXECUTE

Driver version: 1.0.6

I have a dashboard with a filter using this driver. Metabase sends two commands to Trino:

1. PREPARE statement1 FROM -- Metabase:: userID: ....
   [SQL]
2. EXECUTE statement1 USING [ARGS]

In the first command (PREPARE) there is a leading comment that identifies the user (userID).

Example: -- Metabase:: userID: 73 queryType: native queryHash: c21cbb35367f5676d7832f7565296b9e91e2dbe0bf16e9bbde7893843d66f383

It allows us to debug, audit, and compute per-user stats.

In the EXECUTE command, the userID/queryHash is lost.

Would it be possible to propagate that information in the EXECUTE call too?
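
A minimal sketch of the idea, not the driver's implementation: prefix the EXECUTE statement with the same Metabase comment sent with PREPARE so the userID/queryHash survive; the statement name and arguments below are illustrative.

(require '[clojure.string :as str])

(defn execute-with-comment
  "Builds an EXECUTE statement carrying the same leading Metabase comment as the PREPARE."
  [metabase-comment statement-name args]
  (str metabase-comment "\n"
       "EXECUTE " statement-name
       (when (seq args)
         (str " USING " (str/join ", " args)))))

;; (execute-with-comment "-- Metabase:: userID: 73 queryType: native queryHash: c21c..."
;;                       "statement1"
;;                       ["DATE '2023-05-01'" "42"])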

Another alternative could be a setting to disable prepared statements.

Add ability to sync multiple catalogs in Database

Overview

As a user, I want to be able to sync multiple catalogs when creating a DB with the Starburst driver. I want to be able to specify a list of catalogs and view all schemas/tables within those catalogs from a single Metabase DB (see the enumeration sketch after the open questions below).

Success Criteria

  • Ability to specify multiple catalogs during DB creation
  • Ability to sync all of those catalogs' schemas
  • Ability to view, join, query, and ask questions across all tables in different catalogs in Metabase

Open questions

  • How would we handle the schema field if a user passes in a set of catalogs?
    • Likely we would make these mutually exclusive options
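
A minimal sketch, outside the driver, of how the requested multi-catalog enumeration could look over plain JDBC; the host and catalog names are illustrative, and this is not the driver's sync code:

(import '(java.sql DriverManager)
        '(java.util Properties))

(def props (doto (Properties.) (.setProperty "user" "metabase")))

(defn- first-column
  "Reads the first column of every row in a ResultSet into a vector."
  [rs]
  (loop [acc []]
    (if (.next rs)
      (recur (conj acc (.getString rs 1)))
      acc)))

(defn schemas-for-catalogs
  "Returns a map of catalog name -> vector of schema names."
  [url catalogs]
  (with-open [conn (DriverManager/getConnection url props)]
    (into {}
          (for [catalog catalogs]
            [catalog (with-open [stmt (.createStatement conn)
                                 rs   (.executeQuery stmt (str "SHOW SCHEMAS FROM " catalog))]
                       (first-column rs))]))))

;; (schemas-for-catalogs "jdbc:trino://trino.example.com:8080" ["hive" "postgresql"])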

Allow users to override the source value

Currently the default value of the source property is set to Starburst Metabase 1.0.6: https://github.com/starburstdata/metabase-driver/blob/main/drivers/starburst/src/metabase/driver/implementation/connectivity.clj#L106

When I tried to change this value to something else using the advanced options in the Metabase UI, I got this error:

Caused by: java.sql.SQLException: Connection property 'source' is both in the URL and an argument
	at io.trino.jdbc.TrinoDriverUri.mergeConnectionProperties(TrinoDriverUri.java:464)
	at io.trino.jdbc.TrinoDriverUri.<init>(TrinoDriverUri.java:130)
	at io.trino.jdbc.TrinoDriverUri.<init>(TrinoDriverUri.java:122)
	at io.trino.jdbc.TrinoDriverUri.create(TrinoDriverUri.java:143)
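
A minimal sketch reproducing the conflict the exception describes, assuming the Trino JDBC driver is on the classpath (host and values are illustrative): the driver rejects a connection when the source property appears both in the URL and in the Properties argument.

(import '(java.sql DriverManager)
        '(java.util Properties))

(def props
  (doto (Properties.)
    (.setProperty "user" "metabase")
    (.setProperty "source" "my-custom-source")))   ; also present in the URL below

;; Throws java.sql.SQLException: Connection property 'source' is both in the URL and an argument
(DriverManager/getConnection "jdbc:trino://trino.example.com:8080/hive?source=metabase-default" props)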

It would be very useful to allow users to change this value while keeping the default of Starburst Metabase 1.0.6.

