pacman82 / odbc-api
ODBC (Open Database Connectivity) bindings for Rust.
License: MIT License
It might be nice to be able to use a SQL file for more complex queries, instead of trying to type or paste into the command line.
Thoughts?
I want to pass a variable-length input as a parameter, but I can't, since InputParameter isn't implemented for VarChar<Vec<u8>>. Also, VarChar512::copy_from_bytes() fails to compile because [u8; 512] doesn't implement std::default::Default.
The root cause is the large column sizes reported by the driver.
The functions currently defined in the integration tests could be lifted into the main crate, so that the user can create a ColumnDefinition (it contains a Vec<SqlChar> that needs some care when serializing).
#[cfg(feature = "narrow")]
pub fn utf8_to_vec_char(text: &str) -> Vec<u8> {
text.to_owned().into_bytes()
}
#[cfg(not(feature = "narrow"))]
pub fn utf8_to_vec_char(text: &str) -> Vec<u16> {
// U16String comes from the widestring crate: use widestring::U16String;
U16String::from_str(text).into_vec()
}
Hi. First of all, thank you for your great work on both odbc-safe and odbc-api. After having used odbc-rs for a while (which has not seen an update in some time), seeing this new approach is quite refreshing. I especially like the fact that bind_parameters is now not as lifetime-bound as in odbc-rs.
One thing that's preventing us from starting to experiment with odbc-api is the lack of support for BLOB types (Vec<u8>). I've read in the README that an expected feature is support for streaming binary data, which is going to be very useful with some databases that require it. However, will the support for BLOBs arrive with that feature, or is an implementation of InputParameter for Vec<u8> to be expected soon?
Thanks a lot!
It seems that it requires the driver to provide incorrect information on them, but I am not sufficiently familiar with the code base (yet ^^)
Currently, I have only found the prepare function:
odbc-api/odbc-api/src/guide.rs
Lines 481 to 505 in 4770340
With prepare, binding the parameters is required as soon as the statement is executed. Is step-by-step execution supported now, where the statement is prepared first and the parameters are obtained later, not immediately? For example (ref link):
> PREPARE fooplan (int, text, bool, numeric) AS INSERT INTO foo VALUES($1, $2, $3, $4);
> EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);
Calling stored procedures does not modify output parameters or the return value, and there is no fn to get them.
https://docs.microsoft.com/en-us/sql/relational-databases/native-client-odbc-how-to/running-stored-procedures-process-return-codes-and-output-parameters?view=sql-server-ver15
In the context of this crate, a potential source of unsoundness is reading from uninitialized values. However, most structs in this crate do initialize the data before presenting it to the driver via CData's pointers. Therefore, there is no need for these interfaces to be marked as unsafe. ODBC may not write any data to the pointer, but because it cannot re-allocate the buffers, the regions are always initialized.
Examples of APIs that do not need unsafe:
ColumnWithIndicator::iter
TextColumn::indicator_at and TextColumn::value_at
The situation where these APIs would need to be unsafe is if the buffers were initialized with Vec::with_capacity, filled by ODBC, and then a call to Vec::set_len were made in order to avoid the initialization during allocation. Even in that case, I do not think they would be unsafe, because we would still need to know whether ODBC wrote something to them in order to call set_len correctly (or we would be exposed to a faulty ODBC implementation).
I just found this function, but the current odbc-api does not expose the transaction status. How can I know whether the current Connection is in a transaction?
I'm trying to share a connection between threads for executing callbacks of different queries. A scoped thread is used to avoid the 'static lifetime requirement for spawning. However, a scoped thread that captures a reference to the connection cannot be spawned, because neither Connection nor Cursor is marked Send.
The question is: can a connection handle multiple queries concurrently?
The following code works as expected.
let env = Environment::new().unwrap();
let conn = env.connect_with_connection_string(&conn_str).unwrap();
let r1 = conn.execute(&query1).unwrap();
let r2 = conn.execute(&query2).unwrap();
consume(r1);
consume(r2);
My expected concurrent code as below, so that consuming results from different queries with the same connection can be executed concurrently.
crossbeam_utils::thread::scope(|scope| {
let env = Environment::new().unwrap();
let conn = env.connect_with_connection_string(&conn_str).unwrap();
{
let conn = &conn;
scope.spawn(move |_| {
let r = conn.execute(&query1).unwrap();
consume(r);
});
}
{
let conn = &conn;
scope.spawn(move |_| {
let r = conn.execute(&query2).unwrap();
consume(r);
});
}
}).unwrap();
Hello. I've tried the example from the guide to run a simple query against a SAP SQL Anywhere database via an ODBC source, but I get an error message from the ODBC driver:
Hello, world!
row set1
Error: Diagnostics(State: HY024, Native error: 0, Message: [SAP][ODBC Driver]Invalid attribute value)
odbc.ini:
[sybase]
Driver=/opt/sqlanywhere17/lib64/libdbodbc17_r.so
DatabaseName=demo
ServerName=local
CommLinks=tcpip(host=localhost;port=2638)
CharSet=UTF-8
code example:
use odbc_api::{buffers::TextRowSet, Cursor, Environment};
use std::error::Error;
const BATCH_SIZE: u32 = 100_000;
fn main() -> Result<(), Box<dyn Error>> {
println!("Hello, world!");
let environment = unsafe { Environment::new() }?;
let conn = environment.connect("sybase", "***", "***")?;
match conn.execute("select current date", ())? {
Some(cursor) => {
let mut buffers = TextRowSet::for_cursor(BATCH_SIZE, &cursor, Some(4096))?;
println!("row set1");
let mut row_set_cursor = cursor.bind_buffer(&mut buffers)?;
println!("row set2");
while let Some(batch) = row_set_cursor.fetch()? {
for row_index in 0..batch.num_rows() {
let rec: Vec<&[u8]> = (0..batch.num_cols())
.map(|col_index| batch.at(col_index, row_index).unwrap_or(&[]))
.collect();
println!("{:#?}", rec);
}
}
}
None => {
eprintln!("Query is empty.");
}
}
Ok(())
}
The odbc crate is working with this ODBC driver, but it seems to be abandoned.
Hint: Implement CData, Input and Parameter for CStr. The indicator pointer can be a pointer to NTSL.
Queries can already be executed asynchronously, but opening the connection is still blocking.
The driver I'm trying to use only works with libiodbc. I was able to use iodbc with odbc-rs, so I don't know if anything special is needed.
For users that do not need custom buffers, it could make sense to offer the following helper:
pub fn buffer_from_result_set(
result_set_metadata: &mut impl ResultSetMetadata,
max_batch_size: usize,
) -> Result<ColumnarBuffer<AnyColumnBuffer>> {
let num_cols: u16 = result_set_metadata.num_result_cols()? as u16;
let descs = (0..num_cols)
.map(|index| {
let mut column_description = ColumnDescription::default();
result_set_metadata.describe_col(index + 1, &mut column_description)?;
Ok(BufferDescription {
nullable: column_description.could_be_nullable(),
kind: BufferKind::from_data_type(column_description.data_type)?,
})
})
.collect::<Result<Vec<_>>>()?;
Ok(buffer_from_description(max_batch_size, descs.into_iter()))
}
to be used with prepared statements as follows:
let max_batch_size = 100;
let query = format!("SELECT a FROM {table_name} ORDER BY id");
let mut prepared = connection.prepare(&query)?;
let buffer = buffer_from_result_set(&mut prepared, max_batch_size)?;
let cursor = prepared.execute(()).unwrap().unwrap();
let mut cursor = cursor.bind_buffer(buffer)?;
Hint: See the VarChar implementation. It should only differ in the SQL data type.
This is due to the fact that, as of version 0.22.1, VarCharSlice::NULL is bound as a VARCHAR(0). A likely fix is to bind it as a VARCHAR(1).
See discussion #61 .
Using the weird HFSQL database with iodbc.
The query seems to work when I'm using HFSQL's UI thing.
backtrace
#0 0x00007ffff7dc6bc7 in __pthread_kill_implementation () from /nix/store/fz54faknl123dimzz6jsppw193lx2mip-glibc-2.35-163/lib/libc.so.6
No symbol table info available.
#1 0x00007ffff7d79b46 in raise () from /nix/store/fz54faknl123dimzz6jsppw193lx2mip-glibc-2.35-163/lib/libc.so.6
No symbol table info available.
#2 0x00007ffff7d644b5 in abort () from /nix/store/fz54faknl123dimzz6jsppw193lx2mip-glibc-2.35-163/lib/libc.so.6
No symbol table info available.
#3 0x00007ffff7dbaa60 in __libc_message () from /nix/store/fz54faknl123dimzz6jsppw193lx2mip-glibc-2.35-163/lib/libc.so.6
No symbol table info available.
#4 0x00007ffff7dd0b9a in malloc_printerr () from /nix/store/fz54faknl123dimzz6jsppw193lx2mip-glibc-2.35-163/lib/libc.so.6
No symbol table info available.
#5 0x00007ffff7dd26ec in _int_free () from /nix/store/fz54faknl123dimzz6jsppw193lx2mip-glibc-2.35-163/lib/libc.so.6
No symbol table info available.
#6 0x00007ffff7dd5273 in free () from /nix/store/fz54faknl123dimzz6jsppw193lx2mip-glibc-2.35-163/lib/libc.so.6
No symbol table info available.
#7 0x00007ffff7f7f55a in SQLGetDiagRec_Internal () from /nix/store/1larqcdmwr5lyk38l7fvrakqa3lz14gr-libiodbc-3.52.15/lib/libiodbc.so.2
No symbol table info available.
#8 0x00007ffff7f7ff1e in SQLGetDiagRec () from /nix/store/1larqcdmwr5lyk38l7fvrakqa3lz14gr-libiodbc-3.52.15/lib/libiodbc.so.2
No symbol table info available.
#9 0x0000555555586e63 in odbc_api::handles::diagnostics::{impl#1}::diagnostic_record<odbc_api::handles::statement::StatementRef> (self=0x7fffffff3448, rec_number=1, message_text=&mut [u8](size=0))
at src/handles/diagnostics.rs:186
native_error = 0
state = [0, 0, 0, 0, 0, 0]
text_length = 1376
#10 0x0000555555586cdb in odbc_api::handles::diagnostics::Diagnostics::diagnostic_record_vec<odbc_api::handles::statement::StatementRef> (self=0x7fffffff3448, rec_number=1, message_text=0x7fffffff3280)
at src/handles/diagnostics.rs:138
cap = 0
#11 0x00005555555857bc in odbc_api::handles::diagnostics::Record::fill_from<odbc_api::handles::statement::StatementRef> (self=0x7fffffff3280, handle=0x7fffffff3448, record_number=1)
at src/handles/diagnostics.rs:235
No locals.
#12 0x00005555555883d7 in odbc_api::handles::sql_result::SqlResult<bool>::into_result_with_trunaction_check<bool, odbc_api::handles::statement::StatementRef> (self=..., handle=0x7fffffff3448,
error_for_truncation=false) at src/error.rs:201
record = odbc_api::handles::diagnostics::Record {state: odbc_api::handles::diagnostics::State ([0, 0, 0, 0, 0]), native_error: 0, message: Vec(size=0)}
function = "SQLExecDirect"
#13 0x0000555555574a0d in odbc_api::handles::sql_result::SqlResult<bool>::into_result<bool, odbc_api::handles::statement::StatementRef> (self=..., handle=0x7fffffff3448)
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.49.0/src/error.rs:172
error_for_truncation = false
#14 0x000055555556bd2b in odbc_api::execute::execute<odbc_api::handles::statement::StatementImpl> (statement=..., query=...)
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.49.0/src/execute.rs:97
sql = 0x7fffffff3e20
stmt = odbc_api::handles::statement::StatementRef {parent: core::marker::PhantomData<&*mut odbc_sys::Dbc>, handle: 0x5555556201e0}
#15 0x000055555556baa7 in odbc_api::execute::execute_with_parameters<odbc_api::handles::statement::StatementImpl, odbc_api::connection::{impl#1}::execute::{closure_env#0}<()>, ()> (lazy_statement=..., query=...,
params=()) at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.49.0/src/execute.rs:31
statement = odbc_api::handles::statement::StatementImpl {parent: core::marker::PhantomData<&*mut odbc_sys::Dbc>, handle: 0x5555556201e0}
#16 0x0000555555567f4e in odbc_api::connection::Connection::execute<()> (self=0x7fffffff4788, query="SELECT 1;", params=())
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.49.0/src/connection.rs:115
lazy_statement = odbc_api::connection::{impl#1}::execute::{closure_env#0}<()> {self: 0x7fffffff4788}
query = odbc_api::handles::sql_char::SqlText {text: "SELECT 1;"}
odbc-api = { version = "0.49.0", features = [ "iodbc" ] }
Note that with the old odbc crate I get:
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: FromBytesWithNulError { kind: InteriorNul(4) }', /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-0.16.1/src/diagnostics.rs:55:60
Currently this displays a panic to the end user, and even points to a line in main. The actual error is hidden somewhere in between. It would be nicer to present the user with a straight error message without the backtrace.
No code changes, so the environment must have changed. Need to fix this.
thread '<unnamed>' panicked at 'range end index 10084237 out of range for slice of length 10084230', ...\odbc-api-0.40.1\src\buffers\text_column.rs:230:13
Was also getting this at a lower value with a smaller insert when I was initializing TextColumn at 0,0.
It seems that there is no implementation of OutputParameter for something like:
let mut buf = odbc_sys::Timestamp {year:0, month:0, day:0, hour:0, minute:0, second:0, fraction:0 };
match row.get_data(colNo, &mut buf) {
...
Is there a different way to extract smalldatetime / timestamp and datetime2?
Hello. Today I tried to switch to 0.45 and found out that the speed of ODBC operations became very, very slow.
The same test code run with 0.34.1:
cargo run 0,50s user 1,64s system 61% cpu 3,485 total
with 0.45:
cargo run 72,97s user 2,39s system 96% cpu 1:18,00 total
The test code:
use odbc_api::IntoParameter;
use odbc_api::{buffers::TextRowSet, Cursor, Environment};
use std::error::Error;
use std::io::{self, stdin, Read};
const BATCH_SIZE: usize = 6000;
fn main() -> Result<(), Box<dyn Error>> {
let odbc_env = Environment::new().unwrap();
let mut conn = odbc_env.connect("database", "user", "pass");
match conn {
Ok(c) => {
println!("Connect success");
let mut stmt = c.preallocate().unwrap();
for i in 1..2000 {
match stmt.execute(
"select * from dba.test_table where id = ?",
&i.into_parameter(),
) {
Err(e) => eprintln!("Error executing query: {}", e),
Ok(None) => eprintln!("No result set"),
Ok(Some(mut cursor)) => {
let mut buf = TextRowSet::for_cursor(BATCH_SIZE, &mut cursor, Some(4096)).unwrap();
let mut rowset = cursor.bind_buffer(&mut buf).unwrap();
while let Some(batch) = rowset.fetch().unwrap() {
for row_index in 0..batch.num_rows() {
let _id = batch.at_as_str(0, row_index).unwrap();
let _name = batch.at_as_str(1, row_index).unwrap();
print!("{} -> {}", &_id.unwrap_or(""), &_name.unwrap_or(""));
println!();
}
}
}
};
}
}
Err(e) => eprintln!("can't connect to ODBC, {}", e),
}
Ok(())
}
Contrary to MSSQL, MariaDB does not check for truncation when binding parameters. This can be used to insert junk bytes beyond the valid buffer length into MariaDB, if the value has been fetched from another ODBC data source, truncated, and the same buffer is then used to insert into MariaDB.
Should this wrapper check for truncation?
Ah.... I was testing my batch insert and it didn't do anything.....
No errors.
Inserting one by one works.
I have a use case where it would be convenient to use multiple active result sets (MARS) for lazily loaded data (e.g. I'd like to avoid creating my own connection pool and opening result sets on separate connections).
An easy workaround for the moment is to append it to the end of the connection string, but I was wondering how to properly handle this. ODBC allows MARS to be enabled or disabled before connecting by using SQLSetConnectAttr
with SQL_COPT_SS_MARS_ENABLED
(see https://docs.microsoft.com/en-us/sql/relational-databases/native-client-odbc-api/sqlsetconnectattr?view=sql-server-ver15#sql_copt_ss_mars_enabled).
For the existing connect calls, like
ENV.driver_connect(...);
we could expose a way to toggle MARS on the environment (globally) for all future connections. We could have something like ENV.set_multiple_active_result_sets(true)
which could be called before driver_connect
.
Does that approach sound reasonable? Or should we have some kind of connection builder, or pass connection configuration like this an argument to all connection creation functions (e.g. driver_connect
)?
I checked the implementation of odbc-api: it allocates memory whose size is known in advance. Could the crate consider allocating memory dynamically, instead of directly reserving max_batch_size * length upfront?
When writing data in batches (say they are coming from a channel or a futures::Stream), we may not know their exact size until we receive them. This implies that when we initialize the buffers to bind to an INSERT statement, any capacity we declare when initializing the buffers upfront is either incorrect (it may error for some large batch) or must be very large (causing large memory usage).
Given that the capacity is transmitted to ODBC on every call (via RowSetBuffer::row_array_size), it seems that we could allow set_num_rows to be any number?
The alternative is to re-allocate new buffers if the batch grows. I noticed that set_num_rows just drops old vectors and allocates new ones, so maybe that is the solution here: re-create buffers when a batch is too large?
...
let mut cursor = self
.conn
.execute(sql, params)?
.ok_or_else(|| anyhow!("query error"))?;
let mut query_result = Self::get_cursor_columns(&mut cursor)?;
let descs = query_result
.columns
.iter()
.map(|c| <&OdbcColumn as TryInto<BufferDescription>>::try_into(c).unwrap());
let row_set_buffer = ColumnarAnyBuffer::from_description(self.max_batch_size, descs);
let mut row_set_cursor = cursor.bind_buffer(row_set_buffer).unwrap();
let mut total_row = vec![];
while let Some(row_set) = row_set_cursor.fetch()? {
for index in 0..query_result.columns.len() {
let column_view: AnyColumnView = row_set.column(index);
let column_types: Vec<_> = column_view.convert();
if index == 0 {
for c in column_types.into_iter() {
total_row.push(vec![c]);
}
} else {
for (col_index, c) in column_types.into_iter().enumerate() {
let row = total_row.index_mut(col_index);
row.push(c)
}
}
}
}
...
the exec result:
memory allocation of 21474836480 bytes failed
Aborted (core dumped)
odbc_api::guide shows how to query and insert perfectly. I'd like to query the table names and the schema of a specified table. Maybe Connection::columns should work for this; is there any example of such a feature? E.g. how to get the description of a config table:
Table "public.basic_config"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
-------------+--------------------------+-----------+----------+------------------------------------------+----------+-------------+--------------+---------------------
id | integer | | not null | nextval('basic_config_id_seq'::regclass) | plain | | | The basic_config id
center_name | character varying | | not null | | extended | | |
enable | boolean | | not null | false | plain | | |
create_time | timestamp with time zone | | not null | now() | plain | | |
Indexes:
"basic_config_pkey" PRIMARY KEY, btree (id)
Access method: heap
When using odbc-api, the length of the Text content stored in the database is dynamic. If a certain text column stored in the database has a length of 1500 and I set Text::max_str_len to 1280, a content-truncation error occurs. However, when I set max_str_len to 9999, the stored column content may be larger than 9999; the content length is uncontrollable, and when odbc-api reads, the text column is truncated. If I set the default value according to the maximum size of Text in the Pg database, memory allocation fails. So, please tell me how to dynamically control the Text::max_str_len of odbc-api.
I took the example code from TextRowSet (https://docs.rs/odbc-api/0.17.0/odbc_api/buffers/struct.TextRowSet.html) for a quick test drive with a query running against a Snowflake database. I was running into OOM issues with BATCH_SIZE set to the default 1000, but not at 100 or 10. My laptop has plenty of RAM (48 GB), so I was a bit surprised that a pre-allocation for 1000 rows of data could trigger an OOM.
memory allocation of 67108865000 bytes failed
Adding a bit of logging about the row types, I narrowed the cause of the huge buffer allocation attempts down to the behaviour of varchar columns, and wrote this to illustrate how that varchar field is presented to odbc-api:
// [...] rest of the code example in https://docs.rs/odbc-api/0.17.0/odbc_api/buffers/struct.TextRowSet.html
match connection.execute("SELECT 'abc'::varchar", ())? {
Some(cursor) => {
println!("{:?}", cursor.col_data_type(1)?);
}
None => {
eprintln!("Query came back empty. No output has been created.");
}
}
The above code prints:
Varchar { length: 16777216 }
Varchars without a specified length are quite common in Snowflake databases, to quote the documentation (https://docs.snowflake.com/en/sql-reference/data-types-text.html#varchar):
A column only consumes storage for the amount of actual data stored. For example, a 1-character string in a VARCHAR(16777216) column only consumes a single character.
I'm not familiar enough with ODBC to understand whether this is a feasible suggestion but the first thing that pops to mind is that one could make a conservative initial allocation for variable length fields with very large max-lengths - so maybe allocate 1kb rather than the 64MB the field type suggests, and re-allocate when encountering data that's too large?
It looks like ODBC signals truncation of results due to undersized buffers (https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlfetch-function?view=sql-server-ver15#diagnostics), could this be something to respond to by increasing the buffer size?
Hello! Thanks for this amazing crate, such high quality and essential for accessing Snowflake using Rust.
When trying to upgrade from 0.36.1 to 0.39.0 I stumbled upon an issue, starting from 0.37.0, where the buffer allocation is extremely slow. I am wondering if I am using it wrong or if there is some underlying issue.
The use case is querying JSON data in TEXT columns (VARIANT in Snowflake terms), hence a larger max_str_len to cope with the possible extremes. Going by the amazing guide, the batch sizes shouldn't be anything out of the ordinary, except maybe the mentioned max_str_len.
I'm running on Ubuntu 20.04.
This is a minimal example to trigger it (on my machine, and the CI I am using.)
use odbc_api::buffers::{BufferDescription, BufferKind};
const MAX_STR_LEN: usize = 65536;
const fn query_table_buffer_description() -> [BufferDescription; 1] {
[BufferDescription {
kind: BufferKind::Text {
max_str_len: MAX_STR_LEN,
},
nullable: false,
}]
}
fn main() {
for i in 1..8 {
let max_buffer_size = 10_usize.pow(i);
println!("Buffer size: {max_buffer_size}");
let now = std::time::Instant::now();
let _buffer = odbc_api::buffers::buffer_from_description(
max_buffer_size,
query_table_buffer_description().into_iter(),
)
.expect("Error"); // For 0.36.1 remove this line.
println!("Got my buffer in {}s", now.elapsed().as_secs_f64());
}
}
For 0.36.1, this makes it seem like we only allocate a pointer, which then grows to fit the use case? Or is the same allocation happening behind the scenes?
Buffer size: 10
Got my buffer in 0.0000336s
Buffer size: 100
Got my buffer in 0.0000057s
Buffer size: 1000
Got my buffer in 0.0000051s
Buffer size: 10000
Got my buffer in 0.0000049s
Buffer size: 100000
Got my buffer in 0.0000048s
Buffer size: 1000000
Got my buffer in 0.0000087s
Buffer size: 10000000
Got my buffer in 0.0000049s
Starting from 0.37.0 it results in:
Buffer size: 10
Got my buffer in 0.0112486s
Buffer size: 100
Got my buffer in 0.1132033s
Buffer size: 1000
Got my buffer in 1.1229766s
Buffer size: 10000
Got my buffer in 11.4219948s
Buffer size: 100000
Got my buffer in 115.9940551s
Buffer size: 1000000
^C // Based on the increase we could assume ~1 000 seconds here.
Thanks again for the amazing crate!
Mathias
When I use odbc-api and call conn.execute(sql, ()), the cursor returned by the function is None. Actually, I expect a row count as the result.
let sql = r#"UPDATE "request" SET "id" = 1, "name" = 'hallo', "created_at" = '2022-08-25 00:14:05 +08:00', "updated_at" = '2022-08-25 00:14:05 +08:00' WHERE "request"."id" = 1"#;
let res: Option<CursorImpl<StatementImpl<'_>>> = conn.execute(sql, ())?;
res returns None.
How do I get SQLRowCount for an UPDATE statement with odbc-api?
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/native-client-odbc-results/processing-results-odbc?view=sql-server-ver16
I'm interested in exposing some more functionality from SQLDriverConnectW, which we currently use in connect_with_connection_string:
odbc-api/odbc-api/src/handles/connection.rs
Lines 87 to 108 in 63c9646
Before I try to add anything, I was wondering how we could expose the following: DriverConnectOption::Complete or DriverConnectOption::Prompt when running with a window, while still being able to fall back to NoPrompt if I don't have a window (e.g. running headless vs. in an application UI). If we can figure out a good approach for this, I'd be happy to implement it in a PR.
So far, Cursor provides two methods to take the value of a row. I noticed that odbc-api recommends using the bind_buffer function, but I don't know how many rows there are in the execution result, so how should I set the batch_size and max_str_len parameters of TextRowSet::for_cursor?
The Cursor::next_row function returns a CursorRow, but CursorRow provides very few methods. Can you extend the methods for getting a row, so that I don't need to care about the batch_size and max_str_len parameters of bind_buffer with TextRowSet::for_cursor?
Hello, I want to pass different types of input parameters as Vec<Box<dyn InputParameter>>, but ParameterCollection isn't implemented for it. Is it possible to do this?
Hello,
I am trying to use the polling execute method on a connection, but the compiler keeps saying that the connection is not Send, even though I promoted it to Send.
This is the error:
error: future cannot be sent between threads safely
--> src/main.rs:17:18
|
17 | let handle = tokio::task::spawn(async move {
| ^^^^^^^^^^^^^^^^^^ future created by async block is not `Send`
|
= help: within `odbc_api::Connection<'_>`, the trait `std::marker::Sync` is not implemented for `*mut odbc_api::odbc_sys::Dbc`
note: future is not `Send` as this value is used across an await
--> /home/pierre/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.49.0/src/connection.rs:149:85
|
142 | &self,
| ----- has type `&odbc_api::Connection<'_>` which is not `Send`
...
146 | ) -> Result<Option<CursorPolling<StatementImpl<'_>>>, Error> {
| __________________________________________________________________-
147 | | let query = SqlText::new(query);
148 | | let lazy_statement = move || self.allocate_statement();
149 | | execute_with_parameters_polling(lazy_statement, Some(&query), params, sleep).await
| | ^^^^^^ await occurs here, with `&self` maybe used later
150 | | }
| |_____- `&self` is later dropped here
Some testing code that reproduces it. I used the example for promote_to_send and made it async.
use lazy_static::lazy_static;
use odbc_api::Environment;
use std::{thread, time::Duration};
lazy_static! {
static ref ENV: Environment = unsafe { Environment::new().unwrap() };
}
#[tokio::main]
async fn main() {
const MSSQL: &str = "Driver={ODBC Driver 17 for SQL Server};\
Server=localhost;\
UID=SA;\
PWD=My@Test@Password1;\
";
let conn = ENV.connect_with_connection_string(MSSQL).unwrap();
let conn = unsafe { conn.promote_to_send() };
let handle = tokio::task::spawn(async move {
let exec = conn.execute_polling("SELECT 1", (), || {
tokio::time::sleep(Duration::from_secs(1))
});
if let Some(cursor) = exec.await.unwrap() {
// dbg!(cursor);
}
// if let Some(cursor) = conn.execute("SELECT 1", ()).unwrap() {
// // dbg!(cursor);
// }
});
handle.await.unwrap();
}
const CREATE: &str = r#"SELECT 1"#;
Cargo.toml
[package]
name = "spark-odbc"
version = "0.1.0"
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
lazy_static = "1.4.0"
# odbc="*"
# odbc-safe="*"
# threadpool = "1.8.1"
odbc-api="*"
tokio = { version = "1.21.2", features = ["macros", "rt-multi-thread", "time"] }
Rust info:
rustc 1.64.0 (a55dd71d5 2022-09-19)
stable-x86_64-unknown-linux-gnu
Is there a way around this or should I go back to the sync API?
It would be nice if people could install odbcsv using Homebrew.
Is the lack of BufferKind::U64 just caused by a missing implementation, or is it an ODBC limitation?