Replies: 8 comments 8 replies
-
I have a working example with strings and can see how to use a switch statement on result.column_datatype to do it:

```cpp
json j;
for (int i = 0; i < result.columns(); i++) {
    j[result.column_name(i)] = result.get<string>(i);
}
cout << j;
```
-
The column type is run-time information per result-set, so it has to be resolved for every result-set. Resolving it for the first row only is enough; then cache that (meta)information, e.g. in the form of a table mapping type identifier to function pointer.
-
variant does indeed look cleaner. However, I will see what I can do with the vector-of-pointers or vector-of-lambdas approach. Thanks!
-
I'd first go for a plain switch as the simplest to code, and see if the compiler can optimise it for me, like here: https://stackoverflow.com/a/30062376
-
Good point. I will actually codegen that C++ switch in SQL! The embedded template processing in SQLite that I did a few weeks ago (https://github.com/phrrngtn/sqlite-template-inja) is turning out to be very useful. In fact, the motivation for embedding ODBC in SQLite is so that I can do catalog queries against remote, heterogeneous databases and then run the results through some DDL templates to generate some local, derived artifacts. Thanks for your help. I have done a bit of low-level ODBC programming in the past, and it is just great how much simpler the application code is with nanodbc.
-
I have some working test code in a variety of ways of doing it. nanodbc was great! I did not end up using code generation, as the fairly small collection of JSON types meant that only about 6 functions had to be written, and the signature is exactly the same for each, so I did not have to get too involved in the templating details. The plumbing with SQLite needs to be done next, but the array-of-dicts is very simple to deal with, so it will not require much work in the extension. There is some potentially tricky stuff with connection pooling/caching, but I am going to ignore most of that for now. I may do some work with JSON arrays for bind values for prepared statements, and perhaps some support for multiple result-sets. I don't think there is much, if anything, that needs to be done with nanodbc at all. Perhaps a new code snippet demonstrating generating JSON from a result-set?

```cpp
typedef void (*column_getter_function_pointer)(json&, nanodbc::result&, short);

void get_string_value(json& jv, nanodbc::result& result, short column_number) {
    jv = result.get<string>(column_number);
}
```

Here are the column names and the getters being initialized from the result metadata:

```cpp
void result_to_json(nlohmann::ordered_json& retval, nanodbc::result& result) {
    int n = result.columns();
    std::vector<string> column_names(n);
    std::vector<column_getter_function_pointer> function_pointers(n);
    for (int i = 0; i < n; i++) {
        column_names[i] = result.column_name(i);
        function_pointers[i] = column_to_function(result, i);
    }
```

and here is the nested loop that goes over the results row by row and column by column. I wasted a lot of time trying to figure out how to deal with NULL on a type-by-type basis, which led me into some deep C++ weeds. Fortunately, the solution turned out to be very straightforward because of the heavy lifting done by the JSON library.

```cpp
    while (result.next()) {
        // very nice to have the keys in the select order
        nlohmann::ordered_json j;
        for (int i = 0; i < result.columns(); i++) {
            json jv;
            if (result.is_null(i)) {
                jv = nullptr;
            } else {
                (*function_pointers[i])(jv, result, i);
            }
            j[column_names[i]] = jv;
        }
        retval.push_back(j);
    }
}
```

The column_datatype is used to map to the appropriate 'getter' and is just a switch statement:

```cpp
column_getter_function_pointer
column_to_function(nanodbc::result& result, short column_number) {
    auto sql_type = result.column_datatype(column_number);
    column_getter_function_pointer fp;
    fp = &get_null_value;
    switch (sql_type) {
    case SQL_SMALLINT:
    case SQL_BIT:
    case SQL_INTEGER:
    case SQL_TINYINT:
    case SQL_BIGINT:
        fp = &get_int_value;
        break;
    case SQL_FLOAT:
    case SQL_REAL:
    case SQL_NUMERIC:
    case SQL_DECIMAL:
    case SQL_DOUBLE:
        fp = &get_float_value;
        break;
    // remaining cases truncated in the original post
    }
    return fp;
}
```
-
This turned out very nicely on the SQLite side also: all I had to do was write an SQL query:

```sql
select J.value->>'$.spid' as spid,
       2*(J.value->>'$.spid') as twice_spid,
       J.value->>'$.utc_date' as utc_date,
       J.value->>'$.name' as name,
       J.value->>'$.object_id' as [object_id],
       J.value->>'$.type_desc' as type_desc
FROM json_each(
    openrowset_json(
        "Driver={SQL Server};Server=.\TGRID4ALL;Database=rule4;Trusted_Connection=yes;",
        "select @@spid as spid, GETUTCDATE() as utc_date, * FROM sys.objects"
    )
) AS J;
```

I will put the extension up on GitHub when it is a little more polished. The code is just about 200 lines including comments, which is only about 10% of what I feared might be needed.
-
It is available at https://github.com/phrrngtn/sqlite-embedded-odbc
-
Hi, I am a database-oriented programmer, active in SQL and Python, mostly on SQL Server and SQLite, and I have some limited experience with ODBC via pyodbc. I am interested in embedding nanodbc within SQLite as an extension, with a view to implementing some Foreign Data Wrapper capabilities within SQLite. Ultimately, I would like to map remote tables into local virtual tables. My initial goal is more modest, and that is what I would like some help with here.
What would be a good (low-code) way of serializing an ODBC result-set to JSON using something like https://github.com/nlohmann/json ? I would like to use this technique to bootstrap my implementation with simple scalar functions that return JSON, and then post-process the result blob into a result-set using SQLite's built-in JSON processing capabilities. I have done very little C++ programming since the early 1990s, so I think I have a fair idea of how to do this in a basic way, e.g. retrieving the entire result-set, one array per column, serializing each column using the techniques described in https://github.com/nlohmann/json#conversion-from-stl-containers, and returning a header+body representation.
However, I am interested in whether you have any ideas on more slick/modern ways to do it, perhaps using iterators (or some other C++ mechanism that I don't know about) and techniques from https://github.com/nlohmann/json#how-do-i-convert-third-party-types, and returning an array of dicts, or a header + array of arrays/dicts.
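For illustration, the two shapes under consideration might look like this for a hypothetical two-row result with columns `id` and `name` (column names and values are made up):

```json
[
  {"id": 1, "name": "alpha"},
  {"id": 2, "name": null}
]
```

versus the header + array-of-arrays representation, which avoids repeating the keys per row:

```json
{
  "columns": ["id", "name"],
  "rows": [[1, "alpha"], [2, null]]
}
```

The array-of-dicts form shreds directly with SQLite's `json_each`, while the header form is more compact for wide result-sets.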
I have been hugely impressed with the power and performance of the JSON processing in SQLite (especially from 3.38 onwards) and have been very pleasantly surprised by how easily some solutions work out by separating stabilization/persistence (i.e. getting the blob into the database) from normalization (shredding a subset of the data into regular tables with primary keys, foreign keys, etc.).
Looking forward to suggestions. I hope that, given the very narrow definition of the problem, I will be able to get away with learning a small subset of modern C++. I would like to avoid an outcome where I end up 'writing C in C++'. Note that I am not looking for any assistance wrt the SQLite C API side of things, as I have up-to-date experience with that.
thanks in advance,
pjjH
Environment
All OS; C++11
I will do the development on WSL2 (Ubuntu) with g++ and unixODBC as the driver manager and SQL Server and PostgreSQL backends.