
Stored Procedure support in Massive #293

Open
mikebeaton opened this issue Feb 3, 2017 · 8 comments

@mikebeaton
Contributor

mikebeaton commented Feb 3, 2017

I have updated the code which I submitted as PR #281 so that it supports stored procedure and function calls in Oracle. (Only my own copy. As of now I haven't updated the PR and I haven't made a new PR.)

    var db = new DynamicModel("Scott.ConnectionString.Oracle (ODP.NET)");
    dynamic procResult = db.ExecuteProcedure("findMin", new { x = 1, y = 3 }, outParams: new { z = 0 });
    dynamic fnResult = db.ExecuteFunction("findMax", new { x = 1, y = 3 }, returnParams: new { returnValue = 0 });
    dynamic squareResult = db.ExecuteProcedure("squareNum", ioParams: new { x = 4 });

These work. The return value (procResult.z, fnResult.returnValue, squareResult.x) is an int, in all cases.
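As a hedged sketch of how such results can surface as dynamic properties (a hypothetical helper, not Massive's actual code): copying each output parameter's name and final value onto an ExpandoObject is what makes something like `procResult.z` readable afterwards.

```csharp
using System.Collections.Generic;
using System.Dynamic;

static class ResultShaping
{
    // Turn (name, value) pairs - e.g. output parameter results after a call -
    // into a dynamic object whose properties read as result.z, result.x, etc.
    public static dynamic ToExpando(IDictionary<string, object> outputs)
    {
        IDictionary<string, object> e = new ExpandoObject();
        foreach (var kv in outputs) e[kv.Key] = kv.Value;
        return e;
    }
}
```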

I was never aiming to support things like cursor results in Oracle(!) - I'm not sure that would ever really have a place in Massive.

I am (and was) aiming to add handy, lightweight support for calls to ExecuteNonQuery for CommandType.StoredProcedure, with support for parameter names and directions.

As I expected, and hoped, this basically just involved (after installing Oracle XE, getting it working, and getting the Massive Oracle tests working) some minor changes to Massive.SP.cs (which was always meant to be DB-agnostic code), and porting the (equally minor) parameter name and direction support from the SQL Server file to the Oracle file as well.

The new code also supports #263:

    var db = new DynamicModel("Scott.ConnectionString.Oracle (ODP.NET)");
    dynamic intResult = db.ExecuteSqlBlock("begin :a := 1; end;", outParams: new { a = 0 });
    dynamic dateResult = db.ExecuteSqlBlock("begin :d := SYSDATE; end;", outParams: new { d = DateTime.MinValue });

These work. The return types for intResult.a and dateResult.d are correct (int and DateTime respectively).

As in the original PR, all typing is done implicitly, as Massive already does it. In the case of output parameters, this requires a(n otherwise unnecessary) placeholder value of the desired type.
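To make the implicit-typing convention concrete, here is a minimal, hypothetical sketch (not Massive's actual code) of how a placeholder value's runtime type can be mapped to an ADO.NET DbType for an output parameter. A bare null carries no type information, which is why a non-null value of the desired type is required.

```csharp
using System;
using System.Data;

static class PlaceholderTyping
{
    // Map the runtime type of a caller-supplied placeholder to a DbType.
    // (Illustrative subset only; the real mapping lives in Massive's DB files.)
    public static DbType InferDbType(object placeholder)
    {
        if (placeholder == null)
            throw new InvalidOperationException(
                "A bare null placeholder carries no type information.");
        switch (Type.GetTypeCode(placeholder.GetType()))
        {
            case TypeCode.Int32:    return DbType.Int32;
            case TypeCode.Int64:    return DbType.Int64;
            case TypeCode.Decimal:  return DbType.Decimal;
            case TypeCode.Double:   return DbType.Double;
            case TypeCode.String:   return DbType.String;
            case TypeCode.DateTime: return DbType.DateTime;
            case TypeCode.Boolean:  return DbType.Boolean;
            default:                return DbType.Object;
        }
    }
}
```

Under this convention, `outParams: new { z = 0 }` produces an `Int32` output parameter and `outParams: new { d = DateTime.MinValue }` a `DateTime` one.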

@FransBouma
Owner

Ok!

One thing I wondered about a couple of days ago, which you seem to have handled, is whether it's logical to be able to execute a proc on a DynamicModel that's tied to a table (e.g. dbo.Products): the procedure called potentially has nothing to do with the table targeted by the DynamicModel, so it would make sense to allow this only on a DynamicModel which has no table set, and to throw an exception if a table is set and you call a procedure. (At least on paper that seems the right thing to do; in practice it might be ridiculous.) Another way to do this might be to create a subclass of DynamicModel with the ExecuteProcedure method which automatically passes no name for the table, so it's only usable for procs.

For output parameters, I reckon you need to set the value to something that represents the type? I.e.

dynamic intResult = db.ExecuteSqlBlock("begin :a := 1; end;", outParams: new { a = null });

won't work? (passing a value for output parameter is not obvious, as the value is never used, so some people might think 'how do I specify the parameter?')

You introduced 'ExecuteSqlBlock' but there's already an Execute() method which accepts a query as string. Not sure if ExecuteSqlBlock is therefore needed?

About the cursors: you can return a set of expandos (one for every row), like a normal resultset, but it is rather tedious to do for Oracle/PostgreSQL...

@mikebeaton
Contributor Author

mikebeaton commented Feb 3, 2017

Hi Frans,

I already use this code in my own project, and once I have a Massive handle to a table (actually to a view, in most cases) I also use it to execute SPs related to the table (actually, to the table underlying the view). I'd rather not break that usage pattern! It can also be used for entirely non-related things, but... I think that is fine! Massive is lightweight and flexible, so it seems wrong to artificially limit this.

Absolutely correct about output parameters. It is not obvious. It would need to be clearly flagged up in the documentation (as what it is, a lightweight way to set parameter types via placeholder values) - and I was already thinking that perhaps it should also be flagged up with an explicit warning, via an exception, if a null was sent as the value for any output parameter type? (And I've already flagged it up in the IntelliSense comments for the methods.)

There is one more thing about using implicit typing like this: it prevents passing NULL as the intended input value to inParams and ioParams. I think there should be a way to work round this for inParams, though not for ioParams, in which case it is much less of a limitation (though still some). I need to do more work on this. Personally, I think this is enough of an edge case to be considered acceptable given the advantages (specifically, the working, lightweight calls above!).

I see that you are right about the overlap in functionality between Execute and ExecuteSqlBlock; that needs addressing. None of my existing code attempts to support transactions yet either; this code reminds me that it should.

@mikebeaton
Contributor Author

mikebeaton commented Feb 7, 2017

I've been continuing to work on this. I have basic function support in Postgres:

	public class dateArgs
	{
		public DateTime d { get; set; }
	}
	static void Main(string[] args)
	{
		var db = new DynamicModel("Northwind.ConnectionString.PostgreSql (Npgsql)");

		dynamic procResult = db.ExecuteProcedure("find_min", new { x = 5, y = 3 }, outParams: new { z = 0 });
		dynamic fnResult = db.ExecuteFunction("find_max", new { x = 6, y = 7 }, returnParams: new { returnValue = 0 });
		dynamic squareResult = db.ExecuteProcedure("square_num", ioParams: new { x = 4 });
		dynamic addResult = db.ExecuteFunction("add_em", new object[] { 4, 2 }, returnParams: new { RETURN = 0 });
		dynamic dateResult = db.ExecuteFunction("get_date", returnParams: new dateArgs());
	}

The above all work, with correct types (procResult.z is int, dateResult.d is DateTime, etc.)

For all databases, all params arguments can now take object[], ExpandoObject, NameValueCollection (or subclass), anonymous type or POCO. Where anonymous type or POCO is sent in, the type information is now retained and used (property type is used to set param type when property value is null).

Null parameter support now fully works. If you engineer a class with a typed null property, it will correctly generate a typed parameter with Value=DBNull.Value (get_date example above). If you pass in a name:value pair as before, this will still work when it can (non-null values for any direction; both null and non-null values for input direction) and will throw an InvalidOperationException when it cannot (i.e. a null value without type information for output, input-output or return directions). But this is /only/ a limitation on name:value pairs now (which of course carry no type information for null). Type information will now be used whenever it is available, and typed null parameters for all directions /can/ now easily be specified. Basically, everything works the way it feels like it should!
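The "declared type survives a null value" point is the crux. As a hedged illustration (hypothetical helper, not Massive's code): reflecting over an anonymous type or POCO yields each property's declared type even when its current value is null, which is exactly what generating a typed DBNull parameter needs.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class ParamReflection
{
    // Enumerate (name, declared type, current value) for each public property.
    // The declared type is available even when the value is null, so a typed
    // parameter with Value = DBNull.Value can still be generated.
    public static IEnumerable<(string Name, Type Type, object Value)> Enumerate(object o)
        => o.GetType().GetProperties()
            .Select(p => (p.Name, p.PropertyType, p.GetValue(o)));
}
```

For example, with `new { d = (DateTime?)null }` this reports the declared type `DateTime?` alongside a null value: a typed null.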

The add_em example above shows additional Postgres-specific support for anonymous arguments. A DB specific callback to check support for this feature ensures that a meaningful InvalidOperationException is thrown if it is accessed on other DBs.

I've renamed ExecuteSqlBlock to (a new variant of) Execute, and properly integrated everything into Massive.Shared.cs.

The nature of Postgres functions (often returning records) makes it clear that parameter name and direction support is needed for Query as well as for Execute. Supporting Query in the case where the directional parameters are correctly used, but their return values are not available after the call (since Query already returns something else) is definitely possible, and I believe relatively simple given where I am now. But I also want to investigate whether adding support for query result sets /and/ output parameter values is a reasonable thing to do (since at that point this really would be quite full-featured stored procedure support from relatively lightweight code).

@FransBouma
Owner

Looks good. The resultset part is a different feature I think. Don't load everything into one feature as it might then become a big project. Small steps :)

@mikebeaton
Contributor Author

In order to have non-breaking SP support I think it is necessary to have a method name like ExecuteSqlBlock (or ExecuteWithParams, or something), in addition to the existing Execute, to use for making calls like:

    var db = new DynamicModel("Scott.ConnectionString.Oracle (ODP.NET)");
    dynamic intResult = db.ExecuteSqlBlock("begin :a := 1; end;", outParams: new { a = 0 });

This is a variant of Execute, with param names and directions. But if I add it as public dynamic Execute(string sql, object inParams = null, object outParams = null, object ioParams = null, object returnParams = null, bool isProcedure = false) in addition to the existing public virtual int Execute(string sql, params object[] args), then this causes a call like db.Execute("UPDATE Companies SET Active=0 WHERE CompanyID=@0 AND DepartmentID=@1", 1, 4), which would previously have been interpreted as having two args, to become misinterpreted as having one input and one output param.
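The clash is easy to reproduce in isolation. In this hedged sketch (hypothetical class, not Massive itself), C# overload resolution prefers the candidate applicable in its normal form (the one with optional parameters) over the candidate applicable only in its expanded `params` form, so the two trailing arguments bind as inParams and outParams:

```csharp
using System;

class OverloadDemo
{
    // Stand-in for the existing Execute(string, params object[]).
    public static string Execute(string sql, params object[] args)
        => "args overload";

    // Stand-in for the proposed directional-parameters variant.
    public static string Execute(string sql, object inParams = null, object outParams = null,
        object ioParams = null, object returnParams = null, bool isProcedure = false)
        => "named-params overload";
}
```

Calling `OverloadDemo.Execute("UPDATE Companies SET Active=0 WHERE CompanyID=@0 AND DepartmentID=@1", 1, 4)` resolves to the named-parameter overload, silently changing the meaning of existing calls; hence the need for a distinct method name.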

Just pointing this out, it doesn't need any further discussion!

@mikebeaton
Contributor Author

mikebeaton commented Feb 13, 2017

I have coded in my SP project a non-conflicting variant of Query, following the existing Massive pattern (yield return IEnumerable<dynamic>), but supporting parameter names, types and directions. As expected, the basic code change to add this in addition to the Execute support for these features was minimal. (Of course, I'm continuing to find ways to improve and clean up everything else.)

This automatically supports PostgreSQL RECORD and SETOF RECORD function return types:

	// read RECORD result
	var record = db.QueryProcedure("test_vars", inParams: new { w = 2 }).FirstOrDefault();

	// read SETOF RECORD result
	var setOfRecords = db.QueryProcedure("sum_n_product_with_tab", inParams: new { x = 10 });

Also SQL Server table valued functions (slightly indirectly, due to a difference in what the underlying provider supports):

	var tfnResults = db.QueryWithParams("SELECT * FROM dbo.ufnGetContactInformation(@PersonID)", new { @PersonID = 35 });

Also Oracle pipelined results (along the same pattern as SQL Server table-valued functions):

	var record = db.QueryWithParams("SELECT * FROM table(GET_EMP(:p_EMPNO))", inParams: new { p_EMPNO = 7782 }).FirstOrDefault();

Can I ask a question? If I understood you correctly in #281 (and having of course researched this myself too), there is no pattern of DbCommand and DbParameter settings which can read Oracle or Postgres cursors. To add that support would require either using Oracle- or PostgreSQL-specific database classes (which you said in #281 was "out of the question") or using a DbDataAdapter to fill a DataSet (which is not an existing Massive pattern). Is that right?

(I thought the approach here http://www.sqlines.com/postgresql/npgsql_cs_result_sets Stored Procedure - Working with a Single Result Set in C# might work with Npgsql without needing to use the Npgsql specific classes, but from an initial test it does not seem to work, instead passing back out a more or less useless reference to the cursor, as you indicated... [ADDED: I've realised the approach on that page used to work, but has been retired in Npgsql 3.0 - that seems a shame.])

@mikebeaton
Contributor Author

mikebeaton commented Feb 14, 2017

Update (new research): Npgsql used to support that pattern. If they still supported it, then reading back cursor results in PostgreSQL would just work, using the yield return IEnumerable<dynamic> over .ExecuteReader() pattern.

They stopped supporting that pattern because they didn't think anyone was using it, and couldn't see why it was in the code. npgsql/npgsql#438 I think there may be a lesson for us all in there, somewhere. ;)

NB (also new research): Oracle can also successfully read back cursor results using the existing yield return over .ExecuteReader pattern; it just needs the line ((OracleParameter)p).OracleDbType = OracleDbType.RefCursor applied to cursor return variables, which FWIW can be written instead with reflection and no explicit dependency on Oracle.DataAccess as:

	PropertyInfo pinfoTypeField = p.GetType().GetRuntimeProperty("OracleDbType");
	pinfoTypeField.SetValue(p, Enum.Parse(pinfoTypeField.PropertyType, "RefCursor"));

That's not a guess - it actually works. :)
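As a database-free illustration of the same reflection trick (hypothetical types; DayOfWeek stands in for OracleDbType in this sketch), an enum-typed property can be set knowing only the property name and the enum value's name, with no compile-time reference to the enum's declaring assembly:

```csharp
using System;
using System.Reflection;

class FakeParameter
{
    // Stands in for OracleParameter.OracleDbType in this demo.
    public DayOfWeek Kind { get; set; }
}

static class EnumByName
{
    // Set an enum-typed property via reflection, parsing the enum value
    // by name against the property's own declared enum type.
    public static void Set(object target, string property, string valueName)
    {
        PropertyInfo pi = target.GetType().GetRuntimeProperty(property);
        pi.SetValue(target, Enum.Parse(pi.PropertyType, valueName));
    }
}
```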

@roji

roji commented Feb 14, 2017

Just referencing this Npgsql refcursor discussion.

tl;dr: Npgsql-specific types aren't actually required for dereferencing refcursors, and single-resultset PostgreSQL functions should really return a table rather than a refcursor.
