Tracking changes in related external database systems #415

Open
joyfullservice opened this issue Jul 17, 2023 · 10 comments

@joyfullservice
Owner

Many database development projects that I do in Microsoft Access involve tables that come from database systems outside Microsoft Access. Sometimes this is due to performance or storage requirements; other times it involves integrations with online systems that more commonly use MySQL databases on the back end.

A common scenario is where the front end is developed in Microsoft Access, but the back end tables are on a Microsoft SQL Server. All the changes in Access feed nicely into version control, but there currently isn't much visibility into the related changes that happened on the SQL server side. It would be nice to see those database architecture changes along with the front-end changes in the same repository, especially when the primary purpose for the SQL back end is to support the front-end application.

What I am proposing here is an extensible framework where different DBMS back ends could be connected with the current database project managed by the add-in. When you export changes for the project, it would also query the connected DBMS systems and update the related "source files" reflecting the external database system. This would be a READ ONLY, one-way sync that would only download the changes, not make any modifications to the external database system.

For my purposes, this would give me a linear view of back-end changes over time, right alongside the front-end database development I am doing in Microsoft Access. Most of the time these changes go hand-in-hand, so it seems logical that they could be reviewed and committed at the same time when working with version control. (Changing a column name in a simple implementation, for example, requires both a back-end and a front-end change.)

This framework would allow concrete implementations of the class for different database systems (MSSQL, MySQL, Azure, etc.) with their own ways of generating DDL-type representations of the database objects. You would have the flexibility to filter to just a subset of the objects (if only a few tables were needed), or sync everything (tables, views, stored procedures, functions, etc.) if that was a better fit for the project. Syncing would be optimized for performance, minimizing the need to regenerate object definitions when the underlying object hasn't changed.
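
To make this a little more concrete, here is a rough sketch of what such an interface might look like in VBA. The names are purely illustrative (the actual class and member names could end up quite different), and the bodies are left as comments:

' Illustrative sketch only - hypothetical names, not actual add-in classes.
' Requires a reference to Microsoft Scripting Runtime for Scripting.Dictionary.

' ISchemaProvider.cls (interface class): each back end returns its list of
' objects with modified dates, plus a DDL/text definition for a single object.
Public Function GetObjectList() As Scripting.Dictionary
End Function

Public Function GetObjectDefinition(strObjectName As String) As String
End Function

' clsSchemaMsSql.cls (concrete implementation for Microsoft SQL Server)
Implements ISchemaProvider

Private Function ISchemaProvider_GetObjectList() As Scripting.Dictionary
    ' Query sys.objects for names and modify_date values
End Function

Private Function ISchemaProvider_GetObjectDefinition(strObjectName As String) As String
    ' Run sp_GetDDL (or a fallback) and return the scripted definition
End Function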

In terms of storage, each external database connection would be listed in the project's vcs-options.json file, and each connection would have a dedicated folder under a databases folder in the export location. (The actual connection string would be stored in a .env file in this folder.) The database components would be grouped in folders, much like they are in the Access database source files.

.
├── ...
├── .gitattributes    
├── .gitignore                 
├── Database.accdb             
└── Database.accdb.src            
    ├── forms                
    │   ├── MyForm.bas       
    │   └── MyForm.json         
    ├── macros                   
    │   └── AutoExec.bas      
    ├── modules              
    │   ├── MyModule.bas    
    │   └── MyClass.cls          
    └── databases                        # Folder for external database connections
        ├── SQL_Prod                     # Connection to specific external database
        │   ├── .env                     # Connection string/credentials (not saved in version control)
        │   ├── tables                   # Type of object (tables)
        │   │   ├── orders.sql           # Table
        │   │   └── line_items.sql       # Another table
        │   └── views                    # Another object type...
        │       ├── last_order.sql
        │       └── order_totals.sql

Note that the .env file would contain the actual connection string and any other authentication secrets needed to connect to the external database. This file would NOT be committed to version control. If you clone the project to another system, you would need to set the connection string on that computer to successfully update the files from the external database. If no .env file is found, the add-in will simply display a warning and continue, leaving the existing files intact.

A benefit to this approach is that you could define different connection strings on different systems. For example, you might have a dev environment with a copy of your production database that you use for testing and development. You could track all those development changes at the source level using the connected dev database without impacting your production environment. After applying the changes to the production database, you could switch to the production branch, update the connection string, and the next export would confirm that the production database now mirrors what you last exported from the development branch.

On the front end, I was looking at adding a tab to the options dialog where you can manage a list of external databases.

[screenshot: proposed external databases tab in the options dialog]

Here is what I was thinking for the dialog to edit the connection:

[screenshot: proposed dialog for editing a connection]

As for what actually gets exported to "source files", that will depend on the options provided by the database server. Where possible, I would like to generate DDL statements, but for some objects a simple text representation may be the more consistent output. For Microsoft SQL Server specifically, I plan to use sp_GetDDL if available, otherwise fall back to the sp_help recordset approach currently used in clsAdpTable.
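
Roughly, the availability check and fallback could look something like this from VBA (a simplified sketch with a hypothetical helper, using ADO; not the final implementation):

' Hypothetical helper - sketch only, not the add-in's actual code.
' Requires a reference to Microsoft ActiveX Data Objects (ADODB).
Public Function GetObjectDdl(cnn As ADODB.Connection, strObject As String) As String

    Dim varProcId As Variant

    ' sp_GetDDL may be installed in the current database, or in master as a system proc.
    varProcId = cnn.Execute( _
        "SELECT COALESCE(OBJECT_ID('dbo.sp_GetDDL'), OBJECT_ID('master.dbo.sp_GetDDL'))" _
        ).Fields(0).Value

    If Not IsNull(varProcId) Then
        ' Preferred: sp_GetDDL scripts out a full CREATE statement.
        ' (Simplified here to assume the script comes back as a single value.)
        GetObjectDdl = Nz(cnn.Execute("EXEC sp_GetDDL '" & strObject & "'").Fields(0).Value)
    Else
        ' Fallback: OBJECT_DEFINITION covers views, procedures and functions;
        ' tables would still use the sp_help recordset approach from clsAdpTable.
        GetObjectDdl = Nz(cnn.Execute( _
            "SELECT OBJECT_DEFINITION(OBJECT_ID('" & strObject & "'))").Fields(0).Value)
    End If

End Function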

Why not just use other available tools? Well, I am encountering three main issues.

  1. I could not find any standard, free, open-source tool that spans this functionality across multiple database systems.
  2. I want something extremely performant. It is one thing to fire up SSMS and click to generate scripts for a database while you grab a cup of coffee; you don't want that to happen every time you export source on your database project. I think we can do WAY better than that by utilizing an index to track changes and only exporting what actually changed on the server.
  3. Ideally, I would like to keep the development focused in Access without having to jump back and forth between different tools and disconnected processes to produce a comprehensive set of output files. I like the idea of one click that just does the whole thing.

I know this has been a long description, but please feel free to chime in if you have any suggestions or input regarding how I am fleshing out this idea. Essentially I am trying to bridge a gap in the development process by extending the add-in to (optionally) track changes in the related back-end RDBMS platforms that I am using with my projects.

There is a risk in taking this approach: we are adding more complexity to the add-in, and more complexity means the code base is harder to understand and maintain. I am seeking to manage this by using class interfaces and concrete implementations, similar to what I did for database objects. My hope is that the benefit of the feature will significantly outweigh the cost of the added complexity.

Again, I would love to hear the thoughts of others from their experience in the field with this challenge of integrating front and back end changes into version control.

@bclothier
Contributor

FWIW, I am not sure I would want this in my projects. The reason is that there are already tools available for managing the database. In the case of a SQL Server database, we can use a Visual Studio Database Project, which generates a set of DDL files along with support for updating/synchronizing any changes to the schema, making it much easier to manage changes and keep a history of them in the git repository. Therefore, I have a git repository with an Access database in one folder and a SQL Database Project in another folder, and whenever I make a change that requires synchronization (a good example would be adding a new column to a table, exposing it in a view used by an Access form, then adding the control to the form), I can commit the changes from both projects in one go, and my git history then reflects both the DDL and the code changes together.

I am fairly sure MySQL has equivalent tooling, and so do several other DBMSes. I looked at DBeaver, which I've used for projects where I needed to support all kinds of DBMSes (shudder), and it has a git extension:
https://marketplace.eclipse.org/content/dbeaver-git

So even if one's favorite DBMS doesn't include the tooling natively, it can probably be had via DBeaver or an equivalent third-party app. Therefore, I think this is not in the add-in's domain, and we would be better off building a git repository that includes the folders & appropriate projects to support different tooling for each product.

Consider that we would run into a similar problem with a project that had a dependency on other components (e.g., a DLL); putting the source code for the DLL in its own folder under the git repository is likely the simpler path and does not require modifying the add-in.

Just my $0.02ZWD.

@mwolfe02

Traditionally, I have managed this by committing DDL change scripts to my Access repositories. I built a custom Access application (SchemaUpdater) that manages the DDL changes and deploys them to our various backends, as well as exporting them for inclusion in version control. To prevent schema changes from being accidentally made outside of the SchemaUpdater, I have a database trigger that stops all DDL changes (the Access app temporarily disables the trigger when applying DDL changes).

I do like the idea of having a highly performant schema export. In addition to DDL change scripts, my custom version control script exports table definition details. Those exports tend to be very slow the way I've scripted them, so I don't run them every time.

One of the things I really like about this add-in vs. my own custom version control script is how fast it is. If the database export added significantly to that time, I might not use it...or would only use it occasionally (schemas tend not to change often).

As for managing the additional complexity, I'm confident that you could do that in a way that does not significantly add to the ongoing maintenance burden based on what you've demonstrated with the add-in's existing architecture.

From a broader philosophical and developer education standpoint, I also think there's value in encouraging people to include their database schema as part of their version control process.

This is a key tenet of my version control philosophy:

If it can cause a bug in your software, you should be tracking it in version control. Database schema changes absolutely fall into that category.
-- Tracking Schema Changes at nolongerset.com

@joyfullservice
Owner Author

@bclothier and @mwolfe02 - thank you both for your insightful comments on this idea! You both shared some very helpful background from years of experience, and I am grateful for the input!

I can certainly appreciate the perspective of not trying to reinvent the wheel when other systems do a good job handling the creation and management of DDL scripts. Especially as you get into the more enterprise-level development tools, there are some systems that handle this really well, and far better than anything I would create here. For those with access to those tools and familiar with using them, I would absolutely agree that the higher-end tools are a better option for the advanced developer.

I can also appreciate what Mike shared from his experience with some of the more basic scripting options being pretty slow and unwieldy for frequent use in the everyday development process. For Access developers who are just making the transition to version control, it may be a daunting prospect to climb the learning curve of the more advanced tools, on top of the financial investment most of them require. My hunch is that many people out there would resonate with Mike's experience, where tracking schema changes in their present workflow is a bit clunky at best.

That is where I could see a feature like this being a helpful initial step. At no added cost, and with minimal configuration required, users can easily start tracking schema changes to their back-end databases. As they experience the benefits of more visibility into the schema changes, some may ultimately move on to more professional tools and complex workflows like schema migrations and automated testing. Others may find this simple implementation adequate for their current needs.

As Mike pointed out, performance is a really big factor. I started doing some proof-of-concept testing today, and I am pleased with what I am seeing so far. The actual generation of the DDL is a bit slow; I don't think there is much of a way around that, no matter what tool you are using. But where I think we will see a huge gain is that we don't need to regenerate all the DDL files every time. Retrieving over 5K objects with their modification dates from one database on our SQL server takes less than a second, so I am (optimistically) hopeful that with some efficient dictionary index comparisons, we can scan for changes in a second or two. I will post further updates after I finish some initial testing on this.
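
To give a feel for what that scan might involve, here is a simplified sketch of the kind of listing query and comparison loop I have in mind (illustrative names only; the details will likely change):

' Simplified sketch - not the add-in's actual code.
' Requires references to ADODB and Microsoft Scripting Runtime.
Public Sub ScanForChanges(cnn As ADODB.Connection, dIndex As Scripting.Dictionary)

    Dim rst As ADODB.Recordset

    ' One round trip returns every user object with its last modified date.
    Set rst = cnn.Execute( _
        "SELECT s.name + '.' + o.name AS full_name, o.type_desc, o.modify_date " & _
        "FROM sys.objects o INNER JOIN sys.schemas s ON s.schema_id = o.schema_id " & _
        "WHERE o.is_ms_shipped = 0")

    Do While Not rst.EOF
        If Not dIndex.Exists(rst!full_name.Value) Then
            ' New object - needs an initial export
        ElseIf rst!modify_date.Value > dIndex(rst!full_name.Value) Then
            ' Changed since the last export - regenerate the DDL for this object
        End If
        rst.MoveNext
    Loop

End Sub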

@mwolfe02

Sounds promising!

@joyfullservice
Owner Author

This afternoon I was able to finish some initial proof-of-concept testing to see what kind of performance we could get on an index-based schema export. I am really happy with what I am seeing!

On a smaller SQL database with about 60 objects, the full run to export all the objects took about 9 seconds. The second run (with no schema changes) finished in about a tenth of a second! (That is checking the last modified dates in SQL and comparing them with the dates saved in the index from the last export.)

--------------------------------------------------
                PERFORMANCE REPORTS
--------------------------------------------------
Operations                    Count     Seconds
--------------------------------------------------
Run sp_GetDDL on USER_TABLE   36        7.84
Run sp_GetDDL on SQL_STORED_P 12        0.36
Run sp_GetDDL on VIEW         13        0.30
Retrieve SQL Objects          1         0.20
Write File                    62        0.03
Loop through SQL objects      1         0.02
Run sp_GetDDL on SQL_INLINE_T 1         0.02
Parse JSON                    1         0.01
Verify Path                   62        0.01
Update Schema Index           62        0.00
Read File                     1         0.00
--------------------------------------------------
Other Operations                        0.00
--------------------------------------------------


--------------------------------------------------
                PERFORMANCE REPORTS
--------------------------------------------------
Operations                    Count     Seconds
--------------------------------------------------
Retrieve SQL Objects          1         0.05
Parse JSON                    1         0.01
Loop through SQL objects      1         0.00
Read File                     1         0.00
--------------------------------------------------
Other Operations                        0.00
--------------------------------------------------

For an even bigger test, I ran this against a large CRM production database with over five thousand objects. As you can imagine, the first run took a long time (about 6.8 minutes). The real test was the second run, and I was impressed that it compared every object date with the index in under one second!!

--------------------------------------------------
                PERFORMANCE REPORTS
--------------------------------------------------
Operations                    Count     Seconds
--------------------------------------------------
Run sp_GetDDL on SQL_STORED_P 4305      197.30
Run sp_GetDDL on USER_TABLE   988       191.64
Run sp_GetDDL on SQL_SCALAR_F 278       11.49
Write File                    5740      3.91
Run sp_GetDDL on SQL_TABLE_VA 83        3.58
Loop through SQL objects      1         1.98
Run sp_GetDDL on SYNONYM      36        1.03
Run sp_GetDDL on SQL_INLINE_T 32        1.02
Verify Path                   5740      0.88
Run sp_GetDDL on VIEW         18        0.43
Update Schema Index           5753      0.34
Retrieve SQL Objects          1         0.11
Create Folder                 3         0.00
--------------------------------------------------
Other Operations                        0.16
--------------------------------------------------


--------------------------------------------------
                PERFORMANCE REPORTS
--------------------------------------------------
Operations                    Count     Seconds
--------------------------------------------------
Parse JSON                    1         0.70
Loop through SQL objects      1         0.17
Retrieve SQL Objects          1         0.10
Read File                     1         0.01
--------------------------------------------------
Other Operations                        0.00
--------------------------------------------------

Obviously this is just preliminary testing, and there will be a tiny bit of overhead added for UI updates and logging, but I am really happy with the performance I am seeing. For my purposes, this will be well within the range of an acceptable processing load for the added benefit of including schema updates in my VCS exports. 👍 😁

joyfullservice added a commit that referenced this issue Jul 20, 2023
Forms to manage the external database connections exported in connection with regular VCS exports. See #415
joyfullservice added a commit that referenced this issue Jul 20, 2023
I can now add, edit and delete external database connections. The connection strings are stored in .env files, not included in the options file. #415
joyfullservice added a commit that referenced this issue Jul 20, 2023
Initial rough draft of retrieving SQL server object DDL. #415
joyfullservice added a commit that referenced this issue Jul 21, 2023
Schema export is now running as a part of the export process. #415
joyfullservice added a commit that referenced this issue Jul 21, 2023
This allows you to filter to only export a subset of the database objects. (Note that this still pulls a full list of objects from the SQL server, but only actually exports the ones that match the filter.) #415
@hecon5
Contributor

hecon5 commented Jul 28, 2023

I'll toss my two cents in here.

I agree; DB-specific tools are far superior for editing/managing.

What we struggle with is enforcing version control on those DBs, or at least tracking the changes, because we don't have a good way to build that in with the environment setup we have. What usually happens is the change is made on the backend, things are good, we move on. Because we don't need to deploy a change to the front end, our devs may be tasked with other things, and once it's good, they move on.

But if AccessVCS could track those changes and we could commit the front-end/back-end changes concurrently (often a back-end change is combined with front-end changes), we'd be better at even tracking that the change happened, and at reviewing the changes, too.

So while this may be slower than other tools, I'd be happy to see it, if for nothing else than to encourage us to track SQL config changes better.

joyfullservice added a commit that referenced this issue Jul 29, 2023
Fleshed out some functions that utilize Windows API calls for extremely fast scanning of files and modified dates. This runs significantly faster than iterating through files using FSO, and resolves a critical performance bottleneck when scanning database schemas with large numbers of objects. #415
joyfullservice added a commit that referenced this issue Jul 29, 2023
I will describe this more in #415, but I took a different approach that is simpler and faster for keeping a copy of source files sync'd with the related database objects.
@joyfullservice
Owner Author

I have some great news to share! Over the past week I have been retooling and refining the performance of this feature, and I am excited to report that the performance is coming out even better than I had hoped for! Scanning for changes in a massive SQL database with over 5K objects is completed in 0.38 seconds! After adding two external SQL server databases to my VCS add-in project, I can still run an export with no changes in less than one second!

--------------------------------------------------
                PERFORMANCE REPORTS
--------------------------------------------------
Category                      Count     Seconds
--------------------------------------------------
Huge DB (MSSQL)               1         0.38
Modules                       0         0.15
Small DB (MSSQL)              1         0.08
DB Properties                 0         0.03
Themes                        0         0.03
Nav Pane Groups               0         0.02
Tables                        0         0.01
Forms                         0         0.01
DB Connections                0         0.01
Doc Properties                0         0.01
Hidden Attributes             0         0.01
VBE References                0         0.01
Queries                       0         0.01
Table Data Macros             0         0.00
Shared Images                 0         0.00
VB Project                    0         0.00
Project                       0         0.00
Proj Properties               0         0.00
Macros                        0         0.00
Relations                     0         0.00
IMEX Specs                    0         0.00
Table Data                    0         0.00
VBE Forms                     0         0.00
Reports                       0         0.00
Saved Specs                   0         0.00
--------------------------------------------------
TOTALS:                       2         0.77
--------------------------------------------------

--------------------------------------------------
Operations                    Count     Seconds
--------------------------------------------------
Loop through SQL objects      5802      0.16
Parse JSON                    3         0.13
Scan DB Objects               1         0.11
Console Updates               4         0.09
Retrieve SQL Objects          2         0.07
Clear Orphaned Files          6         0.06
Convert to JSON               344       0.06
Get File Listing (API)        9         0.04
Increment Progress            103       0.04
Get VBA Hash                  89        0.04
Read File                     5         0.03
Scan External Databases       1         0.02
Compute SHA256                102       0.01
Quick Count Objects           1         0.00
Check for linked table        6         0.00
Clear Orphaned Schema Files   2         0.00
Quick Count Files             1         0.00
Clear Orphaned Folders        1         0.00
--------------------------------------------------
Other Operations                        0.02
--------------------------------------------------

One of the biggest performance gains came out of some refactoring on the index concept. Essentially, we want to have a set of source files that reflect the current state of the SQL server objects. Since this is a one-way sync (pulling changes from the SQL server to source files), we don't have to worry about tracking changes in the other direction.

Initially I was using a JSON index to manage those changes, similar to the index we use to manage the state of the Access database. This worked pretty well, but did involve some minor performance issues on very large sets of files. (Particularly in parsing the JSON) I also knew that I needed to scan all of the files each time we sync, to check for any changes that happened on the source files. (I.e. missing, extra, or changed source files.)

As I was discussing this project with one of my coworkers, we had an idea that proved to be a game-changer. While the JSON index was a workable solution, it had some challenges. Since the index is excluded from version control, each developer would have to download all of the DDL files when they initially exported the project. Even though the source files themselves were already the right versions, the index would have to be regenerated for each developer, which means the files would have to be regenerated again. That's probably okay, but definitely not ideal.

That's when the idea came to use the file modified date to match the SQL object modified date. By doing this, the files themselves become the index, and for schema exports, we can do away with the JSON index altogether! Let me explain what I mean by this...

Let's say I have a table called MyTable on the SQL server that was last modified on 1/1/2023. If I export this object to a .sql file and set the file modified date to 1/1/2023, I now have a matching source file. If the server object changes, the dates will be different, and the file should be replaced with an updated copy. If the source file is changed, its modified date will also change, requiring an update from the server. Either way, the source file would be updated during the next export. (No additional index required.)

Since the source file is already committed to the version control system, this date is preserved for all of the developers. If John checks out the project on a different computer, the modified dates will already match the dates of the objects on the SQL server, so he will not have to regenerate the files at all. The add-in will simply perform a lightning fast scan of the file modified dates, comparing them with the SQL objects, and he will be good to go.

I realize that it is a bit unconventional to actually change the modified dates of the source files, but in this particular application, the simplicity and performance benefits are huge. It also makes intuitive sense: if my object on the SQL server was modified on 1/1/2023, I will see that same date on my source file. I can even sort my source files by date modified to get a quick overview of when things were changed on the server.
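
As a simplified illustration of the comparison (hypothetical helper names; the actual add-in code differs and uses Windows API calls for faster scanning):

' Sketch only - the real implementation scans whole folders via the Windows API.
' Requires a reference to Microsoft Scripting Runtime.
Public Function NeedsExport(strFilePath As String, dteServerModified As Date) As Boolean

    Dim objFso As New Scripting.FileSystemObject

    If Not objFso.FileExists(strFilePath) Then
        ' No source file yet - export it.
        NeedsExport = True
    Else
        ' Any mismatch (object changed on the server, or the local file was touched)
        ' means the file should be replaced on the next export.
        NeedsExport = (objFso.GetFile(strFilePath).DateLastModified <> dteServerModified)
    End If

End Function

' After writing the .sql file, stamp it with the server object's modify_date so the
' file itself serves as the index (e.g. via a SetFileDate wrapper around the
' SetFileTime API - name hypothetical).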

About those file dates...

Setting a file modified date seems like a pretty trivial thing, but this turned out to consume many hours this week. It was really important to me that the modified date I see in Windows Explorer matches what I see on my SQL server. However, applying the correct time zone and daylight saving time offsets to historical dates is a non-trivial task! Even Windows isn't entirely consistent in how these dates are displayed.

In NTFS, the actual file dates are stored in UTC (Coordinated Universal Time). But they are displayed in Windows Explorer based on the current time zone settings and daylight saving time. For historical dates, Windows uses a registry key with a database of dates for when to apply DST. The date you see in Windows Explorer can be different from the date returned by FileSystemObject.GetFile().DateLastModified; FSO and DIR (in the command prompt) apply the current DST bias, not the one in effect at the time the file was modified. Yes, it was fun... 😁

The happy ending is that I did eventually figure out the API calls needed to return the date that matches Windows Explorer, based on the built-in Windows DST rules. (The more technically correct time, especially from an end-user perspective.)
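
For anyone curious, SystemTimeToTzSpecificLocalTime is the kind of API involved; here is a rough sketch of that style of conversion (simplified, and not necessarily identical to what the add-in actually does):

' Rough sketch (VBA7 declares shown); the add-in's actual implementation may differ.
Private Type SYSTEMTIME
    wYear As Integer
    wMonth As Integer
    wDayOfWeek As Integer
    wDay As Integer
    wHour As Integer
    wMinute As Integer
    wSecond As Integer
    wMilliseconds As Integer
End Type

Private Type TIME_ZONE_INFORMATION
    Bias As Long
    StandardName(0 To 31) As Integer
    StandardDate As SYSTEMTIME
    StandardBias As Long
    DaylightName(0 To 31) As Integer
    DaylightDate As SYSTEMTIME
    DaylightBias As Long
End Type

Private Declare PtrSafe Function GetTimeZoneInformation Lib "kernel32" _
    (lpTimeZoneInformation As TIME_ZONE_INFORMATION) As Long
Private Declare PtrSafe Function SystemTimeToTzSpecificLocalTime Lib "kernel32" _
    (lpTimeZoneInformation As TIME_ZONE_INFORMATION, _
     lpUniversalTime As SYSTEMTIME, lpLocalTime As SYSTEMTIME) As Long

' Convert a UTC date to local time using the time zone's standard/daylight
' transition rules for that date, rather than just applying the current bias
' (which is what FSO and DIR do).
Public Function UtcToDisplayLocal(dteUtc As Date) As Date

    Dim tzi As TIME_ZONE_INFORMATION
    Dim stUtc As SYSTEMTIME
    Dim stLocal As SYSTEMTIME

    stUtc.wYear = Year(dteUtc): stUtc.wMonth = Month(dteUtc): stUtc.wDay = Day(dteUtc)
    stUtc.wHour = Hour(dteUtc): stUtc.wMinute = Minute(dteUtc): stUtc.wSecond = Second(dteUtc)

    GetTimeZoneInformation tzi
    SystemTimeToTzSpecificLocalTime tzi, stUtc, stLocal

    UtcToDisplayLocal = DateSerial(stLocal.wYear, stLocal.wMonth, stLocal.wDay) _
        + TimeSerial(stLocal.wHour, stLocal.wMinute, stLocal.wSecond)

End Function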

While setting the modified date to match what you see in SQL Server is very intuitive, it won't work as well if different developers are in different time zones. What I plan to do for this is provide an option (at the schema level) to use UTC dates if preferred. I think I will have the default be the local date, since that would suit the majority of developers, but UTC will be available for anyone who needs it. If you check that option, the file modified date will be set to a UTC time rather than a local time.

@mwolfe02

@joyfullservice, when I first started reading your latest comment, I immediately thought, "Wait until Adam has to deal with all the time zone and daylight savings time issues." Time is like character encodings. Dead simple at first glance (everyone knows how time and letters work, right?) but positively brimming with complexity the further you get into the weeds (and the weeds go deeeeep). I was very pleasantly surprised, then, to see that you had already bumped into and overcome most of those challenges. Well done!

joyfullservice added a commit that referenced this issue Aug 1, 2023
This allows you to enable or disable a particular external database connection. #415
joyfullservice added a commit that referenced this issue Aug 1, 2023
The dictionaries are now effectively merged, including shared keys. (Incoming values replace existing values where the keys match.) This is used for overlaying .env options on top of a defined schema entry. (For example, to disable sync on a particular computer.) #415
joyfullservice added a commit that referenced this issue Aug 1, 2023
A file date can be set using UTC or Local Time. #415
joyfullservice added a commit that referenced this issue Aug 1, 2023
Refactored the handling of schema options to allow .env options to override the saved schema options. Also adjusted form layout to include new options and controls. #415
joyfullservice added a commit that referenced this issue Aug 1, 2023
If no more source files (or other files) exist in the object type subfolders, remove the subfolders. #415
joyfullservice added a commit that referenced this issue Aug 1, 2023
The ideal is to install sp_GetDDL at the server level, and run it as a system stored procedure against any database on the server. But this may not be possible in some environments. If you can't use it as a system SP, then it can be used as a local SP in the target database. If you can't install it at all, then you can fall back to built-in SQL functions that will produce some usable output. #415
joyfullservice added a commit that referenced this issue Aug 1, 2023
A major usability boost here... Rather than leaving the user to try to figure out how to create a connection string, we will load in existing connection strings used by linked tables and pass-through queries in the current database. In most cases the user can simply select one of the connection strings and start tweaking the filter right from there. #415
joyfullservice added a commit that referenced this issue Aug 1, 2023
Fallback definitions are working as intended. #415
@joyfullservice
Owner Author

I have merged my changes into the dev branch, and we should be ready to do some testing! I need to update the documentation on the wiki, but for now, here are a few notes on how this works...


Right now, Microsoft SQL Server is the only external database supported. If you have sp_GetDDL installed on the database server, or on the target database, the schema export will use that. If that stored procedure is not available, it will fall back to native SQL commands (sp_help text output for tables, and object_definition for other objects).

One of the goals was to make this as intuitive as possible, and I really wanted a simple way for a user to add a connection string. After a fair bit of research, I could not find a (reasonable) way to launch the ODBC connection dialog and return a connection string, so I took a different approach.

Taking advantage of @bclothier's excellent work on the database connections, I tapped into this class to build a list of the unique connection strings used in the current database, and provided those as suggested options in a combo box. The user can select one of those, or enter their own connection string.


Most of the time if you are interacting with an external database you will have at least one linked table or pass-through query in your front-end database. Exporting the schema for a selection of objects on that server is as simple as selecting one of the connection strings from the list, then tweaking and testing your filter to return the desired objects.

The filter uses one rule per line, and the rules are processed sequentially; the last matching rule applies to each object. Comment lines are prefixed with # and exclusions with !. If no filter is entered, all objects are returned. The filter is tested against what will become the file name of the object, including the component type subfolder. Here are some examples:

# Include all tables
tables\*

# Exclude some tables based on name
!*TEMP_*

# Also include a specific view
views\my_special_view.sql

As for the object dates, you have an option to Use UTC Dates if you are sharing a project across time zones and want a consistent date in the source files that matches the server objects. For smaller operations in a single location, leaving this unchecked allows you to see a local date on your system that matches the modified date you see on the SQL server.

You also have the option of saving the connection string (and any other connection parameters) in a local .env file that is not included in version control. Using this file, you can enable the connection on some computers but not on others. You can also specify a different connection string or driver to retrieve and sync the same objects.

If you are using integrated AD authentication and/or the connection string isn't that sensitive in your environment, you can uncheck this box, and the connection string will be saved with the project options and available to others who check out the project, without having to set up the connection string on each computer. I left this option checked by default, so new users will default to the more secure approach and not inadvertently add a connection string to version control.

If you build the add-in from source on the dev branch, you can test this out and let me know what you think. 😄 I am seeing some pretty amazing performance in my environment, but of course that is dependent on your specific setup and database connection.

joyfullservice added a commit that referenced this issue Aug 2, 2023
Extend this query to reflect the date of any dependent objects. (Returns the max modified date of the object itself, or any dependent object.) This way if you modify a trigger on the SQL server, it will perform an export of the (parent) table object, which includes the trigger DDL. #415
@hecon5
Contributor

hecon5 commented Aug 3, 2023

Just a note, after looking into possible issues for why #420 is happening, I realized that sp_getDDL isn't a standard procedure. Since this leverages sp_getDDL, it would be good to make it obvious the procedure isn't a standard one, and that it needs to be manually defined from https://www.stormrage.com/SQLStuff/sp_GetDDL_Latest.txt to be available.

It took me a few read-throughs of this whole thread and some internet sleuthing before it clicked that I needed to add it to my server. Since we don't have master access on our server, the other options should fall back, but just a note for usability.

joyfullservice added a commit that referenced this issue Aug 7, 2023
You can now export basic object definitions from MySQL databases. (Probably far less used than Microsoft SQL Server, but still helpful for some of my projects.) #415