Tuesday, May 9, 2017

T-SQL Tuesday #90 – Shipping Database Changes

Last month I finally figured out what T-SQL Tuesday is, after I "accidentally" used a hashtag that I did not fully understand. This time around I want to contribute to the topic of the month: "Shipping Database Changes."

As a full-time developer, this particular subject is near and dear to my heart. Shipping database changes is an absolute necessity for every developer. Since it is such a broad subject, I want to focus on the specifics of how I manage it in my current work environment.

First, a little background on my current role and the company where I work. My official title is Senior Software Developer at CivicaCMI, and my primary responsibility is to develop new software. Even though I end up writing my share of .NET code, at the end of the day I am a SQL Server developer who writes hundreds of lines of T-SQL a week to produce software that matches the specification.

Most of my work is broken down into ticket items that request a specific feature or address a particular bug. I would say that 90% of update requests take anywhere from several minutes to several hours to complete. As long as the changes are restricted to a single object, it is very easy to generate a script file. I have adopted the habit of creating change scripts as soon as I finish development on an item. This can be both good and bad: good because I already have a file that I can check in to version control, but bad if QA finds a problem and I need to make multiple changes before it passes testing.

Another approach that I sometimes have to resort to is running Redgate's schema compare software to identify all differences between the development and test servers. Depending on how many developers are working on the same database and how many unfinished changes exist, it is possible to pick up many changes that do not need to be promoted, which means spending extra time sifting through them before creating the correct change script. After having to go through this process multiple times as a last resort, I implemented a rule that each developer must generate their own scripts. Otherwise, I found myself spending almost an entire day packaging their changes, and in the end I still had to verify that the work was complete.

After trying many things that did not seem to work, I realized that it is a combination of software and procedures that enables a streamlined process for shipping database changes.

Software:

I have found through trial and error that the following list of applications enabled me to achieve the desired results.
  • Redgate Source Control - Very useful for keeping track of all changes between builds. Moreover, I often use it to roll back or troubleshoot my changes.
  • Redgate Schema Compare - The primary software I use to identify differences between the development and test servers.
  • SSMS 10.50 - Even though I use the latest SSMS (14.0) for T-SQL development, whenever I need to generate DROP and CREATE scripts, only SSMS 10.50 produces IF EXISTS DROP scripts for all objects by default.
Now, before I go any further, I would like to add a disclaimer about DROP/CREATE of SQL objects. Since the application connects with a specific SQL user, I do not have to worry about dropping permissions in production. Additionally, because users are out of the application during the upgrade, I do not need to do anything special to handle multiple versions of the same object running at the same time.

Procedures/Rules:

Now, these are not in any particular order.
  • Your own scripts - Each developer is responsible for generating their own scripts. If time permits I still verify them, but I do not spend an extended amount of time going over them.
  • Script files - I require that only one SQL object be modified per file. This way, if there is an issue with that file during deployment, a new version can be checked in to fix the problem.
  • File names - I have implemented the following standard.
    • 10_schema_objectname - used only for scripting changes to tables and other base object types.
    • 20_schema_objectname - used for all views; because files are applied in alphabetical order, it is guaranteed that all base tables are in the database before any change to views.
    • 30_schema_objectname - these files include all data changes, such as updates to lookup tables and other static data that is not usually changed from within the application.
    • 40_schema_objectname - used for all stored procedures, functions, and other SQL Server objects. If there is a dependency on another stored procedure or function, the dependent object must be moved to a higher number, such as 41 or 42. In cases of multiple nested dependencies, it is possible to go even higher. Nonetheless, the scripts are arranged in a manner that ensures all dependent objects are created in the right order.
  • DROP/CREATE - All views, stored procedures, and functions are scripted as IF EXISTS DROP, GO, CREATE. This guarantees that if a script runs on a client with a version mismatch, it does not fail. As an alternative, people who do have to worry about losing permissions in a production system can script IF NOT EXISTS CREATE, GO, ALTER.
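To make the two scripting styles above concrete, here is a minimal T-SQL sketch of both. The object name dbo.uspGetOrders and its body are made up for illustration; the real scripts would carry whatever procedure is being shipped.

```sql
-- Style 1: IF EXISTS DROP, then CREATE (works when the app connects
-- with a fixed SQL user, so dropped permissions are not a concern).
IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[dbo].[uspGetOrders]')
             AND type = N'P')
    DROP PROCEDURE [dbo].[uspGetOrders];
GO

CREATE PROCEDURE [dbo].[uspGetOrders]
AS
BEGIN
    SELECT OrderID, CustomerID FROM dbo.Orders; -- hypothetical body
END
GO

-- Style 2: IF NOT EXISTS CREATE a stub, then ALTER with the real body.
-- Because the object is never dropped, existing permissions survive.
IF NOT EXISTS (SELECT * FROM sys.objects
               WHERE object_id = OBJECT_ID(N'[dbo].[uspGetOrders]')
                 AND type = N'P')
    EXEC (N'CREATE PROCEDURE [dbo].[uspGetOrders] AS RETURN;');
GO

ALTER PROCEDURE [dbo].[uspGetOrders]
AS
BEGIN
    SELECT OrderID, CustomerID FROM dbo.Orders; -- hypothetical body
END
GO
```

Either style is idempotent: running the same file twice leaves the database in the same state, which is exactly what you want when a client's version is uncertain.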

When it comes to the actual deployment to test and production servers, it is handled by an application update program that runs scripts on the target server one by one in alphabetical order. Since we have clients running different versions, scripts always have to be applied in order; for example, if a customer is on version 1.5, they need 2.0 before they can get 2.5. This ensures that database changes are applied in the correct order, and I don't have to worry about something breaking.
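The real apply step is our in-house updater, but the ordering idea can be sketched in a few lines of shell. The file names below are hypothetical, and the sqlcmd call is shown commented out since server and database names would come from the environment:

```shell
#!/bin/sh
# Sketch only: walk change scripts in alphabetical order so that tables (10_)
# land before views (20_), data (30_), and procedures/functions (40_).
mkdir -p /tmp/changescripts && cd /tmp/changescripts
touch 40_dbo_uspGetOrders.sql 10_dbo_Orders.sql \
      20_dbo_vOrderSummary.sql 30_dbo_LookupData.sql   # hypothetical files

for f in $(ls *.sql); do            # ls sorts names alphabetically
    echo "applying $f"
    # sqlcmd -S "$SERVER" -d "$DB" -b -i "$f" || exit 1  # real apply step
done
```

The `-b` flag would make sqlcmd exit on the first batch error, so a failed script stops the whole run instead of leaving the database half-upgraded.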

One last problem that I deal with on a regular basis is version drift. It occurs when I manually patch a client for a fix without going through the proper build process. In those cases, I have to manually merge the changes into development to guarantee that they will make it out to the other clients. Once in a while, it becomes quite complicated to keep track of the different clients running different versions and to determine whether a needed fix can be resolved through an update or requires manual code changes.

Conclusion

Even though this is not a foolproof plan, it seems to work for me and the organization where I work. Over the last 12 months of my employment, I have been part of multiple major and minor builds of the software. I realize that for some developers none of the things I have described may be of any use, while others can easily relate to them and possibly even adopt some into their own process.

Thanks for reading. Don't forget to leave comments, questions or concerns. I always welcome constructive criticism.
