Continuous Delivery within the .NET realm
Continuous what?
Well, if you browse the internet regularly, you will encounter two terms that are used rather inconsistently: Continuous Delivery and Continuous Deployment. In my words, Continuous Delivery is a collection of techniques, principles and tools that allow you to deploy a system into production with a single press of a button. Continuous Deployment takes that to the next level by completely automating the process of putting code changes that were committed to source control into production, all without human intervention. These concepts are not trivial to implement and involve technological innovations as well as some serious organizational changes. In most projects, introducing Continuous Delivery requires an entire cultural shift, and that demands great communication and coaching skills. But sometimes it helps to build trust within the organization by showing the power of technology. So let me use this post to highlight some tools and techniques that I use myself.
What do you need?
As I mentioned, Continuous Delivery involves a lot more than just development effort. Nonetheless, these are a few of the practices I believe you need to be successful.
- As much of your production code as possible must be covered by automated unit tests. One of the most difficult parts of that is determining the right scope of those tests. Practicing Test Driven Development (TDD), a test-first design methodology, can really help you with this. After trying both traditional unit testing and TDD, I can tell you that it is really hard to add maintainable and fast unit tests after you've written your code. A minimal test-first sketch follows this list.
- If your system consists of multiple distributed subsystems that can only be tested after they've been deployed, then I strongly recommend investing in acceptance tests. These 'end-to-end' tests should cover a single subsystem and use test stubs to simulate the interaction with the other systems (see the stub sketch after this list).
- Any manual testing should be banned. Period. Obviously I realize that this isn't always possible for legacy reasons. So if you can't do that for certain parts of the system, document which parts and do a short analysis of what is blocking you.
- A release strategy as well as a branching strategy are crucial. Such a strategy defines the rules for shipping (pre-)releases, how to deal with hotfixes, when to apply labels, and what version-numbering scheme to use.
- Build artifacts such as DLLs or NuGet packages should be versioned automatically, without requiring any manual development effort.
- During a deployment, the administrator often has to tweak web.config/app.config settings such as database connection strings and other infrastructure-specific settings. This has to be automated as well, preferably by parameterizing deployment builds.
- Build processes, if they exist at all, are quite often tightly integrated with build engines like Microsoft's Team Build or JetBrains' TeamCity. But many developers forget that the build script changes almost as often as the code itself. So in my opinion, the build script should be part of the same branching strategy that governs the code and be independent of the build product. This allows you to commit any changes to the build script together with the actual feature. An extra benefit of this approach is that developers can test the build process locally.
- Nobody is loathed more by developers than the DBA. A DBA who needs to manually review and apply database schema changes is a frustrating bottleneck that makes true agile development impossible. Instead, use a technique where the system uses metadata to automatically update the database schema during deployment (see the migration sketch after this list).
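To make the TDD point concrete, here's a minimal test-first sketch using xUnit (a framework I'll come back to later in this post). The `PriceCalculator` class and its discount rule are hypothetical; the essence is that the test is written before the class exists and thereby drives its design:

```csharp
using Xunit;

public class PriceCalculatorSpecs
{
    [Fact]
    public void Should_apply_a_volume_discount_when_the_quantity_reaches_the_threshold()
    {
        // Hypothetical class: at the time this test is written,
        // PriceCalculator does not exist yet. The test defines its API.
        var calculator = new PriceCalculator(discountThreshold: 10, discountPercentage: 5);

        decimal total = calculator.CalculateTotal(unitPrice: 10m, quantity: 10);

        Assert.Equal(95m, total); // 100 minus the 5% volume discount
    }
}
```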
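The stubs used by such acceptance tests don't need to be fancy. Assuming the subsystem under test talks to an external payment provider through an interface (all names here are made up), a stub can be as simple as:

```csharp
// Hypothetical seam through which the subsystem talks to an external payment provider.
public interface IPaymentGateway
{
    bool Authorize(decimal amount);
}

// Test stub that simulates the other system, so an end-to-end test can
// exercise our own subsystem without deploying the real payment provider.
public class AlwaysApprovingPaymentGateway : IPaymentGateway
{
    public bool Authorize(decimal amount) => true;
}
```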
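And as a sketch of what metadata-driven schema updates look like, here's a migration written with FluentMigrator, one of the libraries mentioned later in this post (the table and column names are made up):

```csharp
using FluentMigrator;

// Each migration carries a version number; during deployment a runner applies
// every migration the target database hasn't seen yet - no DBA required.
[Migration(201604120001)]
public class AddEmailColumnToCustomers : Migration
{
    public override void Up()
    {
        Alter.Table("Customers")
            .AddColumn("Email").AsString(256).Nullable();
    }

    public override void Down()
    {
        Delete.Column("Email").FromTable("Customers");
    }
}
```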
What tools are available for this?
Within the .NET open-source community a lot of projects have emerged that have revolutionized the way we build software.
- OWIN is an open standard for building components that expose some kind of HTTP endpoint and that can be hosted anywhere. WebAPI, RavenDB and ASP.NET Core MVC are all OWIN-based, which means you can build NuGet packages that expose HTTP APIs and host them in IIS, in a Windows Service, or even inside a unit test without the need to open a port at all. Since you have full control over the internal HTTP pipeline, you can even add code to simulate network connectivity issues or high-latency networks (see the in-memory hosting sketch after this list).
- Git is much more than a version control system. It changes the way developers work at a fundamental level. Many of the more recent tools, such as those for automatic versioning and generating release notes, have been made possible by Git. Git even triggered de-facto release strategies such as GitFlow and GitHub Flow that directly align with Continuous Delivery and Continuous Deployment. In addition to that, online services like GitHub and Visual Studio Team Services add concepts like pull requests that are crucial for scaling software development departments.
- xUnit is a unit test framework that executes test classes in parallel and will help you build software that runs well in highly concurrent systems. Just try converting existing unit tests built with more traditional frameworks like MSTest or NUnit to xUnit: it'll surface all kinds of concurrency issues that you normally wouldn't detect until you run your system in production under high load (the sketch after this list shows a typical example).
- Although manual testing of web applications should be minimized and superseded by JavaScript unit tests using Jasmine, you cannot entirely do without a handful of automated end-to-end tests. These smoke tests can really help you get a good feel for the overall behavior of the system. If this involves automated tests against a browser and you've built them using the Selenium UI automation framework, then BrowserStack is the online service I'd recommend. It allows you to test your web application against various browser versions and provides excellent diagnostic capabilities (see the BrowserStack sketch below).
- Composing complex systems from small components maintained by individual teams has proven to be a very successful approach to scaling software development. MyGet offers (mostly free) online NuGet-based services that encourage teams to build, maintain and release their own components and libraries and distribute them using NuGet, all governed by their own release calendar. In my opinion, this is a crucial part of preventing a monolith.
- PSake is a PowerShell-based, make-inspired build system that allows you to keep your build process in your source code repository just like all your other code. Not only does this allow you to evolve your build process with new requirements and commit it together with the code changes, it also allows you to test your build in complete isolation. How cool is it to be able to test your deployment build from your local PC?
- So if your code and your build process can be treated as first-class citizens, why not treat your infrastructure the same way? You can, provided you take the time to master PowerShell DSC and/or modern infrastructure platforms like Terraform. Does your new release require a newer version of the .NET Framework (and you're not using .NET Core yet)? Simply commit an updated DSC script and your deployment server is re-provisioned automatically.
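To illustrate that OWIN hosting flexibility: with the Microsoft.Owin.Testing package you can run an entire HTTP pipeline in-memory, without opening a port. The trivial 'pong' endpoint below is just a stand-in for a real Startup class:

```csharp
using System.Threading.Tasks;
using Microsoft.Owin.Testing;
using Owin;
using Xunit;

public class OwinPipelineSpecs
{
    [Fact]
    public async Task The_pipeline_can_be_tested_without_opening_a_port()
    {
        // TestServer hosts the OWIN pipeline in-memory; the same app delegate
        // could equally be hosted in IIS or a Windows Service.
        using (TestServer server = TestServer.Create(app =>
            app.Run(ctx => ctx.Response.WriteAsync("pong"))))
        {
            string body = await server.HttpClient.GetStringAsync("/");
            Assert.Equal("pong", body);
        }
    }
}
```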
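The concurrency issues that xUnit's parallel execution surfaces typically look like this (contrived) example: two test classes touching the same static state. Since xUnit 2 runs test classes in parallel by default, one of these can fail intermittently, just like the underlying code would under production load:

```csharp
using Xunit;

// Hypothetical shared state - the root cause of the flakiness.
public static class AmbientContext
{
    public static string CurrentTenant;
}

public class WhenTenantAIsActive
{
    [Fact]
    public void It_should_observe_tenant_a()
    {
        AmbientContext.CurrentTenant = "TenantA";
        Assert.Equal("TenantA", AmbientContext.CurrentTenant);
    }
}

public class WhenTenantBIsActive
{
    [Fact]
    public void It_should_observe_tenant_b()
    {
        AmbientContext.CurrentTenant = "TenantB";
        Assert.Equal("TenantB", AmbientContext.CurrentTenant);
    }
}
```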
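And for the BrowserStack scenario: it exposes a regular Selenium grid endpoint, so pointing an existing Selenium test at it mostly means swapping in a RemoteWebDriver. The credentials, capability values and application URL below are placeholders; check BrowserStack's documentation for the exact capability names:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;
using Xunit;

public class HomepageSmokeTests
{
    [Fact]
    public void The_homepage_should_load_in_ie11()
    {
        var capabilities = new DesiredCapabilities();
        capabilities.SetCapability("browserstack.user", "YOUR_USERNAME"); // placeholder
        capabilities.SetCapability("browserstack.key", "YOUR_ACCESS_KEY"); // placeholder
        capabilities.SetCapability("browser", "IE");
        capabilities.SetCapability("browser_version", "11.0");

        using (IWebDriver driver = new RemoteWebDriver(
            new Uri("http://hub.browserstack.com/wd/hub"), capabilities))
        {
            driver.Navigate().GoToUrl("https://my-app.example.com"); // hypothetical app
            Assert.Contains("My App", driver.Title);
        }
    }
}
```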
Where do you start?
By now, it should be clear that introducing Continuous Delivery or Continuous Deployment isn't for the faint of heart. And I didn't even talk about the cultural aspects and the change-management skills you need for that. On the other hand, the .NET realm is flooded with tools, products and libraries that can help you move in the right direction. Assuming I managed to show you some of the advantages, where do you start?
- Switch to Git as your source control system. All of the above is quite possible without it, but using Git makes much of it a lot easier. Just try to monitor multiple branches and pull requests with Team Foundation Server based on a wildcard specification (hint: you can't).
- Start automating your build process using PSake or something similar. As soon as you have a starting point, it becomes much easier to move more and more of the build process into the script and have it grow with your code base.
- Identify all configuration and infrastructure settings that deployment engineers normally change by hand, and add them to the build process as parameters that can be provided by the build engine. This is a major step in eliminating human error.
- Replace hand-written database scripts with a library like FluentMigrator or the Entity Framework that allows you to update the schema through code (see the migration sketch after this list). By doing that, you could even decide to support downgrading the schema in case a (continuous) deployment fails.
- Write so-called characterization tests around the existing code so that you have a safety net for the changes needed to facilitate continuous delivery and deployment (sketched below).
- Start the refactoring effort needed to automatically test larger chunks of the (monolithic) system in isolation. Also consider extracting those parts into a separate source control project to facilitate isolated development, team ownership and a custom life cycle.
- Choose a versioning and release strategy and strictly follow it. Consider automating the version number generation using something like GitVersion.
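As a sketch of what code-based schema management buys you, here's a hypothetical Entity Framework 6 migration; because Down() is implemented, a failed deployment can automatically downgrade the schema again:

```csharp
using System.Data.Entity.Migrations;

// Hypothetical EF6 code-based migration.
public partial class AddCustomerEmail : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Customers", "Email", c => c.String(maxLength: 256));
    }

    public override void Down()
    {
        // Executed when the deployment rolls back to the previous release.
        DropColumn("dbo.Customers", "Email");
    }
}
```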
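A characterization test doesn't assert what the code should do; it pins down what it currently does. The legacy class and the expected value below are made up; in practice you'd obtain that value by running the existing code once and recording the output:

```csharp
using Xunit;

public class LegacyPricingCharacterization
{
    [Fact]
    public void Freezes_the_current_behavior_of_the_legacy_calculator()
    {
        var calculator = new LegacyPriceCalculator(); // hypothetical legacy class

        decimal total = calculator.CalculateTotal(unitPrice: 9.99m, quantity: 3);

        // Not necessarily the *right* answer - just the answer the code
        // gives today. Any deviation during refactoring is a red flag.
        Assert.Equal(29.97m, total);
    }
}
```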
Let's get started
Are you still building, packaging and deploying your projects manually? How much time have you lost trying to figure out what went wrong, only to find out you forgot some setting or important step along the way? If this sounds familiar, hopefully this post gave you some nice starting points. And if you still have questions, don't hesitate to contact me on Twitter or by reaching out to me at TechDays 2016.