
Configuration Management – Part 7: The Time

An often overlooked aspect of configuration management is time. There are many scenarios where the correct value is dependent on time (or date for that matter). Typical examples are organizational structures (think “re-org”) or anything that is somehow related to legislation (every change in law or regulation becomes active at a certain time).

There are several approaches to deal with this situation:

  • Deployment: This is the simplest way, and its ease comes with a number of drawbacks. What you basically do is make sure that a deployment happens exactly at the time when the changes should take effect. Obviously this collides with a typical CD (Continuous Deployment) setup. And while it is certainly possible to work around this, the remaining limitations usually make it not worthwhile. As to the latter: what if you need to process something with historical data? A good example is tax calculation, when you need to re-process an invoice that was issued before the new VAT rate took effect. Next, how do you perform testing? Again, nothing impossible, but a case of having to find ways around it. And so on…
  • Feature toggle: Here the configuration management solution has some kind of scheduler to ensure that the cut-over to the new values is made at the right point in time. This is conceptually quite similar to the deployment approach, but has the advantage of not colliding with CD. But all the other limitations mentioned above still apply. WxConfig supports this approach.
  • Configuration dimension: The key difference here is that time is explicitly modeled into the configuration itself. That means every query not only has the normal input, usually a key or XPath expression, but also a point in time (if left empty, the current time is used as the default). This gives full flexibility and eliminates the aforementioned limitations. Unfortunately, very few people take this into consideration when they start their project. Adding it later is always possible, but it of course comes with a disproportionate effort compared to doing it right from the start. WxConfig will add support for this soon (interestingly, no one has asked for it so far). A minimal sketch of the idea follows after this list.
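
To make the third option concrete, here is a minimal sketch of a time-dimensioned lookup. It is purely illustrative and not the WxConfig API: each key maps to a timeline of values, and a query returns the value that was effective at the requested instant (defaulting to now). The example uses the German VAT rate change from 16% to 19% in 2007.

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Hypothetical time-dimensioned configuration store (not the WxConfig API):
// each key maps to a timeline of values; a lookup returns the value that
// was effective at the requested point in time.
public class TemporalConfig {

    // key -> (effective-from timestamp -> value)
    private final Map<String, NavigableMap<Instant, String>> store = new HashMap<>();

    public void put(String key, Instant effectiveFrom, String value) {
        store.computeIfAbsent(key, k -> new TreeMap<>()).put(effectiveFrom, value);
    }

    // Value effective at the given instant, or null if none existed yet.
    public String get(String key, Instant at) {
        NavigableMap<Instant, String> timeline = store.get(key);
        if (timeline == null) return null;
        Map.Entry<Instant, String> entry = timeline.floorEntry(at);
        return entry == null ? null : entry.getValue();
    }

    // Convenience overload: an empty point in time means "now".
    public String get(String key) {
        return get(key, Instant.now());
    }

    public static void main(String[] args) {
        TemporalConfig cfg = new TemporalConfig();
        cfg.put("vatRate", Instant.parse("1998-04-01T00:00:00Z"), "16");
        cfg.put("vatRate", Instant.parse("2007-01-01T00:00:00Z"), "19");
        // Re-processing an old invoice queries the rate valid at issue time.
        System.out.println(cfg.get("vatRate", Instant.parse("2006-06-15T00:00:00Z"))); // 16
        System.out.println(cfg.get("vatRate")); // 19
    }
}
```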

That was just a quick overview, but I hope it provided some insights. Comments are very welcome (as always).

webMethods Integration Server: Continuous Deployment

For more than nine years I have been working on a package for webMethods Integration Server. With the experience gained there, I want to discuss a number of aspects about Continuous Deployment.

Versioning

I recommend the use of semantic versioning, which at its core is about the following (for a lot of additional details, just follow the link):

  • The version number consists of three parts: Major, minor, and patch (example v1.4.2).
  • An increase in the major version indicates a non-backwards-compatible change.
  • An increase in the minor version indicates a backwards-compatible change.
  • An increase in the patch version indicates bug fixes only, no functional changes at all.

It is a well-known approach and makes it very easy for everyone to derive the relevant aspects from just looking at the version number. If the release in question contains bug fixes for something you have in use, it is probably a good idea to have a close look and check whether a bug relevant to you was fixed. If it is a minor update and thus contains improvements while being backwards compatible, you may want to start thinking about a good time to make the switch. And if it is a major update that (potentially) breaks things, a deeper look is needed.
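
To make that decision logic tangible, here is a minimal sketch of parsing a semantic version and classifying a new release relative to the one in use. Class and method names are illustrative, not taken from any particular library.

```java
// Illustrative only: parse "vMAJOR.MINOR.PATCH" and classify a new release
// relative to the currently used version, following the rules listed above.
public record SemVer(int major, int minor, int patch) {

    public static SemVer parse(String version) {
        String[] parts = version.replaceFirst("^v", "").split("\\.");
        return new SemVer(Integer.parseInt(parts[0]),
                          Integer.parseInt(parts[1]),
                          Integer.parseInt(parts[2]));
    }

    // How should a user of 'current' react to this (newer) release?
    public String assessAgainst(SemVer current) {
        if (major != current.major()) return "breaking change - deeper look needed";
        if (minor != current.minor()) return "compatible improvements - plan the switch";
        if (patch != current.patch()) return "bug fixes only - check relevance";
        return "same version";
    }

    public static void main(String[] args) {
        SemVer inUse = SemVer.parse("v1.4.2");
        System.out.println(SemVer.parse("v1.4.3").assessAgainst(inUse)); // bug fixes only
        System.out.println(SemVer.parse("v2.0.0").assessAgainst(inUse)); // breaking change
    }
}
```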

Each Integration Server package has two attributes for holding information about its “version”. One is indeed the version itself and the other is the build. The latter is by default populated with an auto-incremented number, which I find not very helpful. Yes, it gives me a unique identifier, but one that does not hold any context. What I put into this field instead is a combination of a date-time stamp and the change set identifier from the Version Control System (VCS). This allows me to see at a glance when a package was built and what it contains.
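
A small sketch of how such a build identifier could be composed; how the change set id is obtained (here just a plain argument) depends on your VCS and CI setup.

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

// Illustrative: build identifier = date-time stamp + VCS change set id.
public class BuildId {
    public static String create(String changeSetId) {
        String stamp = ZonedDateTime.now(ZoneOffset.UTC)
                .format(DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss"));
        return stamp + "_" + changeSetId; // e.g. 20160301-142500_4f3a2b1
    }
}
```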

Build

Conceptually, the work on a single package, as opposed to a set of multiple ones that comprise one application, is a bit different in that you deal with only one artifact that gets released. In many projects I have seen, people take a slightly different approach and treat the entirety of the project as their to-be-released artifact. This approach is supported by how the related tools (Asset Build Environment and Deployer) work: You simply throw in the source code for several packages, create an archive with metadata (esp. dependencies), and deploy it. Of course you could do this on a per-package basis, but it is easier to have just one big project for all of them.

As almost always in life, nothing comes for free. What it practically means is that for every change in only one of the potentially many packages of the application, all of them need a re-deployment. Suppose you have an update in a maintenance module that is somewhat unrelated to normal daily operation. If you deploy everything in one big archive, this will effectively cause an outage for your application. So you have just introduced a massive hindrance for Continuous Deployment. Of course this can be mitigated with blue-green deployments, and you are well advised to have those in place anyway. But in reality few customers are there yet. What I recommend instead is an approach where you “cut” your packages in such a way that each of them performs a clearly defined job. And then you have a discrete CI job for each of them, of course with the dependencies taken into account.

Artifact Storage

Once your build has been created, it must reside somewhere. In a plain webMethods environment this is normally the file system, where the build was performed by Asset Build Environment (ABE). From there it would be picked up by Deployer and moved to the defined target environment(s). While this has the advantage of being a quite simple setup, it also has the downside that you lose the history of your builds. What you should do instead is follow the same approach that has been hugely successful for Maven: use a binary repository like Artifactory, Nexus, or one of the others (a good comparison can be found here). I create a ZIP archive of the ABE result and have Jenkins upload it to Artifactory using the respective plugin.

To have the full history and at the same time a fixed download location, I perform this upload twice. The first upload's URL contains a date-time stamp and the change set identifier from the Version Control System (VCS), exactly like the package's build information. This is used for audit purposes only and gives me the full history of everything that has ever been built. But it is never used for actually performing the deployment. For that purpose I upload the ZIP archive a second time, in this case without any changing parts in the URL. This effectively makes it behave a bit like a permalink and gives me a stable source for the download. And since the packages themselves also contain the date-time stamp and change set identifier, I still know where they came from.
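
A sketch of that double upload, reduced to plain HTTP PUTs against a generic repository layout. The URLs are hypothetical and authentication is omitted; in my setup the upload is actually done by the Jenkins Artifactory plugin.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

// Illustrative double upload: once under a versioned URL (audit trail),
// once under a fixed "permalink" URL (deployment source).
public class ArtifactUpload {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        Path archive = Path.of("build/MyPackage.zip");
        String base = "https://repo.example.com/webmethods/MyPackage"; // hypothetical

        put(client, archive, base + "/20160301-142500_4f3a2b1/MyPackage.zip"); // history
        put(client, archive, base + "/latest/MyPackage.zip");                  // permalink
    }

    private static void put(HttpClient client, Path file, String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .PUT(HttpRequest.BodyPublishers.ofFile(file))
                .build();
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println(url + " -> HTTP " + response.statusCode());
    }
}
```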

Deployment

Depending on your overall IT landscape there are two possible approaches for handling deployments. The recommended way is to use a general-purpose configuration management tool like Chef, Puppet, Ansible, Salt, etc. This should then also be the master of your webMethods deployments. Just point your script to your “permalink” in the binary repository and take it from there. I use Chef and its remote file mechanism. The latter nicely detects whether the archive has changed on Artifactory and only then executes the download and deployment.
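
For readers who do not use Chef: the essence of its remote file mechanism is “download only if changed”. The sketch below mimics that in plain Java; it assumes the repository reports the artifact's SHA-1 in an X-Checksum-Sha1 response header (Artifactory does), so adapt the header name to your repository.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Illustrative "download only if changed": compare the repository's checksum
// header with the hash of the local copy and fetch only on a mismatch.
public class ConditionalDownload {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String url = "https://repo.example.com/webmethods/MyPackage/latest/MyPackage.zip"; // hypothetical
        Path local = Path.of("/opt/deploy/MyPackage.zip");

        // HEAD request: learn the remote checksum without downloading.
        HttpRequest head = HttpRequest.newBuilder(URI.create(url))
                .method("HEAD", HttpRequest.BodyPublishers.noBody()).build();
        String remoteSha1 = client.send(head, HttpResponse.BodyHandlers.discarding())
                .headers().firstValue("X-Checksum-Sha1").orElse("");

        String localSha1 = "";
        if (Files.exists(local)) {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(Files.readAllBytes(local));
            localSha1 = HexFormat.of().formatHex(digest);
        }

        if (!remoteSha1.equalsIgnoreCase(localSha1)) {
            client.send(HttpRequest.newBuilder(URI.create(url)).build(),
                    HttpResponse.BodyHandlers.ofFile(local));
            System.out.println("New build downloaded - deployment can start");
        } else {
            System.out.println("Artifact unchanged - nothing to do");
        }
    }
}
```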

You can also develop your own scripts to do the download etc., and this may appear to be easier at first glance. But there is a reason that configuration management tools like Chef et al. have had such success over the last couple of years, compared to home-grown scripts. In my opinion it simply does not make sense to spend the time to re-invent the wheel here. So you should invest some time (there are many good videos on YouTube about this topic) and figure out which system is best for your needs. And if you still think that you will be faster with your own script, chances are that you have overlooked some requirements. The usual suspects are logging, error handling, security, user management, documentation, etc.

With either approach, deployment becomes a completely local operation, and that has a number of benefits. In particular, you can easily perform any preparatory work, like adjusting the content of files or creating needed directories.

Summary

All in all, this approach has worked extremely well for me. While it was first developed for an “isolated” utility package, it has proven to be even more useful for entire applications composed of multiple packages; in other words, it scales well.

Another big advantage is separation of concerns. It is always clear which activity is done by what component. The CI server performs the checkout from the VCS and orchestrates the build and the upload to the binary repository. The binary repository holds the deployable artifact and also maintains an audit trail of everything that has ever been built. The general-purpose configuration management tool orchestrates the download from the binary repository and the actual deployment.

With this split of the overall process into discrete steps, it is easier to extend and especially to debug. You can “inject” additional logic (think user exits) and especially implement things like blue-green deployments for a zero-downtime architecture. The latter will require some upfront thinking about shared state, but this is a conceptual problem and not specific to Integration Server.

One more word about scalability: If you have a big enough farm of Integration Servers running (and some customers have hundreds of them), the local execution of deployments is also much faster than doing it from a central place.

I hope you find this information useful and would love to get your thoughts on it.

Offshoring and Knowledge Depletion

I have recently watched a very interesting presentation by Gunter Dueck (former CTO of IBM in Germany), where he talks about the future of work. When looking at how many consulting organizations structure their business, you can see a trend that the seemingly high-level parts, usually the interaction with the customer and where business domain knowledge is required, stay in the home country. And those aspects that can, allegedly, be performed by less qualified people are moved to countries with lower salaries.

This may seem attractive from a cost-management point of view for a while. And it certainly works right now – to the extent possible. Because the organizations still have people locally who had been “allowed” to start with relatively simple coding tasks and grow into the more complex areas of the business. But what happens when those folks retire or leave? How do you get people to the level of qualification that allows them to understand the customer's business on the one hand, and at the same time have an intelligent conversation with the offshore folks who do the coding? Because for the latter you must have done extensive coding yourself.

Trying to Help

Quite recently I came across a number of texts (e.g. “Is Giving the Secret to Getting Ahead?”), which suggest that trying to be helpful to others is not only altruistic, but also helps the giving person. This is certainly true for me, so let me tell you my story.

My employer is running a number of internal mailing lists that cover all our products and are used by folks to discuss all sorts of (mainly technical) stuff. I have been subscribed to many of those lists for close to ten years now and they helped me to learn an awful lot about our products and their real-world application.

Most of this information cannot be found in official documentation or training, simply because it is relevant only in a very specific context. What I realized, though, is that it is exactly this context-specificity which helped me understand the products better. Because if you discuss, along with your own use-case, the border conditions, you develop a “feeling” for how the software works internally. And this, in turn, allows you to analyze totally new use-cases.

So on average I spend about 30-60 minutes per day on those mailing lists (you typically get 150-200 mails per day in total). Some of them I just scan through rather quickly. But others I follow much more closely. And of course after some time I had identified various folks that stood out in terms of amount and quality of their contributions. Many of them I have never met personally in all those years because they live on another continent. But still we have a relationship.

After some time, when I had learned enough from following the lists as well as my actual work with some of the tools, I started answering questions myself. Soon enough people knew my areas of expertise and also added my personal email address next to the mailing lists when sending their questions, to make sure I would take a look.

So apart from my ego, how did helping others on the mailing lists help me? The obvious thing is that I am known to be an expert on various topics (e.g. ALM, performance, HA) for our products. But more importantly, when I have a question, people are willing to spend time and return the favor. Because, on the other hand, we all know the lone wolf character who only asks, but never helps others. And guess how many replies they usually get …

The same approach also works on public mailing lists, by the way. In the mid-1990s I had much the same experience in the Novell NetWare group of the German FidoNet (basically a proprietary version of UseNet). I was a university student then and also had my own small company. Following the group was critical to building up my NetWare expertise, which I then used to charge a nice (at the time) hourly rate.

In conclusion, I cannot overstate the role that helping others has played in my professional success. Plus, it feels really good :-).

Configuration Management – Part 6: The Secrets

Every non-trivial application needs to deal with configuration data that require special protection. In most cases they are passwords or something similar. Putting those items into configuration files in the clear is a pretty bad idea, especially because these configuration files are almost always stored in a VCS (version control system) that many people have access to.

Some systems automatically replace clear-text passwords found in files with an encrypted version. (You may have seen cryptic values starting with something like {AES} in the past.) But apart from the conceptual issue that parts of the files are changed outside the developer's control, this is also not exactly an easy thing to implement. How do you tell the system which values to encrypt? What about the time periods during which passwords exist in clear text on disk, especially on production systems?

My approach was to leverage the built-in password manager facility of webMethods Integration Server instead. This is an encrypted data store that can be secured on multiple levels, up to HSMs (hardware security modules). You can look at it as an associative array (in Java usually referred to as a map) where a handle is used to retrieve the actual secret value. With a special syntax (e.g. secretValue=[[encrypted:handleToSecretValue]]) you declare the encrypted value. Once you have done that, this “pointer” will of course return no value, because you still need to actually define it in the password manager. This can be done via the web UI, a service, or by importing a flat file. The flat file import, by the way, works really well with general-purpose configuration management systems like Chef, Puppet, Ansible, etc.
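
To illustrate the syntax, here is a minimal sketch of how such a placeholder can be resolved. The real implementation delegates to Integration Server's password manager; a plain map stands in for the encrypted store here.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative resolution of the [[encrypted:handle]] syntax; a plain map
// stands in for Integration Server's encrypted password manager.
public class SecretResolver {
    private static final Pattern HANDLE = Pattern.compile("\\[\\[encrypted:([^\\]]+)]]");

    private final Map<String, String> passwordManager;

    public SecretResolver(Map<String, String> passwordManager) {
        this.passwordManager = passwordManager;
    }

    public String resolve(String configValue) {
        Matcher m = HANDLE.matcher(configValue);
        if (!m.matches()) return configValue;   // ordinary value, return as-is
        return passwordManager.get(m.group(1)); // null until defined in the store
    }

    public static void main(String[] args) {
        SecretResolver resolver = new SecretResolver(Map.of("handleToSecretValue", "s3cr3t"));
        System.out.println(resolver.resolve("[[encrypted:handleToSecretValue]]")); // s3cr3t
        System.out.println(resolver.resolve("plainValue"));                        // plainValue
    }
}
```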

A nice side-effect of storing the actual value outside the regular configuration file is that within your configuration files you do not need to bother with the various environments (add that aspect to the complexity when looking at the in-file encryption from the second paragraph). Because the part that is environment-specific is the actual value; the handle can, and in fact should, be the same across all environments. And since you define the specific value directly within each system, you are already done.

Too Qualified to Code

There is an “interesting” perception of software development in that quite a few people think it is an activity suitable for graduates but not for highly qualified professionals. I am in violent disagreement with this. But since it is a widespread belief, just dismissing it as dumb does not help very much. So how is it that so many people, who have a somewhat limited knowledge of the subject, make such a judgement and act according to it?

From a psychological point of view there seems to be a mechanism in place that either makes those people believe they actually understand the subject well enough to make an informed decision, or keeps them from seeing it as important enough to invest more time in the whole process, so they just go for the easy way. Or something in the middle? I will lay out some thoughts I have had on the topic over the last couple of years and hope they make sense to you.

If you look at how most companies handle careers, you will soon realize that many talk about career paths for managers and professionals. And they are usually eager to say that both are valued equally in their organization. Well, I have yet to see a place where this statement, typically coming from HR, survives the reality check. If the entire organization is run such that managers from day one experience that the “techies” (sometimes lovingly referred to as code monkeys) are just there to serve management without questioning its wisdom, guess what happens.

Most professionals will simply do as they are told and not start a discussion about why something is possibly not as easy or brilliant as it looks from the business side. Managers, and that is absolutely the techies' fault, will then start to believe that they know enough about technology to decide on details. And the next time, when they do so and the techie just nods and goes away to deliver, they expect a good result. If the result is not so good, though, because many details were not taken into account, it is indeed the techie's fault for not having brought them up. And voilà, there you have the vicious circle.

I must say that I have been fortunate enough so far not to run into that situation. But that not only requires experience and “standing” in the organization, it also depends on your financial independence. I live in Germany, and we have laws against my employer just firing me because I am not obedient enough but quite a nuisance instead. So in that respect I am just lucky. But your standing with management is something you can and should control. If you are too quiet and never give them a chance to learn how thoughtful and interested in the company's progress you are, how should they know?

What has worked nicely for me is not trying to be someone else, but finding opportunities where I was on my home turf and still talking about something they were interested in. And in a nutshell, the message I got across was: I understand and want to support what you need to achieve; and for the technical details you can rely on my experience and let me decide on the nitty-gritty stuff, which you are not interested in anyway.

And with this you are not talking about coding any more but supporting the business. Yes, actually performing this support means coding. But once people see that they are much better off with you doing this, rather than someone less experienced, they will happily accept it.

So how can you prove that your experience is relevant to the business? By keeping your promises and delivering on time, on quality, and on budget. Because you have gone through the learning curve and know all the potential pitfalls that can occur for a given task. By taking care of them upfront, so that they do not surprise anyone, you keep things under control. Just a few weeks ago one of my projects went live, and my counterpart from the business side was thrilled that so far only one bug had been discovered, and it was fixed immediately.

Fixing things quickly and effectively is, by the way, one of the most powerful tools in your arsenal. Nobody expects bug-free software, especially if it is custom development. But what people want, and rightly so, is that you can fix things fast. So invest time and have a system in place that allows you to ship fixes in no time and without interrupting normal operations. If you need to tell the business that a two-hour downtime is required for you to bring a fix into production, you are really not worth your salary.

That’s it for now. I know that I have barely scratched the surface of a very diverse topic. But everything needs to start somewhere …

Configuration Management – Part 5: The External World

A standard requirement in configuration management is using values that already exist somewhere else. The most obvious places are environment variables (defined by the operating system) and Java system properties. Others are existing (!) databases, e.g. from an ERP system, or files.

The reason for referencing values directly from their original sources, instead of duplicating them in a copy-paste fashion, is the Don't-Repeat-Yourself (DRY) principle. Although the latter is typically discussed in the context of code, it applies at least as much to configuration. We all know those “great” applications which require manual updates of the same value in different places. And if you miss only one, all hell breaks loose.

For re-use of values within a file, the standard approach is variable interpolation. A well-known syntax for this comes from the build tool Apache Ant, where ${variableName} can be placed anywhere in a value assignment. Apache Commons Configuration, and therefore also WxConfig, supports this syntax. In WxConfig you can even reference values from other files that belong to the same Integration Server package.
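
As a small example with Apache Commons Configuration 2 (the library underneath WxConfig); the file name and keys are made up:

```java
import java.io.File;
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.fluent.Configurations;

// Assume a file app.properties (name made up) containing:
//   baseDir  = /opt/app
//   tmpDir   = ${baseDir}/tmp
//   javaHome = ${sys:java.home}    <- Java system property lookup
public class InterpolationDemo {
    public static void main(String[] args) throws Exception {
        Configurations configs = new Configurations();
        PropertiesConfiguration config = configs.properties(new File("app.properties"));
        // ${...} references are resolved transparently at read time.
        System.out.println(config.getString("tmpDir"));   // /opt/app/tmp
        System.out.println(config.getString("javaHome")); // e.g. /usr/lib/jvm/...
    }
}
```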

But what to do for other sources? Well, for the aforementioned environment variables and Java system properties, the respective interpolators from Apache Commons Configuration can also be used in WxConfig. But the latter also defines several interpolators of its own.

  • Cross package: Assuming you have an Integration Server package that holds some general values (often referred to as “global”), those can be referenced. (There are multiple ways to deal with truly global values and the specifics really determine which one is used best.)
  • Current date/time: Gets the current date and/or time, which is admittedly a bit of an edge-case. One could argue that this is not really a configuration value, and that is correct. However, there are scenarios where the ability to quite easily obtain such a value comes in very handy. Think about files that get created during the processing of data. Instead of manually concatenating the path, the base filename, the date-time stamp, and the file suffix in the code, you could just have something like this in your configuration file: workerFile=${tmpDir}/appFile_${date:yyyyMMdd-HHmmssSSS}.dat Doesn't that make things a lot cleaner to read? (A sketch of such a custom interpolator follows after this list.)
  • File content: Similar to date/time this was added primarily for convenience and clarity reasons. The typical use-case are templates, where a file (possibly maintained by another application) is just read directly into a configuration value.
  • Code invocation: When all else fails, you can have an arbitrary piece of logic be executed and the result be placed into the configuration value.
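
As referenced in the date/time item above, here is a minimal sketch of how such an interpolator can be registered, using Commons Configuration 2's public Lookup mechanism. WxConfig's own interpolators are internal; this only demonstrates the principle.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import org.apache.commons.configuration2.PropertiesConfiguration;

// Illustrative custom "date" interpolator via Commons Configuration 2.
public class DateLookupDemo {
    public static void main(String[] args) {
        PropertiesConfiguration config = new PropertiesConfiguration();
        config.getInterpolator().registerLookup("date",
                pattern -> LocalDateTime.now().format(DateTimeFormatter.ofPattern(pattern)));

        config.setProperty("tmpDir", "/tmp");
        config.setProperty("workerFile", "${tmpDir}/appFile_${date:yyyyMMdd-HHmmssSSS}.dat");
        // Resolved at read time, so every query reflects the current clock.
        System.out.println(config.getString("workerFile"));
    }
}
```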

It is worth noting that all interpolators resolve at invocation time, so you always get current results. In the case of the file content interpolator, this has the downside of file I/O; but if that really becomes a bottleneck, you are still not worse off than when doing the read somewhere else in the code. And the file system caching also still helps …

With this set of options, pretty much all requirements for a redundancy-free configuration management can be met.

Mac OS X Mavericks: Mouse Right-Click Stopped Working

Not sure how many people still use Mac OS X Mavericks, but anyway. I just had the issue that the right-click had stopped working, on the mouse as well as the trackpad. There are many posts on the subject, so it seems to be a not too uncommon problem. The one that helped me suggested simply turning the mouse off and on again. Strangely enough, this fixed it for both mouse and trackpad. Thanks!

Martin Kleppmann – Conflict Resolution for Eventual Consistency

Here is a rather interesting video from Martin Kleppmann, in which he talks about dealing with concurrent changes to data. While the title may sound theoretical to some, it is a topic that probably every developer has come across. And here is also the link to the paper with the algorithm presented. If you are interested in an implementation, check out this GitHub project.