Here is another interesting presentation from Martin given at the Code Mesh Conference 2015 in London.
Quite recently I heard a statement similar to
“The application works, so there is no need to consider changing the architecture.”
I was a bit surprised and must admit that in this situation I had no proper response for someone who obviously had a view so different from everything I believe in. But when you think about it, there are obviously a number of reasons why this statement was a bit premature. Let’s have a look at this in more detail.
There are several assumptions and implicit connotations, which in our case did not hold true. The very first is that the application actually works, and at the time that was not entirely clear. We had just gone through a rather bumpy go-live, and not a single work item had yet been processed by the system from start to finish, let alone all the edge cases covered. (We had done a sanity test with a limited set of data, but that had been executed by folks who had been on the project for a long time, not by real end users.) So with all the issues that had surfaced during the project, nobody really knew how well the application would work in the real world.
The second assumption is that the chosen architecture is a good fit for the requirements. From a communication theory point of view this actually means “a good fit for what I understand the requirements to be”. So you could turn the statement in question around and ask: “You have not learned anything new about the requirements since you started the implementation?” Because that is what it really means: I never look back and challenge my own thoughts or decisions. Rather dumb, isn’t it?
Interestingly, the statement was made in the context of a discussion about additional requirements. So there is a new situation, and of course I should re-evaluate my options. It might indeed be tempting to just continue “the old way” until you really hit a wall. But if that happens, you have consciously increased the sunk costs. And even if you can “avoid the wall”, there is still a chance that a fresh look at things could have fostered a better result. So apart from the saved effort (and that is only the analysis, not a code change yet) you can only lose.
The next reason is difficulties with the original approach, and of those there had been plenty in our case. Of course people are happy that things finally sort-of work. But the more difficulties there have been along the way, the bigger the risk that the current implementation is either fragile or still has some hidden issues.
And last but not least there are new tools that have become available in the meantime. Whether they have an architectural impact obviously depends on the specific circumstances. And it is a fine line, because there is always the temptation to go for the new, cool thing. But does it provide enough added value to accept the risks that come with such a switch? Moving from a relational database to a graph-based one is an example that lends itself quite well to this discussion. When your use-case is about “objects” and their relationships with one another (social networks are the standard example here), the change away from a relational database is probably a serious option; see the small sketch below. If you deal with financial transactions, things look a bit different.
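To make this a bit more concrete, here is a small and entirely hypothetical sketch (schema, labels, and property names are made up for illustration): asking “who are the friends of my friends?” requires one additional self-join per level of depth in SQL, while a graph query expresses the traversal directly.

```java
// Hypothetical relational schema: PERSON(id, name), FRIENDSHIP(person_id, friend_id).
// Friends of friends in SQL: every additional level of depth means another JOIN.
String sql =
    "SELECT DISTINCT p.name " +
    "FROM friendship f1 " +
    "JOIN friendship f2 ON f1.friend_id = f2.person_id " +
    "JOIN person p      ON f2.friend_id = p.id " +
    "WHERE f1.person_id = ?";

// The same question in Cypher (Neo4j's query language): the traversal depth
// is part of the pattern, not of the query's structure.
String cypher =
    "MATCH (p:Person {id: $id})-[:FRIEND*2]->(fof:Person) " +
    "RETURN DISTINCT fof.name";
```

For a use-case dominated by such traversals, the second form scales much more gracefully, in readability and usually also in performance. For flat, transactional data the relational model keeps the upper hand.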
So in a nutshell here are the situations when you should explicitly re-evaluate your application’s architecture:
- Improved understanding of the original requirements (e.g. after the first release has gone live)
- New requirements
- Difficulties faced with the initial approach
- New alternatives available
So even if you are not such a big fan of refactoring in the context of architecture, I hope I could show you some reasons why it is usually the way to go.
A good talk on EDA (event-driven architecture).
Here is a rather interesting video from Martin Kleppmann where he talks about dealing with concurrent changes to data. While the title may sound theoretical to some, it is a topic that probably every developer has come across. And here is also the link to the paper presenting the algorithm. If you are interested in an implementation, check out this GitHub project.
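To give a flavor of the problem space (this is not the algorithm from the paper or the linked project, just the classic textbook example of a conflict-free replicated data type): a grow-only counter lets every replica increment only its own slot, so states that were modified concurrently can always be merged without conflicts.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal G-Counter CRDT sketch: each replica increments only its own entry,
// and merging two states takes the per-replica maximum. Merge is commutative,
// associative, and idempotent, so replicas converge regardless of the order
// in which they exchange their states.
public class GCounter {

    private final String replicaId;
    private final Map<String, Long> slots = new HashMap<>();

    public GCounter(String replicaId) {
        this.replicaId = replicaId;
    }

    public void increment() {
        slots.merge(replicaId, 1L, Long::sum);
    }

    // The counter's value is the sum over all replicas' slots.
    public long value() {
        return slots.values().stream().mapToLong(Long::longValue).sum();
    }

    public void merge(GCounter other) {
        other.slots.forEach((id, n) -> slots.merge(id, n, Math::max));
    }
}
```

Two replicas can count independently while disconnected and later merge in either order; both end up with the same value. The data types discussed in the talk are considerably more sophisticated, but they build on the same convergence idea.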
In the closing statement of my post Architects Should Code I said that for me code and architecture are just two ways to look at the same thing. It seems that I am not alone in that perception 🙂 and I can very much recommend the video linked below. I found its start a bit boring, but in hindsight I am very happy that I did not switch away.
There is a widespread notion that developers at some point in their career evolve into something “better”, called an architect. This term has the connotation of mastery of the subject, which I think is ok. What is not ok for me is that in many cases there is the expectation that an architect’s time is too valuable for something as mundane as coding. Instead, architects should develop the overall architecture of the solution.
Ok, so what is the architecture? Most people believe that some documentation (often a mixture of prose and high-level pictures) is the architecture. I would argue that this is not the architecture but a more or less accurate abstraction of it. When enough work has gone into things, it may even become the to-be architecture (but certainly not the as-is one). However, outside of regulated industries I have never seen a document that was thorough enough. A friend of mine used to work on embedded systems for car brakes, where lives are at stake; and he told me some interesting stories about the testing and documentation efforts these guys undertake.
In my view the architecture is, by definition, in the code itself. Everything else is, I repeat myself, merely an abstraction that has some relationship with it. So how can an architect not be coding? You could argue that instead of doing the work him- or herself, mentoring and guiding less experienced developers is a good use of the architect’s time. For me that works fine up to a certain level of complexity. But if we talk about an STP (straight-through processing) solution, is that really something you want to give to a mid-level developer? (It would probably be an ideal piece of work for pair-programming, though.)
I certainly do not want to demean people who call themselves architects. It is a not-so-common capability to look at an IT landscape and see the big picture; many people get lost in the details instead. So we definitely need this perspective! But it is a different kind of architecture, the so-called Enterprise Architecture (EA). I know folks who started as (really good) developers and are now very successful at Enterprise Architecture.
So, in closing, my core point is that the architecture of a solution and its code are basically two sides of the same coin. Everybody involved on the technical level should understand both aspects. And if the level of detail varies, depending on the official role, that is probably ok.
Here is yet another interesting video. The title is badly chosen, though, as the content is not really about the future but the history of programming. But then again, you need to understand the past if you want to avoid repeating its failures.
I recently started a new hobby project (it is still in stealth mode, so no details yet) and went through the exercise of really carefully thinking about what technology to use for it. On a very high level the requirements are fairly standard: Web UI, persistence layer, API focus, cross-platform, cloud-ready, continuous delivery, test automation, logging, user and role management, and all the other things.
Initially I was wondering about the programming language, but quickly settled on Java. I have reasonable experience with other languages, but Java is definitely where most of my knowledge lies these days. So much for the easy part, because the next question proved to be “slightly” more difficult to answer.
Looking at my requirements it was obvious that developing everything from the ground up would be nonsense. The world does not need yet another persistence framework, and I would not see any tangible result for years to come, thus losing interest too soon. So I started looking around and first went to Spring. There is a plethora of tutorials out there, and they show impressive results really quickly. Java EE was not really on my radar then, probably because I still hear some former colleagues complain about J2EE 1.4 in the back of my mind. More importantly, though, my concern was more with agility (Spring) than with standards (Java EE). My perception of too many Java standards is that they never outgrow infancy, simply because they lack adoption in the real world. Spring, on the other hand, was created to solve real-world problems in the first place.
But then, when answering a colleague’s question about something totally different, I made the following statement:
I tend to avoid convenience layers, unless I am 100% certain that they can cope with all future requirements.
All too often I have seen first quick results being paid for later, when the framework proved not to be flexible enough (I call this the 4GL trap). So this gave me pause, and I more or less went back to the drawing board: what are the driving questions for technology selection?
- Requirements: At the beginning of any non-trivial software project the requirements are never understood in detail. So unless your project falls into a specific category for which there is a proven standard set of technologies, you must keep your options open.
- Future proof: This is a bit like crystal ball gazing, but you can limit the risks. The chances are bigger that a tier-3 Apache project dies than that an established (!) Java standard disappears. And of course this means that any somewhat new and fancy piece must undergo extreme scrutiny before you select it; and you better have a migration strategy, just in case.
- Body of knowledge: Sooner or later you will need help, because the documentation (you had checked what is available, right?) does not cover it. Having a wealth of information available, typically by means of your favorite search engine, will make all the difference. Of course proper commercial support from a vendor is also critical for non-hobby projects.
- Environment: Related to the last aspect is what the “landscape” surrounding your project looks like. This entails technology, but even more importantly the organization that has evolved around that technology. The synergies from staying with what is established will often outweigh the benefits that something new might have when looked at in isolation.
On a strategic level these are the critical questions in my opinion. Yes, there are quite a few others, but they are more concerned with specifics.
Every once in a while someone rolls their eyes when I, again, insist on a well-chosen name for a piece of software or an architectural component. And the same also goes for the text of log messages, by the way; but let’s stick with the software example for now.
Well, my experience with many customers has been the following, which is why I think names are important: As soon as the name “has left your mouth”, the customer will immediately and sub-consciously create an association in his mind of what is behind it. This only takes a second or two, so it is finished before I even start to explain what the piece of software does.
Assuming that my name was chosen poorly, and hence his idea of the software’s purpose is wrong, he will then desperately try to match my explanation with his mental picture. Obviously this will not be successful, and after some time (hopefully just a few minutes) he will interrupt me and say that he doesn’t understand, and shouldn’t the software actually be doing this and that?
It makes the conversation longer than necessary and, more importantly, creates some friction; the latter is hopefully not too big, but especially at the beginning of a project, when there is no good personal relationship yet, it is something you want to avoid. Also, think about all the people who just read the name in a document or presentation and don’t have a chance to talk with you. They will run around and spread the (wrong) word. I have been on several projects where bad names created some really big problems for the aforementioned reasons.
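A made-up example of the effect (both class names and the scenario are hypothetical): imagine presenting a component under each of these two names and consider what picture forms in the listener’s head before you say a single word of explanation.

```java
// The name suggests a vague, general-purpose component; every listener
// will fill the gap with their own (most likely wrong) assumption.
class DataHandler {
    // ...
}

// The name alone already tells the story: duplicate invoices are filtered
// out before they reach the downstream system. No mental picture to correct.
class InvoiceDuplicateFilter {
    // ...
}
```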
I can honestly say that when I wrote my post about ALM and middleware, I hadn’t heard about the Open Services for Lifecycle Collaboration initiative. But it is exactly the kind of thing I had in mind. These guys are working on the definition of a minimal (but expandable) set of features and functions that allow easy integration between the various tools that can usually be found in an organization. To my knowledge no products exist yet, but I really like the idea and approach.