I was responsible for IT-Architectures and IT-Strategy between 1984 and 1997. From 1997 until now I have reviewed many Architectures and Strategies, as part of Meta Group (now Gartner).
An IT-Architecture is a System that describes the components of a Software System on an Abstract Level.
An IT-Strategy is a Process that contains Stages. In every Stage a new version of the IT-Architecture is implemented.
A Well-Formed IT-Strategy is able to Adapt the Old Version of the Architecture. When your Strategy has failed You have to Start All over Again.
There are two types of Systems. The first type contains Systems. The second type I call a Foundation. It is the level where we think “the Real Things are Happening”. The major problem is to define the Level of the Foundation.
If you look at your Own computer the Foundation lies deeper than you think. It depends on “What You Understand About a Computer“.
We could define the Foundation as the Operating System of Your Computer (most likely a Microsoft Operating System) but below this Foundation other Foundations are in Existence.
At a more abstract level you can see the real Problem. The problem is Containing.
If you use the Containing Metaphor you Never Stop but You Have to Stop Somewhere.
The level where you Stop is the level where you give the responsibility to An-Other. This can be an Organization, A Person or when we dig deep enough even Nature.
When you give the responsibility for something to an-other you have to Trust the Other and the Other has to take Care of You.
The reason why Architectures fail is that they are based on a Foundation that is not stable in the long term.
Somebody starts to tinker with the Foundation and suddenly everything goes wrong. This happens all the time. The others leave you alone and you have to take care of yourself.
A solution was to create a Foundation that was able to withstand every Change at the Lower level.
This layer was called Middleware. It is situated somewhere between the UP and the DOWN of all the Layers. History has proven that this solution is not helpful.
Everything changes all the time.
I want to give you a Model to understand the complexity of the problem. I use a Horizontal and a Vertical axis, a Matrix. Layered architectures can be mapped onto a Matrix, a Cube or a higher-dimensional Structure, an N-Dimensional Space.
Every component can be described by a point that is defined by N variables. The links between the components are lines connecting the points.
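To make this concrete, here is a minimal sketch in Python of that idea: components as points whose coordinates are their N variables, links as lines between the points, and a projection onto a single dimension. All names and values are invented for illustration, not taken from any real tool.

```python
# A minimal sketch of the Matrix idea: every component is a point whose
# coordinates are its N descriptive variables, every link is a line
# between two points. All names here are purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    coordinates: dict        # dimension -> value, e.g. {"type": "database"}

@dataclass
class Architecture:
    components: dict = field(default_factory=dict)   # name -> Component
    links: set = field(default_factory=set)          # {(name_a, name_b), ...}

    def add(self, name, **coordinates):
        self.components[name] = Component(name, coordinates)

    def connect(self, a, b):
        self.links.add((a, b))

    def project(self, dimension):
        """Project the N-dimensional space onto one dimension."""
        view = {}
        for c in self.components.values():
            view.setdefault(c.coordinates.get(dimension), []).append(c.name)
        return view

arch = Architecture()
arch.add("OrderForm", type="form", layer="presentation")
arch.add("OrderService", type="software", layer="middleware")
arch.add("OrderTable", type="database", layer="foundation")
arch.connect("OrderForm", "OrderService")
arch.connect("OrderService", "OrderTable")
print(arch.project("type"))
# {'form': ['OrderForm'], 'software': ['OrderService'], 'database': ['OrderTable']}
```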
The first thing we could do is use one dimension to describe the “type” of the component (software, hardware, database, form, etc). If we create a picture of this dimension we have created a System Diagram. Many types of System Diagrams have been invented. They were called Methods or Modeling Languages. Every new Method created a “Method War” because the Users always thought that their Method was the Best.
I participated in many activities of the IFIP (International Federation for Information Processing). We tried to find the “Best Method” to Improve the Practice. It was proven that there was no best method.
Many roads lead to Rome. See “Information Systems Design Methodologies: Improving The Practice“, T.W. Olle, H.G. Sol and A.A. Verrijn-Stuart, (Eds.), North-Holland.
In the end the major Method Wars ended with a Compromise, UML. Compromises are always the worst solution to a problem. UML is a very complicated method.
If we start the Diagram with an Event and we have chosen the right Modeling Language, now called a Programming Language, we are able to “simulate” the System or to “generate” the software. Many Tools, Modelers, Simulators, Languages and Generators have been developed.
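As a toy illustration of “simulating” such a Diagram, here is a small, made-up event-driven model in Python. The states and events are invented and only stand in for whatever Modeling Language you would actually use.

```python
# A minimal sketch of simulating a model: an invented order process
# described as states and event-driven transitions. Feeding it a
# sequence of events "walks" the diagram.
TRANSITIONS = {
    ("waiting",    "order_received"): "processing",
    ("processing", "payment_ok"):     "shipping",
    ("processing", "payment_failed"): "waiting",
    ("shipping",   "delivered"):      "done",
}

def simulate(events, state="waiting"):
    for event in events:
        state = TRANSITIONS.get((state, event), state)  # ignore events that do not apply
        print(f"{event:15s} -> {state}")
    return state

simulate(["order_received", "payment_ok", "delivered"])
# order_received  -> processing
# payment_ok      -> shipping
# delivered       -> done
```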
They also created wars (called Competition) between all kinds of Vendors. In the end many Tools were taken out of the market by the Vendors and the Users got stuck. They were unable to convert the old tools to the new tools or they simply did not take the time to do this. This problem is called the Legacy Problem.
The Market invented something to solve this problem called Reverse Engineering. Reverse Engineering proved to be a failure because the semantics, the meaning, of the software was gone.
When you deconstruct a car and show an engineer all the parts, he knows the parts belonged to a car. When you do this with something nobody ever knew existed, the only engineer who is able to reconstruct the original is the engineer who constructed it.
When the software is old the original programmer is gone and nobody is able to understand what he was doing. Sometimes the software contains documentation, a Story the programmer has written about the Meaning of the Software, but programmers never took (and never take) the time to do this.
I want to get back to N-Dimensional Space. I hope you understand that when we use enough dimensions we are able to Model Everything.
We are also able to MAP Everything to Everything. Mapping or Converting could solve many problems.
There were Systems on the market that helped you to MAP one Structure to another Structure. An example was Rochade. I implemented Rochade when I was responsible for IT-Architectures and I know Rochade solved many problems with Legacy Systems.
Rochade used something called a Scanner or Parser. A Parser is a “piece of software” that translates a “piece of software” into another “piece of software”. It stored the data about the software (the meta-data) in a “general” format that could be translated to other formats.
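A toy sketch of the Scanner/Parser idea (this is not Rochade’s actual mechanism or format, only the principle): read source text, keep the meta-data in one general structure, and from there translate it into any other format you need. The sample “legacy” code and names are invented.

```python
# A toy scanner: extract meta-data (which modules exist and what they
# call) from source text into one "general" structure that could later
# be rendered as a diagram, a report, or another language.
import re

def scan(source: str) -> dict:
    meta = {}
    current = None
    for line in source.splitlines():
        if m := re.match(r"\s*def\s+(\w+)", line):       # a definition starts a new module
            current = m.group(1)
            meta[current] = {"calls": []}
        elif current and (calls := re.findall(r"(\w+)\(", line)):
            meta[current]["calls"].extend(calls)         # record every call made inside it
    return meta

legacy = """
def read_order():
    parse_header()
    parse_lines()

def bill_customer():
    read_order()
    send_invoice()
"""
print(scan(legacy))
# {'read_order': {'calls': ['parse_header', 'parse_lines']},
#  'bill_customer': {'calls': ['read_order', 'send_invoice']}}
```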
When you program in a Software Language the code is translated to another Language. This happens many times until the software reaches the Lowest Level, The Processor or the CPU.
The CPU uses a Cycle to process a very simple language that consists of binary numbers. These numbers are either data or operations. The simplest operations are operations on a Set.
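A toy sketch of that Cycle, with an invented four-instruction machine: one memory holds the binary numbers, and the processor repeatedly fetches, decodes and executes them.

```python
# A toy CPU cycle: one memory holds the binary "numbers" (instructions
# and data mixed), and the processor fetches, decodes and executes them
# one after the other. The instruction set is invented.
memory = [
    0b0001_0101,   # LOAD 5   (opcode 1, operand 5)
    0b0010_0011,   # ADD  3   (opcode 2, operand 3)
    0b0011_0000,   # PRINT    (opcode 3)
    0b0000_0000,   # HALT     (opcode 0)
]

accumulator, pc = 0, 0
while True:
    word = memory[pc]                             # fetch
    opcode, operand = word >> 4, word & 0b1111    # decode
    pc += 1
    if opcode == 0:                               # execute: HALT
        break
    elif opcode == 1:                             # LOAD n
        accumulator = operand
    elif opcode == 2:                             # ADD n
        accumulator += operand
    elif opcode == 3:                             # PRINT
        print(accumulator)                        # prints 8
```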
The whole concept of the CPU was invented by John von Neumann and is therefore named the Von Neumann Architecture.
The architecture of von Neumann has a big disadvantage called the Von Neumann bottleneck: instructions and data have to share a single path between memory and processor, so the CPU is continuously forced to wait.
The Von Neumann Computer is Wasting Time and Energy.
An alternative is the Parallel Architecture. Parallel computing has recently become the dominant paradigm in computer architectures. The main reason is the Rise of the Internet.
The Rise of the Internet started the Fall of Centralized IT-Architectures and Centralized IT-Strategy.
At this moment we need another approach to Manage or Control the software-production of a big company.
This approach can be found in the Open Source Movement.
If we use the Matrix Approach we can answer interesting questions.
First I introduce a Rule.
When we increase the number of Dimensions we are able to make every point, and every connection between points, Unique. If we do the opposite we are able to make every point and connection The Same.
When people talk about the Reuse of a Component they are looking for a Dimension where some points and their connections are the Same.
I hope you see that it is possible to Reuse Everything and to Reuse Nothing. The Choice is Yours.
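A small illustration of the Rule in Python, with invented components and dimensions: with all dimensions every point is Unique; drop dimensions and points start to coincide, and that is exactly where Reuse lives.

```python
# With all dimensions every component is Unique; project the same
# components onto fewer dimensions and several of them become the Same.
components = [
    {"name": "CustomerCheck", "type": "validation", "language": "COBOL", "owner": "billing"},
    {"name": "SupplierCheck", "type": "validation", "language": "COBOL", "owner": "purchasing"},
    {"name": "InvoicePrint",  "type": "report",     "language": "COBOL", "owner": "billing"},
]

def project(component, dimensions):
    return tuple(component[d] for d in dimensions)

# All dimensions: three distinct points.
print({project(c, ["name", "type", "language", "owner"]) for c in components})

# Drop enough dimensions and points coincide: both *Check components
# become the same candidate for Reuse (only two points remain).
print({project(c, ["type", "language"]) for c in components})
```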
This is also the Practice in Software. When I discovered that the Year-2000 problem could lead to a disaster I started a research project with the CWI (Centrum Wiskunde & Informatica, the Dutch national research institute for mathematics and computer science). The CWI developed a very intelligent parser that could create Software-Maps.
When we studied the maps we saw that some pieces of software came back all the time. These were “citations”. One programmer invented a new construct and others reused the construct all the time. The major difference with the Theory of Reuse was that MANY Parts were the Same.
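The real CWI parser was far more intelligent, but a toy sketch shows the principle of finding “citations”: take every small window of lines and report the fragments that come back in more than one program. The sample programs below are invented.

```python
# A toy Software-Map: collect every small window of lines per program
# and keep the fragments that appear in more than one place -- the
# "citations" that programmers reused all the time.
from collections import defaultdict

def citations(sources, window=3):
    seen = defaultdict(list)               # fragment -> list of (program, line index)
    for name, text in sources.items():
        lines = [l.strip() for l in text.splitlines() if l.strip()]
        for i in range(len(lines) - window + 1):
            fragment = tuple(lines[i:i + window])
            seen[fragment].append((name, i))
    return {frag: places for frag, places in seen.items() if len(places) > 1}

programs = {
    "prog_a": "MOVE X TO Y\nADD 1 TO Y\nDISPLAY Y\nSTOP RUN",
    "prog_b": "MOVE X TO Y\nADD 1 TO Y\nDISPLAY Y\nGOBACK",
}
for fragment, places in citations(programs).items():
    print(fragment, "appears in", [p for p, _ in places])
# ('MOVE X TO Y', 'ADD 1 TO Y', 'DISPLAY Y') appears in ['prog_a', 'prog_b']
```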
When you dig deep enough you always find “The Same”.
The CPU is handling binary codes, and if you came from another planet you would not understand why all these zeros and ones create a DIVERSITY. They create a DIVERSITY because Something is Interpreting the Sequence. This Something is also a Program. This program uses a Theory about Languages. Most of the time it supposes a Context Free Language, a language where the interpreter always moves in one direction. It processes a List.
The Diversity a Computer produces is based on one long List of binary patterns. If we could analyze all the possible patterns we could find all the possible software-programs that could be built until Eternity. Because the binary codes can be mapped to the Natural Numbers, we need only one dimension (a Line) to classify all the possible software-components in the World.
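A tiny demonstration of that mapping: any program text is one long binary pattern, and any binary pattern is a Natural Number, so every possible program gets its own point on a single Line.

```python
# Map a program to a Natural Number and back: the program text becomes
# one long binary pattern, and that pattern is read as a single number.
def program_to_number(source: str) -> int:
    bits = "".join(f"{byte:08b}" for byte in source.encode("utf-8"))
    return int("1" + bits, 2)       # leading 1 keeps leading zero bits from vanishing

def number_to_program(n: int) -> str:
    bits = bin(n)[3:]               # strip '0b' and the leading 1
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

n = program_to_number('print("hello")')
print(n)                             # one (very large) Natural Number
print(number_to_program(n))          # print("hello") -- recovered from the number
```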
In 1931 Gödel stated the so-called incompleteness theorems. He used the Natural Numbers to prove that a part of our Human Reality cannot be described by a Computer Program.
There is something “left for us” that the Machines cannot take over. This part is related to the Emotions and the Imagination. We cannot Automate them. If we do this we stop Innovation and Commitment.
Now I want to come back to IT-Architectures.
When You start an IT-Architecture Out of Nothing you start with a Small Amount of Dimensions. The world looks very simple to you. When you add detail you have to increase the amount of dimensions. The effect of this is that Everything Changes. New Possibilities arise. If you go on using the Top-Down Approach you will move into a State of Huge Complexity. Always start in the Middle!
At a certain moment You have to move to Reality. This means Programming Software. At that moment You encounter something you never thought of: the Software Legacy!
When you increase the Scope of your System and you leave the Boundaries of Your Company (the World of the Internet) the Complexity also increases. At that moment You encounter something else you never thought of: Open Source. Millions of Possibilities arise and You don’t Know what to do!
Behind the Software are People. Some of them are creating Small Companies and they are doing things your company is also doing, but they do it much cheaper and faster.
What to do?
If You Can’t Beat them Join Them.