Archive for the ‘Computer and Communication Technology’ Category

How to Create the Perfect Need-Machine by Analyzing Personal Activity Patterns

Wednesday, March 26th, 2008

In the approach of Taylor and Ford, employees and customers were treated as programmable machines.  The focus was on a perfect coordination of the senses, the muscles and the production system (the assembly line).  The emotions and the imagination were neglected.

In mass customization, the emotions are involved. In customer innovation, the imagination is imperative. In a demand oriented system all the parts of the human cognitive system have to play a role in a coherent and balanced way.

The human body acts on its environment with messages and action-patterns. The incoming and outgoing messages are observed by the senses and transformed to an internal format. The internal communication system sends the messages to the appropriate place in the body. The emotions are always looking for danger. They want to control the priority of the actions to make it possible for the body to react immediately. The imagination creates an image of the outside world and helps the body to generate scenarios to improve its action-patterns.

The senses are the connection to the physical outside world. They shield the human being from the enormous amount of signals that are trying to enter the body. They filter incoming data and transform the data into a standard internal format. When the senses detect an event, it is evaluated by the emotions. If the event is not important, nothing happens. If the event is unusual, it enters conscious awareness. Events that repeat very often are no longer noticed after some time. An internal program (an action-pattern) automates the handling of the event.

The muscles act in physical space. They acquire an enormous amount of reaction-patterns by repeated practicing. Humans learn from their failures. When the senses detect an event, many appropriate patterns are located and enabled. 

When the patterns enter mental space, they change into models. Complicated patterns are compressed into models. Humans use all kinds of compression techniques to make the world compact and therefore more understandable. Static models (e.g. an organization contains employees) compress the world into wholes (nouns) and parts (attributes). They create identities. Dynamic models (the employee sells a product) compress causal chains (event, actor, result). They make it possible to reason.
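
To make the two kinds of compression a bit more tangible, here is a minimal sketch in Python; the class names and the sale example are invented for illustration and do not come from any particular modeling method.

    from dataclasses import dataclass, field
    from typing import List

    # Static model: wholes (nouns) with parts (attributes) create identities.
    @dataclass
    class Employee:
        name: str

    @dataclass
    class Organization:
        name: str
        employees: List[Employee] = field(default_factory=list)

    # Dynamic model: a causal chain (actor, event, result) supports reasoning.
    @dataclass
    class Sale:
        actor: Employee   # who acted
        event: str        # what happened
        result: str       # what it produced

    org = Organization("Acme", [Employee("An")])
    sale = Sale(org.employees[0], "sells", "a product")
    print(sale.actor.name, sale.event, sale.result)   # An sells a product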

Models behave the same way as sensors do in physical space. They shield the mental space of the human being from the enormous amount of ideas that the imagination is producing.

The emotions act on hostile and friendly forces. They shield the body from physical injuries (avoiding pain) and take care of the self re-production process of the body (looking for food and a sexual partner).

The emotional system determines the amount of resources that is allocated to the evaluation and the search for adequate action-patterns. If an event is dangerous, all resources in the body are used. The body reacts without thinking and uses a biologically inherited, fast pattern (fight, flight, freeze: the primary emotions). If there is enough time to react, the emotional system evaluates its preferences and enables the preferred action-patterns.

If the preferences are related to a long-term perspective, they enter mental space and the human has a choice to make. In the evaluation of long-term preferences, the other plays an important role. People want to take care of the other (family, friends, children), are afraid to get into a conflict (dominance, status) and want to be praised by the other for what they are accomplishing.

Humans imagine (by creating pictures connected with feelings) what events they would like to happen (a wish). When they are pessimistic, they imagine what events they do not want to happen (a fear).  The imagination is the innovative part of the human mental space that generates all kinds of new connections (ideas). The imagination is also the most free to play with new ideas. People can simulate and practice in their imagination without getting into trouble. The imagination produces the idea of the identity.

The imagination uses visual metaphors to create an understandable world. On the lowest level the metaphors are connected to the action-patterns. The image of a cup is connected to picking up the cup, holding the cup and moving the cup. New structures are blended with old familiar structures.

Many metaphors make use of the human understanding of technology.  Freud based his theory of the unconscious on his understanding of the steam engine (“I am steamed up with emotions”). Many theories of the mind are based on the metaphor of the computer. People always relate new phenomena to something they already understand. They sometimes do this (in the eyes of others) in very strange ways.  A skilful teacher knows this and tries to find the bridge (the right metaphor, a story) between his world and the world of the student.

In the human body, all the sub-systems (e.g. the services, the organs) are connected by shared communication-channels. There are fast (the nervous system) and slow reacting shared channels (the endocrine system). All the sub-systems use specific messenger-molecules to communicate their actions and act on incoming messengers.  Messengers materialize with every thought we create and with every emotion we feel. When a messenger enters the boundary of a sub-system, (e.g. a cell) it triggers messengers that are specific for that sub-system.

The action patterns make the muscles move according to a movements-plan that is stored in memory. The movement-plans of the muscles enable people to walk, to work (using tools) and to talk. In this last case, people communicate their intentions. The human communication contains a complicated mix of signals that are related to the emotions (e.g. visual expressions, gestures), the patterns (assertions) and the imagination (visual images, ideas).

People resist change. The patterns they have acquired control their behavior and determine their potential. People do not want to change their patterns dramatically. They want to acquire new patterns (by doing) without noticing the change. Only a major event (a critical moment), mostly with negative impact, can have a radical effect. If this event happens it takes a very long time to recover and get into harmony again. When people have to adjust their patterns too often, they experience stress and in the long run get sick.

If people cannot adjust their patterns, they have to involve the other parts of the cognitive system. When they involve the emotions, they have to set priorities and make a choice. People do not like making choices. They are incapable of evaluating all the possibilities. They can also make use of the senses and look at the real opportunities in the outside world. People are almost incapable of doing this because their imagination produces the images it wants to see. If the imagination really faces the facts, the identity is attacked. It feels powerless and unable to control its path of destiny. The last possibility a human has is to adjust the imagination. He has to realize that the possibilities he imagined were just illusions.

If everything stays the same, people get bored. They hope that an event will occur that relates to their wishes. People are the most satisfied if their environment produces just enough change (a challenge) they can cope with. They want a balance between the will (what they want, the imagination, variation) and their capabilities (what they are able to do, predictability, the patterns, their skills).

In a perfect demand-oriented economy, a supplier has to provide a challenge to the customer. To provide this challenge the supplier has to understand the wishes and the fears (the imagination) of the customer, his behavior (the patterns) and the balance between the two parts. If the customer is out of balance the supplier has to help the customer to acquire new patterns (learning), help him to make a choice (advice) or show him the real opportunities (scenarios), taking care of the customer's identity.

It is very difficult for a supplier to get accurate information. Most people are unable to make their behavioral patterns conscious. When people are asked about their opinion (an aspect of the emotions), they often do not want to offend the other and give polite, expected answers. People only want to share their most secret wishes with people they trust (partner, family, friends). Correct information about the customer can only be acquired by carefully observing and analyzing the activities of the customer (what he is doing).  It is completely impossible for a company to observe the activities of all their customers. The only one who can do this is the customer himself.

Customers could observe their own activities if they were able to gather personal activity-patterns, analyze their behavior, and share their activity-patterns with others to get advice. Most of the needed data is already available somewhere (patient records, buying behavior, payments, etc.) or can be made available by making connections to the tools the consumer is using in his personal and work environment (e-mails, content). The only thing that has to happen is that companies and government agencies make these patterns, which are most of the time privately owned by the customer, available.

It can be envisioned that all personal data is kept in a private space. Only the customer (the owner) can make the data available to others. This approach would prevent many problems in the current situation (e.g. spam).

The last step in a perfect rational demand-oriented system is reached when the personal activity-patterns are automatically transformed into standardized need-messages that are sent out to appropriate providers.
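
As a purely hypothetical sketch of that last step, the fragment below aggregates a few observed activities into a standardized need-message; the field names and the JSON layout are assumptions, not an existing standard.

    import json
    from collections import Counter

    # Illustrative activity records, e.g. taken from buying behavior.
    activities = [
        {"action": "buy", "item": "coffee"},
        {"action": "buy", "item": "coffee"},
        {"action": "buy", "item": "filters"},
    ]

    def to_need_message(owner, activities):
        """Compress repeated activities into a simple need-message."""
        counts = Counter(a["item"] for a in activities if a["action"] == "buy")
        needs = [{"item": item, "frequency": n} for item, n in counts.items()]
        return json.dumps({"owner": owner, "needs": needs})

    # The owner decides whether this message ever leaves his private data space.
    print(to_need_message("customer-001", activities))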

Why Software Layers always create new Software Layers

Wednesday, March 26th, 2008

The IT-Industry has evolved over nearly 50 years. In that timeframe, it became the most influential branch of business. Everybody is completely dependent on the computer and its software.

The IT-Industry has gone through various technology waves. The waves generated integration problems that were solved by the construction of abstraction layers. The layers not only solved problems; they also created new problems that were solved by yet other layers. The effect of all the intertwining layers is an almost incomprehensible and unmanageable software complex.

The main reason behind this development is the architecture of the general-purpose computer. It was developed to control and not to collaborate.

Charles Babbage designed the first computer, the Analytical Engine, in the 1830s. Babbage wanted to automate the calculation of mathematical tables. His engine consisted of four parts called the mill (the Central Processing Unit, the Operating System), the Store (the database), the Reader, and the Printer. The machine was to be steam-driven and run by one attendant. The Reader used punched cards.

Babbage devised a notation for programming his engine, translating symbols into numbers. He worked together with the first programmer, Lady Lovelace. The project stopped because nobody was willing to finance him any longer.

It was not until 1954 that a real (business) market for computers began to emerge, with the introduction of the IBM 650. The machines of the early 1950s were not much more capable than Charles Babbage’s Analytical Engine of the 1830s.

Around 1964 IBM gave birth to the general-purpose computer, the mainframe, in its 360-architecture (360 means all-round). The 360/370-architecture is one of the most durable artifacts of the computer age. It was so successful that it almost created a monopoly for IBM. Just one company, Microsoft, has succeeded in beating IBM, by creating the general-purpose computer for the consumer (the PC). Microsoft copied (parts of) the OS/2 operating system of IBM.

The current technical infrastructure looks a lot like the old-fashioned 360/370-architecture, but the processors are now located in many places. This was made possible by the sharp increase in bandwidth and the network architecture of the Internet.

Programming a computer in machine code is very difficult. To hide the complexity, a higher level of abstraction (a programming language) was created that shielded the complexity of the lower layer (the machine code). A compiler translated the program back to machine code. Three languages (Fortran, Algol and COBOL) were constructed. They covered the major problem areas (Industry, Science and Banking) of that time.

When the problem-domains interfered, companies were confronted with integration problems. IBM tried to unify all the major programming-languages (COBOL, Algol and Fortran) by introducing a new standard language, PL1. This approach failed. Companies did not want to convert all their existing programs to the new standard and programmers got accustomed to a language. They did not want to lose the experience they had acquired.

Integration by standardizing on one language has been tried many times (Java, C#). It will always fail for the same reasons. All the efforts to unify produce the opposite effect: an enormous diversity of languages, a Tower of Babel.

To cope with this problem a new abstraction layer was invented. The processes and data-structures of a company were analyzed and stored in a repository (an abstraction of a database). The program-generator made it possible to generate programs in all the major languages.

It was not possible to re-engineer all the legacy systems to this abstraction level. To solve this problem a compensating integration layer, Enterprise Application Integration (EAI), was designed.

The PC democratized IT. Millions of consumers bought their own PC and started to develop applications using the tools available. They were not able to connect their PCs to the mainframe and to acquire the data they needed from the central databases of the company.

New integration layers (Client-Server Computing and Data-Warehouses) were added.

Employees connected their personal PC to the Internet and found out that they could communicate and share software with friends and colleagues all over the world. To keep out unwanted intruders, companies shielded their private environments by implementing firewalls. Employees were unable to connect their personal environment with their corporate environment.

A new integration problem, security, became visible and has to be solved.

It looks like every solution of an integration problem creates a new integration problem in the future.

The process of creating bridges to connect disconnected layers of software goes on and on. The big problem is that the bridges were not created from a long-term perspective. They were created bottom-up, to solve an urgent problem.

IT-technology shows all the stages of a growing child. At this moment, companies have to manage and to connect many highly intermingled layers related to almost every step in the maturing process of the computer and its software.

Nobody understands the functionality of the whole and can predict the combined behavior of all the different parts. The effort to maintain and change a complex software-infrastructure is increasing exponentially.

The IT industry has changed its tools and infrastructure so often that the software developer had to become an inventor.

He is constantly exploring new technical possibilities, unable to stabilize his craft. When a developer is used to a tool he does not want to replace it with another. Most developers do not get the time to gain experience in the new tools and technologies. They have to work in high-priority projects. Often the skills that are needed to make use of the new developments are hired from outside.

The effect is that the internal developers are focused on maintaining the installed base and fall further behind. In the end, the only solution left is to outsource the IT department, which creates communication problems.

After more than 40 years of software development, the complexity of the current IT environment has become overwhelming. The related management costs are beginning to consume any productivity gain that new technologies may deliver.

It is almost impossible to use new technology because 70 to 90% of the IT budget is spent on keeping existing systems running. If new functionality is developed, only 30% of the projects are successful.

If the complexity to develop software is not reduced, it will take 200 million highly specialized workers to support the billion people and businesses that will be connected via the Internet.

In the manufacturing industry, the principles of generalization and specialization are visible. Collaboration makes it possible to create flexible standards and a general-purpose infrastructure to support the standards.

When the infrastructure is established, competition and specialization starts. Cars use a standardized essential infrastructure that makes it possible to use standardized components from different vendors.

Car vendors are not competing on the level of the essential infrastructure. The big problem is that the IT industry is still fighting on the level of the essential infrastructure, blocking specialization.

To keep their market share, the big vendors make sure that the software stays within the abstraction framework (the general-purpose architecture) they are selling and controlling.

A new collaborative IT-infrastructure is arising. The new infrastructure makes it possible to specialize and simplify programs (now called services). Specialized messages (comparable to the components in the car industry), transported over the Internet, connect the services. This approach makes it much easier to change the connections between the services.

The World Wide Web Consortium (W3C), founded in October 1994, is leading the development of this new collaborative infrastructure. W3C has a commitment to look after the interest of the community instead of business. The influence of W3C is remarkable. The big competitive IT-companies in the market were more or less forced to use the standards created by the consortium. They were unable to create their own interpretation because the standards are produced as open source software.

The basis of the new collaborative foundation is XML (eXtensible Markup Language). XML is a flexible way to create “self-describing data” and to share both the format (the syntax) and the data on the World Wide Web. XML describes the syntax of information.
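
A small illustration of what “self-describing data” means in practice: the tags describe the syntax, the text and attributes carry the data. The element names below are made up; the parsing uses Python’s standard library.

    import xml.etree.ElementTree as ET

    document = """
    <order customer="customer-001">
      <item quantity="2">coffee</item>
      <item quantity="1">filters</item>
    </order>
    """

    root = ET.fromstring(document)
    for item in root.findall("item"):
        # Both the structure (tags, attributes) and the data travel together.
        print(item.text, item.get("quantity"))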

XML has enabled a new general-purpose technology concept, called Web Services. The concept is comparable to the use of containers in intermodal shipping. A container enables the transport of a diversity of goods (data, programs, content) from one point to another point. At the destination, the container can be opened. The receiver can rearrange the goods and send them to another place. He can also put the goods in his warehouse and add value by assembling a new product. When the product is ready it can be sent in a container to other assembly lines or to retailers that sell the product to consumers.

Web-Services facilitate the flow of complex data-structures (services, data, content) through the Internet. Services can rearrange data-structures, add value by combining them with other data-structures, and send the result on to other services.
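
A minimal sketch of that flow, with invented service names and prices: each function plays the role of a service that receives a data-structure, adds value, and passes the result on. Changing the connections between services is just changing the order of the calls.

    def pricing_service(order):
        prices = {"coffee": 4.0, "filters": 2.0}   # assumed price list
        order["total"] = sum(prices[i["item"]] * i["quantity"]
                             for i in order["items"])
        return order

    def invoicing_service(order):
        order["invoice"] = "Invoice for %s: %.2f" % (order["customer"], order["total"])
        return order

    order = {"customer": "customer-001",
             "items": [{"item": "coffee", "quantity": 2},
                       {"item": "filters", "quantity": 1}]}

    # Connect the services by chaining the calls.
    print(invoicing_service(pricing_service(order))["invoice"])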

All kinds of specialized data-structures are defined that are meant to let specialized services act on them.

An example is taxation (XML TC). XML TC (a part of the Oasis standards organization) focuses on the development of a common vocabulary that will allow participants to unambiguously identify the tax related information exchanged within a particular business context. The benefits envisioned will include dramatic reductions in development of jurisdictionally specific applications, interchange standards for software vendors, and tax agencies alike. In addition, tax-paying constituents will benefit from increased services from tax agencies. Service providers will benefit due to more flexible interchange formats and reduced development efforts. Lastly, CRM, payroll, financial and other system developers will enjoy reduced development costs and schedules when integrating their systems with tax reporting and compliance systems.

Web-Services are the next shockwave that is bringing the IT community into a state of fear and attraction. Their promise is lower development cost and a much simpler architecture. Their threat is that the competition will make better use of all the new possibilities.

The same pattern emerges. Their installed base of software slows most companies down. They will react by first creating an isolated software environment, and they will have big problems in the future connecting the old part with the new part.

Web-Services will generate a worldwide marketplace for services. They are now a threat to all the current vendors of big software packages. In essence, these vendors have to rewrite all their legacy software and split it into generic components (most of them will be available for free) and essential services users really want to pay for.

Big software vendors will transform themselves into specialized marketplaces (service portals) where users can find and make use of high-quality services. Other vendors will create advanced routing centers where messages will be translated and sent to the appropriate processor.

It will be difficult for small service providers to get the attention and the trust of companies and consumers that could make use of their services. They will join collaborative networks that are able to promote and secure their business (the Open Source Movement). It is impossible to tell whether they will survive in a still competitive environment where the big giants have enormous power to influence and a lot of money to create new services.

If the big giants succeed, history will repeat itself. The new emerging software-ecology will slowly lose its diversity.

Web-services are an example of the principles of mass-customization and customer innovation. All the software-vendors are restructuring their big chunks of software into components that can be assembled to create a system.

Small competitors and even customers will also create components. In due time the number of possible combinations of components that are able to create the same functionality will surpass the complexity a human (or a collective of human beings) can handle.

LINKS

About the Human Measure

How the Programmer stopped the Dialogue

How to Destroy Your Company by Implementing Packages

About Smart Computing

About Software Quality

About Meta-Models

About Software Maintenance

About Model Driven Software Development

About Programming Conversations and Conversations about Programming

About Mash-Ups

Thursday, December 20th, 2007
Another new hype term is the Mash-up. A Mash-up is a new service that combines functionality or content from existing sources.
 
In the “old” days of programming we called a Mash-up a Program (now a Service) and the parts of the Program Modules. Modules were reused by other Programs. We developed and acquired libraries that contained many useful modules.
 
The programmers of those modules often did not document the software and used many features of the operating system that interfered with other programs. These very old software programs created the Software Legacy Problem.
 
Another interesting issue that has to be resolved is Security. Mash-ups are a heaven for hackers and other very clever criminals.

When I look at the Mash-up I really don’t know how “they” will solve all these Issues.

When everybody is allowed to program and connect everything with everything a Mash-up will certainly turn into a Mess-up. Many years from now a new Software Legacy Problem will become visible.

There is one simple way to solve this problem. Somebody in the Internet Community has to take care of this. It has to be an “Independent Librarian” that controls the libraries and issues a Quality Stamp to the software (and the content) that is free to reuse. I don’t think anybody will do this.

Personally I think the Mashup is a very intelligent trick of big companies like Microsoft, Google and Yahoo to take over the control of software development. In the end they will control all the libraries and everybody has to connect to them. Perhaps we even have to pay to use them or (worse) link to the advertisements they will certainly sell.

To stabilize the software development environment we had to introduce many Management Systems like Testing and Configuration Management to take care of Software Quality.

The difference with today is that the software libraries are not internal libraries. They are situated on the Internet.

It took a very long time to stabilize the software development environment. In the very old days programmers were just “programming along”.

How to Make Sure that Everybody Believes what We are Believing: About Web 3.0

Thursday, December 20th, 2007

This morning I discovered a new term Web 3.0. According to the experts Web 2.0 is about “connecting people” and Web 3.0 is about “connecting systems”.

Web 1.0 is the “good old Internet”. The “good old Internet” was created by the US Department of Defense (ARPA) to prevent people and systems from being disconnected in a state of war with the Russians. Later Tim Berners-Lee and the W3C added a new feature, “hypertext”, connecting documents by reference.

As you see, everybody is all the time talking about connecting something to something. In the first phase we connect “systems”. Later we connect “people”. Now we want to connect “systems” again. We are repeating the goal, but for some reason we never reach it.

In every stage of the development of our IT-technology we are connecting people, software (dynamics) and documents (statics) and reuse the same solutions all over again.

Could it be that the reused solutions are not the “real” solutions? Do we have to look elsewhere? Perhaps we don’t want to look elsewhere because we love to repeat the same failures all over again and again. If everything is perfect we just don’t know what to do!

There is an article about Web 3.0 in Wikipedia.

Two subjects are shaping Web 3.0. The first is the Semantic Web and the other is the use of Artificial Intelligence, Data- & Text Mining to detect interesting Patterns.

The Semantic Web wants to make it possible to Reason with Data on the Internet. It uses Logic to do this. The Semantic Web wants to standardize Meaning.

The Semantic Web uses an “old-fashioned” paradigm about Reasoning and Language. It supposes that human language is context-independent. Its designers have to suppose that Human Language is context-independent, because if they did not believe this they would be unable to define a Computer Language (OWL) at all.

It is widely accepted that the interpretation of Language is dependent on a Situation, a Culture and the Genesis of the Human itself. Every Human is a Unique Creation. A Human is certainly not a Robot.

The effect of a widespread implementation of the Semantic Web will be a World Wide Standardization of Meaning based on the English Language. The Western Way of Thinking will finally become fixed and dominant.

The Semantic Web will increase the use of the Conduit Metaphor. The Conduit Metaphor has infected the English Language on a large scale. The Conduit Metaphor supposes that Humans are disconnected Objects. The disconnected Sender (an Object) is throwing Meaning (Fixed Structures, Objects) at the disconnected Receiver (An Object).

The Conduit Metaphor blocks the development of shared meaning (Dialogue) and Innovation (Flow). The strange effect of Web 3.0 will be a further disconnection. I think you understand now why we have to start all over again and again to connect people, software and content.

Too Many Insights about IT-Architectures and IT-Strategy

Monday, November 19th, 2007

I have been responsible for IT-Architectures and IT-Strategy between 1984 and 1997. From 1997 until now I have reviewed many Architectures and Strategies when I was part of Meta Group (now Gartner).

An IT-Architecture is a System that describes the components of a Software System on an Abstract Level.

An IT-Strategy is a Process that contains Stages. In every Stage a new version of the IT-Architecture is implemented.

A Well-Formed IT-Strategy is able to Adapt the Old Version of the Architecture. When your Strategy has failed You have to Start All over Again.

There are two types of Systems. The first type contains Systems. The second type I call a Foundation. It is the level where we think “the Real Things are Happening”. The major problem is to define the Level of the Foundation.

If you look at your Own computer the Foundation lies deeper than you think. It depends on “What You Understand About a Computer“.

We could define the Foundation as the Operating System of Your Computer (most likely a Microsoft Operating System) but below this Foundation other Foundations are in Existence.

At a more abstract level you can see the real Problem. The problem is Containing.

If you use the Containing Metaphor you Never Stop but You Have to Stop Somewhere.

The level where you Stop is the level where you give the responsibility to An-Other. This can be an Organization, A Person or when we dig deep enough even Nature.

When you give the responsibility for something to an-other you have to Trust the Other and the Other has to take Care of You.

The reason why Architectures fail is that they are based on a Foundation that is not stable in the long term.

Suddenly somebody starts to tinker with the Foundation and suddenly everything goes wrong. This happens all the time. The others leave you alone and you have to take care of yourself.

A solution was to create a Foundation that was able to withstand every Change at the Lower level.

This layer was called Middleware. It is situated somewhere between the UP and the DOWN of all the Layers. History has proven that this solution is not helpful.

Everything changes all the time.

I want to give you a Model to understand the complexity of the problem. I use a Horizontal and a Vertical layer, A Matrix. Layered architectures can be mapped on a Matrix, a Cube or a higher dimensional Structure, N-Dimensional Space.

Every component can be described by a point that is connected to N-variables. The links between the components are lines connecting the points.

The first thing we could do is use one dimension to describe the “type” of the component (software, hardware, database, form, etc). If we create a picture of this dimension we have created a System Diagram. Many types of System Diagrams have been invented. They were called a Method or a Modeling Language. Every new Method created a “Method War” because the Users always thought that their Method was the Best.
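
A small sketch of this matrix view, with made-up component names and dimensions: every component is a point described by a few variables, every link is a line between two points, and projecting on the “type” dimension gives a classic system diagram.

    # Components as points in an N-dimensional space (here N = 3).
    components = {
        "billing-ui":  {"type": "form",     "layer": "client", "vendor": "A"},
        "billing-srv": {"type": "software", "layer": "server", "vendor": "A"},
        "billing-db":  {"type": "database", "layer": "server", "vendor": "B"},
    }
    # Links as lines connecting two points.
    links = [("billing-ui", "billing-srv"), ("billing-srv", "billing-db")]

    # Projection on one dimension ("type") is a simple system diagram.
    for name, point in components.items():
        print(point["type"], "->", name)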

I participated in many activities of the IFIP (International Federation for Information Processing). We tried to find a way to find the “Best Method” to Improve the Practice. It was proven that there was no best method.

Many roads lead to Rome. See “Information Systems Design Methodologies: Improving The Practice“, T.W. Olle, H.G. Sol and A.A. Verrijn-Stuart, (Eds.), North-Holland.

At the end the major Method Wars ended with a Compromise, UML. Compromises are always the worst solution for a problem. UML is a very complicated method.

If we start the Diagram with an Event and we have chosen the right Modeling Language, now called a Programming Language, we are able to “simulate” the System or to “generate” the software. There are many Tools, Modelers, Simulators, Languages and Generators developed.

They also created wars (called Competition) between all kinds of Vendors. In the end many Tools were taken out of the market by the Vendors and the Users got stuck. They were unable to convert the old tools to the new tools or they simply did not take the time to do this. This problem is called the Legacy Problem.

The Market invented something to solve this problem called Reverse Engineering. Reverse Engineering proved to be a failure because the semantics, the meaning, of the software was gone.

When you deconstruct a car and you show an engineer all the parts, he knows the parts belonged to a car. When you do this with something nobody ever knew existed, the only engineer that is capable of reconstructing the original is the engineer who constructed it.

When the software is old the original programmer is gone and nobody is able to understand what he was doing. Sometimes the software contains documentation: the programmer has written a Story about the Meaning of the Software. But programmers never took (and never take) the time to do this.

I want to get back to N-Dimensional Space. I hope you understand that when we use enough dimensions we are able to Model Everything.

We are also able to MAP Everything to Everything. Mapping or Converting could solve many problems.

There were Systems on the market that helped you to MAP one Structure to another Structure. An example was Rochade. I implemented Rochade when I was responsible for IT-Architectures and I know Rochade solved many problems with Legacy Systems.

Rochade used something called a Scanner or Parser. A Parser is a “piece of software” that translates a “piece of software” into another “piece of software”. It stored the data of the software (the meta-data) in a “general” format that could be translated to other formats.
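
This is not how Rochade itself works, but a toy scanner in the same spirit may make the idea concrete: read a fragment of source text, extract some meta-data, and store it in one general format that other tools could translate further. The COBOL fragment and the fact format are invented.

    import re

    cobol_fragment = """
    PERFORM CALC-TAX.
    MOVE TOTAL TO REPORT-LINE.
    """

    def scan(source):
        """Return a crude, language-neutral list of (verb, operand) facts."""
        facts = []
        for line in source.splitlines():
            match = re.match(r"\s*(PERFORM|MOVE)\s+([A-Z\-]+)", line)
            if match:
                facts.append({"verb": match.group(1), "operand": match.group(2)})
        return facts

    print(scan(cobol_fragment))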

When you program in a Software Language the code is translated to another Language. This happens many times until the software reaches the Lowest Level, The Processor or the CPU.

The CPU uses a Cycle to process a very simple language that consists of binary numbers. These numbers are either data or operations. The simplest operations are operations on a Set.

The whole concept of the CPU was invented by John von Neumann and is therefore named the Von Neumann Architecture.

The architecture of von Neumann has a big disadvantage called the Von Neumann bottleneck. The CPU is continuously forced to wait.

The Von Neumann Computer is Wasting Time and Energy.

An alternative is the Parallel Architecture. Parallel computing has recently become the dominant paradigm in computer architectures. The main reason is the Rise of the Internet.

The Rise of the Internet started the Fall of Centralized IT-Architectures and Centralized IT-Strategy.

At this moment we need another approach to Manage or Control the software-production of a big company.

This approach can be found in the Open Source Movement.

If we use the Matrix Approach we can answer interesting questions.

First I introduce a Rule.

When we increase the amount of Dimensions we are able to make every point and connection between a point Unique. If we do the opposite we are able to make every point and connection The Same.

When people talk about the Reuse of a Component we are looking for a Dimension where some points and their connections are the Same.

I hope you see that it is possible to Reuse Everything and to Reuse Nothing. The Choice is Yours.

This is also the Practice in Software. When I discovered that the Year-2000 problem could lead to a disaster I started a research project with the CWI. The CWI developed a very intelligent parser that could create Software-Maps.

When we studied the maps we saw that some pieces of software came back all the time. These were “citations”. One programmer invented a new construct and others reused the construct all the time. The major difference with the Theory of Reuse was that MANY Parts were the Same.

When you dig deep enough you always find “The Same”.

The CPU is handling binary codes, and if you came from another planet you would not understand why all these zeros and ones are creating a DIVERSITY. They create a DIVERSITY because Something is Interpreting the Sequence. This Something is also a Program. This program uses a Theory about Languages. Most of the time it supposes a Context Free Language. A Context Free Language is a language where the interpreter always moves in one direction. It processes a List.
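
A toy interpreter, only to illustrate the claim: the zeros and ones get their meaning from the program that scans them, moving in one direction over the list. The opcode table is invented.

    program = ["01", "10", "01", "11"]        # the list of binary codes
    meaning = {"01": "add one", "10": "double", "11": "print"}

    value = 0
    for code in program:                      # one direction, no going back
        if meaning[code] == "add one":
            value += 1
        elif meaning[code] == "double":
            value *= 2
        elif meaning[code] == "print":
            print(value)                      # prints 3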

The Diversity a Computer Produces is based on one long List of Binary patterns. If we could analyze all the possible patterns we could find all the possible software-programs that could be built until Eternity. Because the binary codes can be mapped to the Natural Numbers, we need only one dimension (a Line) to classify all the possible software-components in the World.
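
The mapping itself is elementary and can be shown in a few lines: any finite binary pattern can be read as a natural number, and the number can be turned back into the pattern, so one axis is indeed enough to index it.

    pattern = "1011001"                        # a stretch of the long binary list
    number = int(pattern, 2)                   # its place on the natural-number line
    print(number)                              # 89
    back = format(number, "0%db" % len(pattern))
    print(back == pattern)                     # True: the mapping loses nothing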

In 1931 Gödel stated the so-called incompleteness theorems. He used the Natural Numbers to prove that a part of our Human Reality cannot be described by a Computer Program.

There is something “left for us” that the Machines cannot take over. This part is related to the Emotions and the Imagination. We cannot Automate them. If we do this we stop Innovation and Commitment.

Now I want to come back to IT-Architectures.

When You start an IT-Architecture Out of Nothing you start with a Small Amount of Dimensions. The world looks very simple to you. When you add detail you have to increase the amount of dimensions. The effect of this is that Everything Changes. New Possibilities arise. If you go on using the Top-Down Approach you will move into a State of Huge Complexity. Always start in the Middle!

At a certain moment You have to move to Reality. This means Programming Software. At that moment You encounter something you never thought of: the Software Legacy!

When you increase the Scope of your System and you leave the Boundaries of Your Company (the World of the Internet), the Complexity also increases. At that moment You encounter something you never thought of: Open Source. Millions of Possibilities arise and You don’t Know what to do!

Behind the Software are People. Some of them are creating Small Companies and they are doing things your company is also doing, but they do it much cheaper and faster.

What to do?

If You Can’t Beat them Join Them.

Why Good programmers have to be Good Listeners

Friday, June 29th, 2007

Edsger Wybe Dijkstra (1930-2002) was a Dutch Computer Scientist. He received the 1972 Turing Award for fundamental contributions in the area of programming languages.

One of the famous statements of Dijkstra is “Besides a mathematical inclination, an exceptionally good mastery of one’s native tongue is the most vital asset of a competent programmer“.

Why is this so important?

People communicate externally and internally (!) in their native tongue. If they use another language, many of the nuances of the communication are lost. When people of different languages communicate, they have to translate the communication to their internal language.

A computer language is also a language. It is a language where every nuance is gone. With the term nuance (I am a Dutch native speaker) I mean something that also could be translated into the word meaning. A computer language is formal and human communication is informal. We communicate much more than we are aware of when we speak.

So Programming is a Transformation of the Human Domain of Meaning to the Machine-Domain of Structure.

A programmer with a mathematical inclination (being analytical) AND an exceptionally good mastery of his native language is the only one who can build a bridge between the two worlds.

When he (or she; women are better at this!) is doing this he knows he is throwing away a lot of value, but that is the consequence of IT. Machines are not humans (People that are Mad act like Machines).

Machines are very good in repetition. Humans don’t like repetition so Machines and Humans are able to create a very useful complementary relationship.

The person that understood this very well was Sjir Nijssen. Together with many others he developed something called NIAM. NIAM has generated many dialects called ORM, FORM, RIDDLE, FCO-IM and DEMO. The basic idea of all these methods is to analyze human communication in terms of the sentences we speak. They take out of a sentence the verbs and the nouns (and of course the numbers) and create a semantic model of the so-called Universe of Discourse.
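
A very rough sketch of the fact-oriented idea behind these methods: split an elementary sentence into nouns and a verb and keep the result as a fact. The tiny lexicon below is an assumption and the parsing is deliberately naive; real NIAM/ORM analysis is done with far more care.

    sentence = "The employee sells a product"

    nouns_lexicon = {"employee", "product"}    # assumed object types
    verbs_lexicon = {"sells"}                  # assumed fact type (role)

    words = sentence.lower().split()
    nouns = [w for w in words if w in nouns_lexicon]
    verb = next(w for w in words if w in verbs_lexicon)

    # An elementary fact of the Universe of Discourse, ready for a database.
    fact = {"subject": nouns[0], "role": verb, "object": nouns[1]}
    print(fact)   # {'subject': 'employee', 'role': 'sells', 'object': 'product'}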

What Nijssen understood was that a computer is able to register FACTS (reality we don’t argue about anymore) and that facts are stored in a database. If we all agree about the facts we can use the facts to start reasoning. Want to know more about reasoning? Have a look at this website.

To create a program that supports the user, a good programmer has to be a good listener and a highly skilled observer. Users are mostly not aware of their Universe of Discourse. They are immersed in their environment (their CONTEXT). Many techniques have been developed to help the observer recreate the context without killing the context (Bakhtin). Have a look at User-Centered Design to find out more about this subject.

Want to read more about Dijkstra? Read The Lost Construct.

The Lost Construct in IT: The Self-Referencing Loop

Thursday, June 28th, 2007

Edsger Wybe Dijkstra (1930-2002) was a Dutch computer scientist. He received the 1972 Turing Award for fundamental contributions in the area of programming languages.

He was known for his low opinion of the GOTO statement in computer programming, culminating in the 1968 article “A Case against the GO TO Statement” (EWD215), regarded as a major step towards the widespread deprecation of the GOTO statement and its effective replacement by structured control constructs such as the DO-WHILE loop. This methodology was also called Structured Programming.
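
Python has no GOTO, so the unstructured style below is simulated with an explicit label variable; it is only meant to show the kind of control flow Dijkstra argued against, next to the structured loop that replaces it.

    def count_unstructured(n):
        i, label = 0, "loop"
        while True:                  # simulated jump target
            if label == "loop":
                if i >= n:
                    label = "done"   # GOTO done
                    continue
                print(i)
                i += 1
                continue             # GOTO loop
            return                   # done: fall out of the routine

    def count_structured(n):
        i = 0
        while i < n:                 # the structured replacement
            print(i)
            i += 1

    count_structured(3)              # prints 0 1 2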


Why Mobile Communication is Generating Stress

Tuesday, June 19th, 2007

Today I searched the Internet and by coincidence (does it exist?) I found a small document about the dangers of Wifi. It is dated the 28th of April 2007. The most interesting part of this document is a poll: 40% of the voters are not interested in the answer to the question whether it is dangerous or not.

They don’t mind because they are addicted to mobile communication. People that are addicted to smoking would give the same answer: I don’t mind if I get cancer, I take the risk. Mobile phones are also creating cancer (and other things; I come back to that later), but now I am getting very confused.

We are forbidding smoking because of this and we are not forbidding mobile communication.

What is happening?

The answer is simple. Cigarettes are not produced by government-owned companies and mobile communication is. Government has invested billions in infrastructure and many companies are totally dependent on mobile communication. Millions of software programs use it. If we would stop mobile communication the country would come to a standstill.

Ok. So it is a strategic issue. We have to take the risk.

Yes but now I am getting more confused.

Cars are also a strategic issue but here we are advising and training people. So we could train users what to do to prevent the greatest risk. We could tell them not to put the telephone in their pocket or not to keep it against their ear, or we could forbid mobile telephones for children, or we could decide not to put the transmitters on houses where older people are living, or …..

What is happening?

There are a few possibilities.

1. The risk is very low. Research is not telling this.

2. The risk is very high. This could be the case. The risk is so high that they want to prevent that everybody gets into a panic. I don’t think so. The risk is high in the long term and not in the short term. To state it simply: mobile networks create a high level of stress.

3. They don’t understand the risk. I am convinced this is the case. Scientists believe the human body is acting like a machine that is made out of parts. They don’t know what to do with a field. Parts are related to causal and short term thinking. Fields are related to wholeness and we cannot find one cause that explains it all. Stress is a field, a state, an effect. There are many causes and we cannot handle this with our Western Brain.

Now it is time to explain something.

I use material from a document called “The Holographic Universe”, by Richard Alan Miller, Burt Webb and Darden Dickson, Experimental College, University of Washington.

A few citations:

The formation of a certain type of chemical bond known as the resonance bond (which is most easily seen in the case of the Benzene molecule) leads to a peculiar situation in which certain electrons are freed from a local or particular location in the molecule. These are then free to travel around the entire molecule.

The essential fluidity of life may correspond with the fluidity of the electronic cloud in conjugated molecules. Such systems may best be considered as both the cradle and the main backbone of life.

The biological activity or specificity of action of various molecules is intimately related to their structure or their exact three-dimensional spatial configuration.

A constant magnetic field can, in principle, affect the various processes in biological objects.

Such electromagnetic fields normally serve as conveyors of information, from the environment to the organism, within the organism, and among organisms.

Electromagnetic forces can be used to change three fundamental life processes in mammals. These processes are (1) the stimulation of bone growth (2) the stimulation of partial multi-tissue regenerative growth and (3) the influence on the basic level of nerve activity and function. All these affects appear to be mediated through perturbations in naturally pre-existing bioelectronic systems. The organism’s bioelectronic system also seems to be related to levels of consciousness and to biological cycles.

Research carried out with organisms in fields lower or higher than the normal magnetic field strength of the earth inevitably results in deterioration and death of the organisms involved.

Consciousness may be seen as a frame of electrical charges in motion such as electrons bombarding a television screen; personality is a time series of these scintillating frames of consciousness. Personality becomes a reverberating input-output pattern of self creation seeking information or patterns of energy from the environment as well as from its own memories. The personality never recreates itself but creates only a close approximation which is accepted due to the principle of constancy as being the same.

Human beings are better seen as on-going, dynamic, shifting, changing, field entities (or field patterns).

We feel that many of the problems of society that are current today can be traced to our ignorance of, or refusal to embrace, this larger holographic electrodynamic reality in which we live.

And last but not least:

Weather systems also have electrical and magnetic correlates. One can see a very positive contact or connection between electromagnetic phenomena associated with weather and the behaviour and health of organisms. A more advanced theory would connect weather changes and changes in the physical environment to behaviour and biological products attributable to organisms. More precisely stated, not only does weather in a variety of ways profoundly influence living creatures, but also it is possible that living creatures can influence weather.

What is happening?

We are creating a highly stressful situation and it looks like the collective stress is generating stressy weather.

But it is worse.

The Russian and the US (and Chinese?) military have been playing with the knowledge of the Holographic Universe for 35 years. They are playing with electromagnetic fields. The article was written in 1972, and Transcendental Warfare, the use of electromagnetic fields and Parapsychology on the battlefield, is about using this knowledge. The people who are doing this research know how it works, but they don’t want to reveal this because War is always related to Secrecy.

We are also in a process where the sun is moving closer to the galactic centre. This is creating an exponential growth of electromagnetic radiation. Not only the earth is warming, but also the moon and the other planets. On the website of NASA you can find all the facts.

The movement to the Centre was predicted by the Maya, the Essenes, the Sufis, the Hopi, the Navajo and every other Spiritual Leader long ago.

Citation: “Furthermore, this knowledge is not new. It is the main core of the message of the Spiritual Leaders throughout history. It is also discussed, in other terms, by many individuals who characteristically experience psycho-energetic phenomena (e.g., psychokinesis, clairvoyance, telepathy, precognition).”

What can we do?

The people that experience “psycho energetic phenomena” know that the Light of Love can save us and The Light will help us if we allow it to help us.

The Spiritual Leaders framed this in a very simple message: Know Yourself and Give Love to the Other.

The message for the Part-thinkers is: Meditate, Relax, Take your Time, Enjoy your Lover, Your Children, Life and Nature.

Turn off your mobile phone.

Use Email.

Create enough time for yourself and others.

Stop having long useless meetings.

Don’t make too many appointments.

Plan two days of free time.

Plan and act according to your plan.

Take a risk and be spontaneous.

Do what you promise to do.

Evaluate if all the things you want to accomplish are really needed.

Don’t strive for perfection (the 80/20 rule).

Are we already in a very high state of luxury? Do we need more?

Is it already available somewhere (Copy and Think)?

Can we learn from others?

Can we help others? There are many lonely old people that would love to have a talk or want to spend some time in Nature. Imagine you are old: what would you want to happen to you? It is a very simple exercise and it is the same message as the Spiritual Leaders are giving, only framed in another language.

You can also use the Golden Rule of Ethics of Immanuel Kant (a well-known philosopher): “treat others as you would like to be treated.”

Or to put it in the terms of the Field: Keep in Tune with your Environment.

Do you want to know more about this subject? Read How to Prevent a World Wide Disaster by Creating a Collective Infrastructure.

Do you want to know more about Transcendental Warfare? Read Be Honest to Yourselve and Others.