Posts Tagged ‘meta model’

About Meta Models

Tuesday, January 13th, 2009

A model of a model is called a meta-model. Meta-models are made to increase the efficiency of the software-development process. You can go on creating models of models (meta-meta-models).

If there is nothing to stop you from modeling the model, there is something wrong with the modeling approach.

When you are modeling you are compressing data. When you compress data, the amount of data is reduced. A compression of a compression amounts to even less data, so there must be an end to the process of meta-modeling.

The counterpart of Compression is Expansion. When you have used a valid meta-modeling approach, the meta-model has to expand back to the original model without losing data.
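This lossless round-trip requirement can be sketched with an ordinary lossless compressor; here `zlib` stands in for the meta-modeling step (an illustrative analogy, not a modeling tool):

```python
import zlib

# The "model": repetitive data, as models tend to be.
model = b"order -> orderline -> product " * 100

# Compression: the "meta-model" is smaller than the model...
meta_model = zlib.compress(model)
assert len(meta_model) < len(model)

# ...and expansion must reproduce the model exactly, bit for bit.
assert zlib.decompress(meta_model) == model
```

A lossy compressor would fail the second assertion, which is exactly what a meta-model that cannot reproduce its model looks like.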

When you want to make a meta-model you can model a static model (a database) or a dynamic model (a process). Most meta-models of dynamic models are static models. They are stored in a database (a dictionary, a repository). The process of meta-modeling has the tendency to freeze the dynamics of a dynamic model.

A database is a combination of a storage-system and a software-program that stores and retrieves the data in the right place. The storage-system is itself a software-program connected to a rotating device, a disk-drive, which is in turn managed by a software-program.

In reality everything moves. Meta-Modeling splits a Dynamic System into a Database (a Datamodel) and Software (a Process) that Applies the Meta-Model. The Software is used to Expand the Compression of the Dynamic Model.

If you want to evaluate the efficiency of a meta-modeling approach, you have to look at the efficiency of the compression and the expansion of both the data and the process.

When everything is a process, a method is a dynamic meta-model of a process. The meta-model of a method, a meta-meta-process, is a summary of the method. If you are not able to use the summary something is wrong with the method. If the summary is sufficient the real method is too complex.

The only way to create an efficient and reliable meta-model of a process is to find the self-similarity of the process. A self-reference, a fractal, of a process always contains less data and it is possible to enfold the fractal to a lower level without losing data.

If you use the fractal approach, meta-meta-meta-…-modeling is not needed, because a fractal meta-model contains two parts: the fractal and the program to expand or compress the fractal to a lower or higher level.
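A minimal sketch of that two-part structure, with a made-up rewrite rule standing in for the fractal's self-similarity (the rule and names are invented for illustration):

```python
# A fractal "meta-model": a tiny seed plus one rewrite rule.
# The rule below is hypothetical, chosen only to show the idea.
RULE = {"A": "A-B", "B": "A"}
SEED = "A"

def expand(text: str, levels: int) -> str:
    """Apply the rewrite rule `levels` times (expansion);
    running the rule backwards would be the compression."""
    for _ in range(levels):
        text = "".join(RULE.get(ch, ch) for ch in text)
    return text

print(expand(SEED, 4))  # each level contains the previous one as a prefix
```

The meta-model (rule plus seed) stays the same size no matter how far you expand, which is why no meta-meta-model is needed.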

Fractal compression and expansion have been very successful in the area of image processing and textual summaries.

Is it possible to find the meta-model fractal?

The big problem is language. We express our reality in language and our current language is heavily distorted. It contains many interrelated overlapping layers.

If we use the language that is produced to describe processes we will perhaps be able to summarize, find the essence of the text, but we are never sure the text contains the real processes.

The solution to this problem is to observe processes and make a picture. If we Observe processes, look with the Eyes, we are producing Images. Fractal compression is able to compress these images.

How about the Ancient Scientists?

The ancient scientists were aware of the divine fractal. In my blog “About the Whole and the Parts” I use the Ternary Numbers or the Trinity to define a Meta-Model. The model starts with the Dynamic Whole and is expanded until it has replicated itself.

Interestingly enough, the theory behind fractal compression uses iterated functions based on the so-called Sierpinski Gasket to detect fractals. The Sierpinski Gasket is an expansion of Triangles in Triangles. It is used to simulate DNA, a Biological Meta-Model, and other "natural" structures.
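The Sierpinski Gasket can be generated with a very small iterated function system, the so-called chaos game: starting anywhere, repeatedly jump halfway toward a randomly chosen corner of a triangle. A minimal sketch:

```python
import random

# Iterated Function System for the Sierpinski Gasket:
# three maps, each halving the distance to one corner.
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def sierpinski_points(n: int, seed: int = 0):
    """Return n points that fall onto the Sierpinski Gasket."""
    random.seed(seed)
    x, y = 0.5, 0.5
    points = []
    for _ in range(n):
        cx, cy = random.choice(CORNERS)
        x, y = (x + cx) / 2, (y + cy) / 2
        points.append((x, y))
    return points

pts = sierpinski_points(10_000)
```

Plotting `pts` shows triangles within triangles: a full image produced by three trivial rules, which is the essence of fractal compression.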

What happened?

The first expansion: Ø → 0. The Void, the Infinite Potential, transformed into the Nothing.

The second expansion: 0 → (-1,0,1). The Nothing expanded into a negative and a positive part. The sum of the expansion is still zero. This is the principle of voiding. Every part that is created needs a counter-part that is its opposite (Part ∩ Counterpart = Ø). Every Expansion is compensated with the same Compression. When we divide a Whole we always have to make a "clear" cut (no overlap).

The third expansion: (-1,0,1) → (-2,-1,0),(0),(0,1,2). This is the creation of the Four Forces with the Void, the Zero (now the Fifth, Quintessence), in the Middle. Two of the Four Forces are the Same Forces that were formed in the Second Expansion. They could be called Expanding Expansion (Desire) and Compressing Compression (Control). The other two combinations are Expanding Compression and Compressing Expansion. Most of the time they are called Spirit (Creating) and Soul (Connecting). In the third expansion the Divine Fractal has expanded into Itself. That is why we, the Humans, are created in the Image of God, the Creator.
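The voiding principle, that every expansion still sums to the Nothing, can be checked mechanically against the number series above (a trivial sketch):

```python
# Each expansion splits the Whole into parts whose sum is still zero,
# so every part keeps a counter-part and the whole series "voids".
expansions = [
    [0],                       # first expansion: the Nothing
    [-1, 0, 1],                # second expansion
    [-2, -1, 0, 0, 0, 1, 2],   # third expansion: (-2,-1,0),(0),(0,1,2)
]
for level in expansions:
    assert sum(level) == 0     # every part has its counter-part
```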

Is it possible to transform the “divine” metamodel into a “normal” metamodel?

The whole problem comes down to language again. Are we able to translate the numbers into the Right Words? Let’s have a try.

0 or 5 are mostly called consciousness or the observer. In the terminology of IT 0 or 5 could be called a Monitor. The Monitor has to take care that every part has a counter-part.

1 and -1 are called Control and Desire. In terms of IT they could be called Rules and Sensors. The Sensors and the Rules are opposites. The Facts of the Sensors always fight the Structures (Methods, Systems, Programs) of the Rule-System. The conflict between Facts and Rules (Testing the Model) is the basic conflict behind every Scientific Approach.

-2 and +2 are called the Creator, the Imagination (Ideas) and the Emotions (Social Relationships). They represent the Possibility (in terms of new combinations of the existing Parts) to Enfold the model to a New level and the Role of the Human, the Actor, in the Game (defined by the Rules) that the Controller is playing with the Sensors. The Creator and the Emotions are also opposites. The Creator Splits and the Emotions Merge.

What is the fractal?

The Fractal is a Spiraling Spiral that moves Three Cycles Up and Three Cycles Down and rests in the Middle (the Seventh Day).

In the Cycle the Controller, the Sensors, Spirit and Soul are connected in Twelve possible ways.

Sometimes Spirit & Soul and Control & Desire void each other. In this case the Spiral moves back to the Void.

When Soul, Spirit and Desire (The Mother), the Female Trinity, are connected the Spiral Expands.

When Soul, Spirit and Control (The Father), the Male Trinity, are connected the Spiral Compresses.

Spirit moves the Spiral Up and starts a new Level of Awareness.

Soul moves the Spiral Down to an existing Level of Communion.

The Divine Rule, the Golden Mean, the principle of Harmony, controls the Trinities of the Golden Spiral, to make sure that the spiraling spiral always voids itself in the end and returns to the Beginning.

LINKS

A website about Fractal Compression of Images

About the Divine Rule

About Model Driven Software Development

Saturday, August 23rd, 2008

In the beginning of Software Development Programmers just Programmed. They did not use any method. The program was punched on a stack of cards and the computer executed the code. It took many days to get a small program running.

In the early 1980s text editors were introduced. In this stage somebody else called an Analyst wrote down Specifications and the Programmer transformed the specifications into a Program.

The Programmers and Analysts had to fill in forms with a pencil. The forms were typed by a central department and returned to them for correction. Much later programmers and analysts were able to use their own text editor.

The Specifications and the Programs were represented by many competing diagramming techniques like DFD (Data Flow Diagrams), JSP (Jackson), ERD (Entity Relationship Diagrams, Bachman), NIAM, Yourdon, Nassi-Shneiderman and ISAC (Langefors). The Programmers and Analysts used Pencils and Plastic Frames to draw the Diagrams.

The data about the programs and the databases were stored in a Dictionary. A Dictionary is a System to store and retrieve Relationships. The Dictionary Software generated Copybooks that were included into the (COBOL) Programs. One of the most important Dictionary Packages was called Datamanager.

Datamanager used a so called Inverted File Database Management System. The Inverted File or Inverted Index is optimized to store and find Relationships.
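Datamanager's internal structures are not documented here, but the idea of an inverted index for relationships can be sketched as follows; the class name and the record names are invented for illustration:

```python
from collections import defaultdict

class InvertedDictionary:
    """A toy data dictionary: for each term, the set of artifacts
    (programs, copybooks, tables) in which it is used."""

    def __init__(self):
        self.index = defaultdict(set)

    def relate(self, term: str, artifact: str):
        self.index[term].add(artifact)

    def where_used(self, term: str):
        """The classic dictionary query: where is this data item used?"""
        return sorted(self.index[term])

d = InvertedDictionary()
d.relate("CUSTOMER-ID", "PROG-001")
d.relate("CUSTOMER-ID", "COPYBOOK-CUST")
d.relate("ORDER-ID", "PROG-001")
print(d.where_used("CUSTOMER-ID"))  # ['COPYBOOK-CUST', 'PROG-001']
```

Because the index is keyed by term rather than by record, a where-used query is a single lookup, which is why inverted files suit relationship-heavy dictionaries.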

At that time there were many types of Database Management Systems (Hierarchical, Network, Relational and Object). They were optimized for a special type of storing and retrieving data.

Between 1980 and 1990 the competing Methods and Diagram Techniques were fused and expanded to cover many domains of IT. The Dictionary (Datamanager) was also expanded to contain many more Relationships.

Around 1990 the process of integration was finally accomplished. At that time Information Engineering (IE) of James Martin was the most comprehensive Methodology available on the Market.

Texas Instruments implemented IE on a mainframe computer and called it IEF. IE was also implemented in IEW (Knowledgeware) and Excellerator (Index Technologies). Computer Assisted Software Engineering (CASE) was born.

You have to understand that Graphic User Interfaces and PCs were at that time in their infancy. It was impossible to manipulate diagrams on screen. We used mainframes and Dumb User Interfaces (Forms) to define the models, but we got a long way with it.

The big innovation came when IBM announced AD/Cycle in 1990. They created an Alliance with Bachman Information Systems, Index Technology Corporation, and Knowledgeware to create the most advanced Model Driven Software Development Tool ever made.

The kernel of AD/Cycle would be a complete new Repository based on the Relational DBMS of IBM called DB2.

At that time ABN AMRO was in a merger and we had the idea that an alliance with IBM would help us to create a new innovative development environment. I was involved in everything IBM was doing in its labs to create AD/Cycle.

The project failed for one simple reason. The Repository of IBM was never finished. The main reason was the Complexity of the Meta-Model of the Repository. A Relational DBMS is simply not the way to implement a Datadictionary (now called a Repository).

Another reason the project failed was the rise of Object Oriented Programming and of course the huge interference of Microsoft.

To save the project we had to find another repository and used the original Repository of Knowledgeware called Rochade. Rochade is still on the market. It is still a very powerful tool.

The introduction of the PC and the Activities of Microsoft generated a disaster in the development process of software. We had to move to square one and start all over again.

The Destructive Activities of Microsoft began by selling isolated disconnected PC’s to Consumers (Employees are also Consumers!).

At that time we did not realize this would cause a major drawback. We even supported them by giving all the employees of the Bank a PC, to Play With.

What we did not know was that the Employees started to Develop software on their own to beat the backlog of the central development organization. Suddenly many illegal (Basic) programs and databases appeared and we had to find a way to avoid Total Chaos.

The Solution to this problem was to introduce End User Programming Tools (4GL’s) like AS and Focus.

To provide the End Users with Corporate Data we had to develop Datawarehouses.

We were forced to create different IT Environments to shield the Primary, Accountable, Data of the Bank.

We had to develop a New Theory and Approach to support a completely new field of IT now called Business Intelligence.

We had to find a way to solve the battlefield of IBM (OS/2) and Microsoft (Windows) on the level of the PC Operating System.

We had to find a way to connect the PC to the other Computer Systems now called Servers. The concept of Client/Server was developed.

We had to find a way to distribute the Right Data on the Right Computer.

What Happened?

We were Distracted for about TWENTY YEARS, and all we were doing was Reacting to Technological Innovations that were Immature. We did not know this at the time.

The Big Innovation did not happen on the Level of the Method but on the Level of the Infrastructure. The Infrastructure moved from the Expert Level to the Corporate Level to the Consumer Level and finally to World Level. At this moment the MainFrame is back but the Mainframe is distributed over many Computers connected by a Broadband Network. We are finally back at the Beginning. The Infrastructure shows itself as a Cloud.

In every phase of the Expansion of the Infrastructure new Programming Languages were constructed to support the transformation from One level to the Other level. Every Time the Model had to be Mapped to another Target System.

The IBM Repository failed because the Meta Model of the Repository was much too complex. The Complexity of the Model was not caused by the Logical Part (the Technology-Independent Layer) but by the Technical Part of the Model. It was simply impossible to Map the What onto the How.

The only way to solve this problem is to make the What and How the Same.

This is what happened with Object Oriented Programming (OO). An object-oriented program may be seen as a collection of Cooperating Objects. Each object is capable of receiving messages, processing data, and sending messages to other objects. Each object can be viewed as an independent little machine with a distinct role or responsibility.
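That description of cooperating objects can be sketched in a few lines; the `Account` and `Logger` classes below are invented for illustration:

```python
# Two cooperating objects: each receives a message, processes data,
# and may send messages on to other objects.
class Logger:
    def receive(self, message: str):
        print(f"log: {message}")

class Account:
    def __init__(self, logger: Logger):
        self.balance = 0
        self.logger = logger          # a collaborator, not a subroutine

    def deposit(self, amount: int):
        self.balance += amount        # process the data...
        # ...then send a message to another object.
        self.logger.receive(f"deposited {amount}, balance {self.balance}")

account = Account(Logger())
account.deposit(100)
```

Each object owns its own data and role; the Design (What) and the Program (How) are expressed in the same units.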

The rise of OO started in the early 1990s. At this moment it is the major programming paradigm. OO fits very well with the major paradigm about our Reality. That is why it can be used to Design (What) and to Program (How). OO comes with its own modeling language called UML.

What is Wrong with OO?

The first and most important problem is the problem of the Different Perspectives. Although OO fits with the Western Model of Reality, We (the Humans) perceive Reality in our own Way. Every Designer experiences another Reality and it is almost impossible to Unite all the Perspectives.

To Solve this Problem we All have to Agree on a Shared Model of Reality. This is mainly accomplished by defining Standard Models of Reality. The problem with Standard Models of Reality is that they Enforce a Certain Point of View.

Enforcing one Point of View on many People generates Opposition, and Opposition starts a Process of Adaptation. The Major Effect is a very Complex Implementation of an Inconsistent Model of Reality. The What and the How are not the Same anymore.

OO is creating the Problem it wants to Solve.

What to Do?

The long process of integration of the Methods until the 1990s showed that there is one major issue that has to be resolved when you want to create software.

This Issue is called Terminology. Its main task is to Define What We are Talking About. If we don't agree about what we are talking about (the Universe of Discourse), we will always be talking about what we are talking about. We will create Circular Dialogues.

Eugen Wüster was the creator of the Science of Terminology. His activities were taken over by UNESCO, which founded a special Institute to coordinate Terminology in the World, called Infoterm.

There are four distinct views on Terminology:

  • the Psychological View

Concepts are Human Observations. They have to be based on Facts.

  • the Linguistic View

Concepts are the meanings of general terms. They have to be Defined.

  • the Epistemological View

Concepts are Units of Knowledge. They have to be True.

  • the Ontological View

Concepts are abstractions of kinds, attributes or properties of general invariant patterns on the side of entities in the world. They have to be Related.

Sadly, elements of all four views are found mixed up together in almost all terminology-focused work in Informatics today.

We are Confusing even the Science to avoid Confusion.

LINKS

About the History of Terms

About CASE-Tools

About the History of Terminology