Posts Tagged ‘software’

About the Next Steps of the Future Center Smart Systems and the Leiden Center of Data Science

Sunday, May 4th, 2014

On 28-4-2014 the Future Center Smart Systems (FCSS) was launched. At the same moment the Leiden Center of Data Science (LCDS) was announced. This is not a coincidence.

FCSS and LCDS are two sides of the same coin. LCDS is focused on the Science of Data and FCSS is focused on using the results of the Science of Data to build Smart Systems, or on giving the Science of Data interesting problems to solve. FCSS is Practice and LCDS is Theory.

When we look at Wikipedia we see that the Data Sciences are defined as “The study of the generalizable extraction of knowledge from data”. Knowledge shows itself in this case as Models, Patterns and Laws (Rules).

The Data Sciences detect patterns based on facts and are an abstraction of the Scientific Process itself.  The Data Sciences are therefore also called E-Science. 

E-Science is a combination of:  Signal processing, Mathematics, Probability models, Machine learning, Statistical learning, Computer programming, Data engineering, Pattern recognition and learning, Visualization, Uncertainty modeling, Data warehousing, and High performance computing.

Because E-Science is an abstraction of Science it can be applied to every science you can think of. In the case of the LCDS the current sciences that are involved are Physics, Astronomy, Bio-Science (Leiden Bioscience Park), Life-Sciences (Dutch Techcenter for Life Sciences, DTL), Medical Sciences (LUMC), Law (Leiden Law School), Aviation (NLR), Mathematics and Informatics (LIACS).

The FCSS is not only about E-Science it is also about using the patterns that are discovered to control processes and influence humans. In this case we are interested in the developments in Value, Case & Process-managers,  Domain Models and Sensor-Technology. Theory is static and practice is dynamic. Theory becomes practice by doing.

In the long run we expect that the combination of Senses, Actions & Thinking Machines will lead to Autonomous Systems that help and support Humans. These systems are sometimes called Robots, because we believe Autonomous Systems look like us, but an autonomous system, like our Universe, does not have to move itself.

The next step of the Future Center is to connect with practice, start with technology transfer and find out what the market needs.

The Future Center is already involved in some projects and we want to start some more. If you are interested in participating in one of these projects or want to start a new project, send an email to hans.konstapel@gmail.com.

  • Smart System Architectures
Smart Systems need Smart System Architectures. They are a combination of existing architectures based on Process-Models & Sensor-Systems (the Sensory-Motor System) and new architectures related to Visualization (the Imagination), Analytics (Data-, Process-, Text- and Software-Mining: Thinking) and Social Networks (Emotions). The end state is called the Global Brain (“The Singularity”).
  • Disclosure of Open Data (Repository, Data Warehouses/Data Cards, Visualization)
A lot of Open Data is entering the market. It is not clear where this data is situated and what the data means. In this case we need a Repository and tools to define what part of the data we need at this moment (a Data Warehouse, Data Cards). We also need tools that hide the complexity of the current open databases and make it possible to show what is available (Visualization).
  • Smart Innovation (Business-model generation out of Big- & Open Data) 

There are already many on-line tools available that support the innovation process. The data that is used in these tools (for instance market-data) has to be gathered to validate the models. We want to use the Data Sciences to gather appropriate market-data and detect interesting business-opportunities. To do that we want to use Big and especially Open Data.

  • Smart Value Chain Integration & Reversal (Product Configurators, Data/Proces/Software-Mining, Intelligent User-Interfaces)
The value-chains are integrating and the customer is moving into control. Currently a value chain consists of many companies that use many non-integrated and unconnected legacy-systems. These systems contain data and process-models. We want to use the Data Sciences to detect these models (Process-Mining, Data-Mining, Software-Mining) and map them to available domain-models (ACCORD, ARTS, …) or develop shared domain-models. The data that is used in the value-chain can be mapped to a product-configurator. Last but not least, the user-interaction with the customers has to be designed.
  • Smart Education (Just-in-Time Education)
Technology is changing fast. This has a huge impact on education. The current system of education is not able to react to the fast moving technology-waves because it was created to support the Industrial Revolution, which ended around 1950. We are now beyond the next wave that is taking over manual work and moving into the next step, in which the Human Brain is copied. The solution to all of this, Life-Long Education, was invented a long time ago, but we are not implementing it because the Current Education System is aimed at preventive education, trying to train people in the first stage of their life. The solution is Just-in-Time Education, training people at the moment they need knowledge (DeepQA) or experience (Simulators).
  • Smart Buildings & Building Process (with ABN AMRO Dialogs House)
The Building Industry is already using 3D-Models (BIM). A BIM-model is a low-level product-configurator, but the model can be moved to a higher level in which it is possible to share (and sell) complete Models (for instance a Hospital). The models can be used to simulate every attribute of a building and let the customers play with possible designs. The models can also be used to calculate risks and, most importantly, to optimize existing buildings (Facility Management). When we combine this with sensor-technology we are able to create adaptable buildings.
  • Smart Food Chain & Dietetics
The Food Value Chain will reverse and the customer will be in control. At this moment customers are educated with theories about food that change fast. “Good” Food is time-, context- and body-dependent. We want to make tools, using personal sensors, that show customers what they have to eat to stay healthy and tell them where and how they can buy this food.
  • Food & Health (with the International Alliance of Future Centers)
It is not clear what the impact is of Food on our Health. In this project we will analyze the complete food-chain.
  • Smart Urban Space (The Self Actualizing City)
This is an integration of the concepts of the Smart City, Integrated Value Chains, the Creative City and Smart Social Networks.
  • Next Generation Media

Media are at this moment based on the Sender/Receiver concept. We want to use the Data Sciences to detect patterns, transform these patterns into text and images and implement a feedback process with the customers.

  • Next Generation Theatre
We want to use the concept of  5D Film to create a new type of sensory and emotional experience.
  • Next Generation Energy Systems
Implementing Autarkic Energy Systems.
  • Smart Mobility
Using new concepts like the Self-Driving Car & Sensor-Networks  to optimize mobility in Big Cities.
  • Social Enterprises (Impact Finance, Social Bonds, Circular Systems, Cooperatives, Complementary Currencies)
The next generation of enterprises has to be aware of, and reduce, its impact on society and the environment. In this project we look at the financing of Social Enterprises (Impact Finance, Social Bonds), but also at the circular economy, new organizational structures (cooperatives) and new financial systems based on complementary currencies.
  • Conflict-Resolution/E-Mediation
Preventing and resolving conflicts by implementing Smart Mediation.
  • Future Jobs (Unemployment & Sense-making, with the International Alliance of Future Centers)
What are the consequences of Smart Systems on employment?
  • Smart Care Systems

We want to use Smart Systems to help people that need Economic, Physical, Social and/or Mental Support.

  • Virtual Future Centers (with the International Alliance of Future Centers) 
Future Centers are currently location-based. Future Centers look like Smart Systems because they Sense their environment, react (Process Manager) on the Events that are happening, try to predict what will happen (Analytics), involve Networks (Communities of Practice, Interest and/or Affinity) and Scientific Knowledge Centers (like the LCDS), create a shared Vision and put this vision into Action (Entrepreneurs, Innovators, Incubators). There are already many people busy developing parts of the Virtual Future Center. We are currently designing an Architecture and will create an Alliance to join the efforts of all participants.
  • Next Generation Secure Data Centers
Smart Systems need Smart Data Centers.

LINKS

The Presentation of Prof. dr. H. J. van den Herik about Big Data and the Leiden Center of Data Science.

 

About Morphology or How Alan Turing Made the Dream of Goethe Come True

Tuesday, November 17th, 2009

The Ancient Greeks believed that the images of waking life and dreams came from the same source, Morpheus (Μορφέας, Μορφεύς), “He who Shapes“.

The Science of the Shapes, Morphology, was created and named by Goethe in his botanical writings (“Zur Morphologie“, 1817).

Goethe used comparative anatomical methods to discover a primal plant form that would contain all the others, the Urpflanze. Goethe, being a Romantic Idealist, hoped that Morphology would Unify Science and Art.

The Urpflanze shows itself also in the Lungs and River Systems

“The Primal Plant is going to be the strangest creature in the world, which Nature herself shall envy me. With this model and the key to it, it will be possible to go on forever inventing plants and know that their existence is logical. Nature always plays, and from her play she produces her great variety. Had I the time in this brief span of life, I am confident I could extend it to all the realms of Nature, the whole realm.”

Goethe (Wikipedia)

A hundred years later, in the 1920s, Goethe’s dream came true. Morphology moved outside Biology to other parts of Science through the works of D’Arcy Thompson (On Growth and Form), Oswald Spengler (the morphology of History), Carl O. Sauer (The Morphology of Landscape), Vladimir Propp (Morphology of the Folktale) and Alfred North Whitehead (Process and Reality).

Goethe observed nature and reflected on similar structures. He believed that there was something behind this similarity, an archetypal plant.

According to Goethe the archetypal plant was the leaf (“While walking in the Public Gardens of Palermo it came to me in a flash that in the organ of the plant which we are accustomed to call the leaf lies the true Proteus who can hide or reveal himself in all vegetal forms. From first to last the plant is nothing but leaf“).

At this moment scientists know the reason why the leaf is the most important structure of the plant. It is a solar collector full of photosynthetic cells.

The energy of the Sun makes it possible to transform water from the roots and carbon dioxide from the air, both gathered by the plant, into sugar and oxygen. A plant is a structure with many leaves, and these leaves shield other leaves from sunlight and water.

To solve this problem a plant has to optimize its structure to collect enough Sunlight and Water. This process of Optimization is not a Centrally Coordinated action. Every leaf tries to find the best place in the Sun on its own. This place determines the growth of the next level of branches and leaves.

Goethe observed a pattern and deduced a structure, the leaf, the Urpflanze. What Goethe really observed was not a Static Urpflanze but the Dynamic Process of the Branching of all kinds of leaves in all kinds of plants (Morpho-Genesis).

The leaves of the plant are not the main target of the morphogenesis of the plant. The visible External and the invisible Internal Forms or Organs are one of the many solutions of an equation with many variables and constraints. The optimal solution is reached by experimenting (“Nature always plays”).

Many solutions fail but some survive (Survival of the Fittest). When a solution survives it is used as a Foundation to find new rules for more specific problems (Specialization). When the environment, the context, changes, old rules have to be replaced by new rules (a Paradigm Shift).

The Fractal Geometry of Nature

New mathematical paradigms in the field of Machines and Languages (Alan Turing, The Chemical Basis of Morphogenesis) and the Self-Referential Geometry of Nature (Benoît Mandelbrot, The Fractal Geometry of Nature) have stimulated further investigation in the Field of Morphology.
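
The self-referential rule behind Mandelbrot’s fractal geometry is small enough to show directly. The sketch below is an illustration added here (it is not part of Mandelbrot’s or Turing’s texts): it iterates z → z·z + c and prints a rough ASCII picture of the Mandelbrot set.

```python
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterations before |z| exceeds 2; reaching max_iter means 'probably inside the set'."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c  # the rule feeds its own output back into itself
        if abs(z) > 2.0:
            return n
    return max_iter

# Crude ASCII rendering of the set between -2..1 (real axis) and -1.25..1.25 (imaginary axis).
for row in range(24):
    y = 1.25 - row * (2.5 / 23)
    line = ""
    for col in range(64):
        x = -2.0 + col * (3.0 / 63)
        line += "#" if escape_time(complex(x, y)) == 100 else " "
    print(line)
```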

In 1931, in a monograph entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, Gödel proved that it is impossible to define a theory that is both Self-Consistent and Complete. Gödel’s paper destroyed the ambitions of the Mathematicians of that time to define one theory that explains everything.

In 1936 Alan Turing produced a paper entitled On Computable Numbers. In this paper Alan Turing defined a Universal Machine now called a Turing Machine. A Turing machine contains an infinite tape that can move backwards and forwards and a reading/writing device that changes the tape. The Turing Machine represents every Theory we can Imagine.

Turing proved that the kinds of questions the machine can not solve are about its own Performance. The machine is Unable to Reflect about Itself. It needs another independent machine, an Observer or Monitor to do this.

In this way Turing proved, in a very simple manner, results equivalent to Gödel’s Incompleteness Theorem and to the Undecidability of the Entscheidungsproblem.

The ENIAC

In 1943 Turing helped to Crack the Codes of the Germans in the Second World War. At that time the first computers were built (ENIAC, Colossus).

It was very difficult to Program a Computer. This problem was solved when Noam Chomsky defined the Theory of Formal Grammars in 1955 (The Logical Structure of Linguistic Theory).

When you want to define a Language you need two things, an Alphabet of symbols and Rules. The symbols are the End-Nodes (Terminals) of the Network of Possibilities that is produced when the Rules (Non-Terminals) are Applied. The Alphabet and the (Production- or Rewriting) rules are called a Formal Grammar.

If the Alphabet contains an “a” and a “p”, the rules S→AAP, A→”a” and P→”p” produce the result “aap”. Of course this system can be replaced by the simple rule S→”aap”. The output becomes an infinite string when one of the rules contains a Self-Reference. The rules A→a and S→AS produce an infinite string of “a”s (“aaaaaaaaaaaaaaaaaa….”).

The system becomes more complicated when we put terminals and non-terminals together on the Left Side of a rule. The system S→aBSc, S→abc, Ba→aB and Bb→bb produces strings like “abc”, “aabbcc” and “aaabbbccc”. In fact it produces all the strings a^n b^n c^n with n>0.
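
To make the rewriting mechanics concrete, here is a small sketch (an added illustration, not part of the original text) that derives a^n b^n c^n using exactly the four rules above: first expand S, then let every B migrate to the right until it meets a b.

```python
# Illustrative sketch: derive a^n b^n c^n with the rules
# S -> aBSc, S -> abc, Ba -> aB, Bb -> bb from the grammar above.

def derive(n: int) -> str:
    assert n >= 1
    s = "S"
    for _ in range(n - 1):
        s = s.replace("S", "aBSc", 1)     # apply S -> aBSc
    s = s.replace("S", "abc", 1)          # apply S -> abc
    # Keep rewriting until only terminals (a, b, c) remain.
    while "B" in s:
        if "Ba" in s:
            s = s.replace("Ba", "aB", 1)  # move a B to the right over an 'a'
        else:
            s = s.replace("Bb", "bb", 1)  # a B that reached a 'b' becomes a 'b'
    return s

for n in range(1, 5):
    print(n, derive(n))   # abc, aabbcc, aaabbbccc, aaaabbbbcccc
```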

The inventor of the theory of Formal Grammar, Chomsky, defined a Hierarchy of Languages. The most complex languages in his hierarchy are called Context-Dependent and Unrestricted. They represent complex networks of nodes.

A language where the left-hand side of each production rule consists of only a single nonterminal symbol is called a Context Free language. Context Free Languages are used to define Computer Languages. Context Free Languages are defined by a hierarchical structure of nodes. Human Languages are dependent on the context of the words that are spoken.

It is therefore impossible to describe a Human Language, Organisms, Organisations and Life Itself with a Context Free Computer Language.

Context Free Systems with very simple rule-systems produce natural and mathematical structures. The System A → AB, B → A models the Growth of Algae and the Fibonacci Numbers.
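
The algae system is easy to run. The sketch below (an added illustration) applies the rules A → AB and B → A in parallel, as an L-system does, and the lengths of the successive strings are consecutive Fibonacci numbers.

```python
# Illustrative sketch: Lindenmayer's algae system A -> AB, B -> A.
RULES = {"A": "AB", "B": "A"}

def step(s: str) -> str:
    # Every symbol is rewritten simultaneously, which is what makes it an L-system.
    return "".join(RULES.get(ch, ch) for ch in s)

s = "A"
for generation in range(8):
    print(generation, len(s), s)
    s = step(s)
# The lengths 1, 2, 3, 5, 8, 13, 21, 34 are consecutive Fibonacci numbers.
```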

A Recognizer or Parser determines whether a string was produced by a given formal grammar. Parsers are used to check and translate a Program written in a Formal (Context Free) Language to the level of the Operating System of the Computer.

Regular and Context Free Grammars are easily recognized because the process of parsing is linear (causal, step by step). The structure of the language is a hierarchy.

The recognizer (now called a Push-Down Machine) needs a small memory to keep the books.
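
As an added illustration of that small bookkeeping memory: a Push-Down recognizer for the Context Free language a^n b^n only needs a stack of pending a’s.

```python
# Illustrative sketch: a push-down style recognizer for the context-free
# language a^n b^n (n >= 1). The only memory is a stack of pending 'a's.

def recognize(s: str) -> bool:
    stack = []
    i = 0
    # Phase 1: push every leading 'a'.
    while i < len(s) and s[i] == "a":
        stack.append("a")
        i += 1
    if not stack:
        return False
    # Phase 2: every 'b' must pop exactly one 'a'.
    while i < len(s) and s[i] == "b":
        if not stack:
            return False
        stack.pop()
        i += 1
    return i == len(s) and not stack

print(recognize("aabb"), recognize("aaabbb"), recognize("aab"), recognize("abab"))
# True True False False
```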

Context Dependent (L-systems) and Unrestricted Grammars are difficult or impossible to recognize in practice because the parser needs a huge, sometimes Infinite, Memory or an Infinite amount of Time to complete its task.

To find the Context the Recognizer has to jump backwards and forwards through the infinite string to detect the pattern.

If the network loops the recognizer will Never Stop (“The Halting Problem“).

Turing proved that the Halting Problem is Undecidable. We will Never Know for Sure if an Unrestricted Grammar contains Loops.

The Rules and the Output of Unrestricted Grammars Change and never stop Changing. Our Reality is certainly Context Dependent and perhaps Unrestricted.

Parsing or Recognizing is similar to the process of Scientific Discovery. A theory, the Grammar of a Context-Free System (“aaaaaaaaaaa…”), is recognizable (testable) in Finite Time with a Finite Memory. Theories that are Context Dependent or Unrestricted cannot be proved, although the Output of the Theory generates Our Observation of Nature. In this case we have to trust Practice and not Theory.

A 3D Cellular Automaton

In 2002 the Mathematician Stephen Wolfram wrote the book A New Kind of Science.

In this book he describes his long-term Experiments with his own Mathematical Program, Mathematica. Wolfram defined a System to Generate and Experiment with Cellular Automata.

Wolfram believes that the Science of the Future will be based on Trial and Error using Theory Generators (Genetic Algorithms). The big problem with Genetic Algorithms is that they generate patterns we are unable to understand. We cannot find Metaphors and Words to describe the Patterns in our Language System.

This problem was addressed by the famous Mathematician Leibniz, who called it the Principle of Sufficient Reason.

Leibniz believed that our Universe was based on Simple Understandable Rules that are capable of generating Highly Complex Systems.

It is now very clear that the Self-Referential Structures, the Fractals of Mandelbrot, are the solution to this problem.

The Scientific Quest at this moment is to find the most simple Fractal Structure that is capable of explaining the Complexity of our Universe. It looks like this fractal has a lot to do with the Number 3.

It is sometimes impossible to define a structured process to recognize (to prove) a Grammar. Therefore it is impossible to detect the rules of Mother Nature by a Structured process. The rules of Mother Nature are detected by Chance, just like Goethe discovered the Urpflanze. Science looks a lot like Mother Nature Herself.

When a Grammar is detected it is possible to use this grammar as a Foundation to find new solutions for more specific problems (Specialization, Add More Rules) or when the system is not able to respond to its environment it has to Change the Rules (a Paradigm Shift). All the time the result of the System has to be compared with Mother Nature herself (Recognizing, Testing, Verification).

Turing proved that if Nature is equivalent to a Turing machine we, as parts of this machine, can not generate a complete description of its functioning.

In other words, a Turing machine, A Scientific Theory, can be a very useful tool to help humans design another, improved Turing Machine, A new Theory, but it is not capable of doing so on its own – A Scientific Theory, A System, can not answer Questions about Itself.

The solution to this problem is to Cooperate. Two or more (Human) Machines, a Group, are able to Reflect on the Other. When a new solution is found, the members of the Group have to Adapt to the new solution to move on to a New Level of Understanding and drop their own Egoistic Theory.

Each of the individuals has to alter its Own Self and Adapt it to that of the Group. It is proved that Bacteria use this Strategy and are therefore unbeatable by our tactics to destroy them.

Turing proved that Intelligence requires Learning, which in turn requires the Human Machine to have sufficient Flexibility, including Self Alteration capabilities. It is further implied that the (Human) Machine should have the Freedom to make Mistakes.

Perfect Human Machines will never Detect the Patterns of Nature because they get Stuck in their Own Theory of Life.

The Patterns of Turing

The Only ONE who is able to Reflect on the Morphogenesis of Mother Nature is the Creator of the Creator of Mother Nature, The Void.

Gregory Chaitin used the theory of Chomsky and proved that we will never be able to understand  The Void.

The Void is beyond our Limits of Reason. Therefore the first step in Creation will always be  a Mystery.

At the end of his life (he committed suicide) Alan Turing started to investigate Morphology.

As you can see, the Patterns of Alan Turing are created by combining many Triangles. The Triangle is called the Trinity in Ancient Sciences.

According to the Tao Te Ching, “The Tao produced One; One produced Two; Two produced Three; Three produced All things”, which means that the Trinity is the Basic Fractal Pattern of the Universe.

In modern Science this pattern is called the Bronze Mean.
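
As a small added illustration: the Bronze Mean is the limit of the ratio of consecutive terms of the Fibonacci-like recurrence x(n) = 3·x(n-1) + x(n-2), and its closed form is (3 + √13)/2 ≈ 3.30278.

```python
import math

# Illustrative sketch: the Bronze Mean as the limit of a Fibonacci-like recurrence,
# analogous to the Golden Mean for the ordinary Fibonacci numbers.
a, b = 1, 3
for _ in range(20):
    a, b = b, 3 * b + a

print(b / a)                      # converges to ~3.302775637...
print((3 + math.sqrt(13)) / 2)    # the closed form of the Bronze Mean
```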

It generates so called Quasi Crystals and the Famous Penrose Tilings.

The Bronze Mean is represented by the Ancient Structure of the Sri Yantra (“Divine Machine”).

Goethe was not the real discoverer of Morphology. The knowledge was already there 8000 years ago.

LINKS

About the Observer and Second Order Cybernetics

A PDF About the Morphology of Music.

The origins of life and context-dependent languages

A Website About the Morphology of Botanic Systems

A Website About the Morphology of Architectural Systems

A Plant Simulator using Morphology

About Intelligent Design

The Mathematical Proof of Gödel of the Existence of God

About Bacteria 

About the Bronze Mean

About the Trinity

About (Software) Quality

Tuesday, January 20th, 2009

When I attended the University of Leiden, Software Development was in its infancy. In 1969 just a few people were programming, for the simple reason that the number of computers was very small. It took a lot of time (many weeks), intelligence and perseverance to create a small working software-program.

At that time the effect of a software-program on other people was very low. Software-programs were used by the programmers themselves to solve their own problems.

When User-Interfaces, Databases and Telecommunication appeared it became possible to create software for Many Non-Programmers, Users. The software-systems got bigger and programmers had to cooperate with other programmers.

When the step from One-to-Many was made in the process of software-development and exploitation, Software Quality became a very important issue.

What is Software?

A Software-program is a sequence of sentences written in a computer-language. When you speak and write you use a natural language. When you write a computer program you use an artificial, designed, language.

The difference between natural and artificial languages is small. Esperanto is a constructed language that became a natural language. Perhaps all the natural languages were constructed in the past.

Software programs are very detailed prescriptions of something a computer has to do. The specifications of a software-program are written in a natural language (Pseudo-Code, Use-Case).

To create Software we have to transform Natural Language into Structured Language. The big problem is that Natural Language is a Rich Language. It not only contains Structural components but also Emotional (Values), Imaginative ((Visual) Metaphors) and Sensual Components (Facts). The most expressive human language is Speech.

In this case the Tonality of the Voice and the Body Language also contains a lot of information about the Sender. When you want to create Software you have to remove the Emotional, Imaginative and Sensual components out of Human Language.

What is Quality?

According to the International Standards Organization (ISO), Quality is “the degree to which a set of inherent characteristics fulfills requirements“. According to the ISO the quality of a software-program is the degree in which the software-coding is in agreement with its specification.

Because a specification is written in natural language, Quality has to do with the precision of the transformation of one language (the natural) to another language (the constructed).

According to Six Sigma, Quality is measured by the number of defects in the implementation of the specification of the software.

Another view on Quality is called Fitness for Use. In this case Quality is “what the Customer wants” or “what the Customer is willing to pay for“.

If you look carefully at all the Views on Quality, the Four World Views of Will McWhinney appear.

Six Sigma is the Sensory View on Quality (Facts), ISO is the Unity View on Software (Procedures, Laws, Rules) and Fitness for Use is the Social View on Quality (Stakeholders).

The last worldview of McWhinney, the Mythic, the View of the Artist, is represented by the Aesthetical view on Quality. Something is of high quality when it is Beautiful.

The Four Perspectives of McWhinney look at something we name “Quality”. We can specify the concept “Quality” by combining the Four definitions or we can try to find out what is behind “the Four Views on Quality”.

The Architect Christopher Alexander wrote many books about Quality. Interestingly enough, he named the “Quality” behind the Four Perspectives the “Quality without a Name“. Later in his life he named this Quality the “Force of Life“.

What Happened?

In the beginning of software-development the Artists, the Mythics, created software. Creating high quality software was a craft and a real challenge. To create, a programmer had to overcome a high resistance.

The “creative” programmers solved many problems and shared their solutions. Software-development changed from an Art into a Practice. The Many Different Practices were Standardized and United into one Method. The Method made it possible for many people to “learn the trade of programming”.

When an Art turns into a Method, the Aesthetic, the Quality that Has No Name, Life Itself, disappears. The Controller, Quality Management (ISO), has tried to solve this problem and has given many names to the Quality without a Name. Many Aspects of Software Quality are now standardized and programmed into software.

But…

It is impossible to Program the Social Emotions and the Mythic Imagination.

So…………

Software developers don’t use Methods and Standards because deep within they are Artists. The big difference is that they don’t solve their own problems anymore. They solve the problems of the users that are interviewed by the designers.

And…..

The Users don’t want the Designers to tell the Programmers to create something they want to create themselves (the Not-Invented Here Syndrome). They also don’t know what the programmers, instructed by the designers will create, so they wait until the programmers are finished and tell them that they want something else.

What Went Wrong?

The first Computer, the Analytical Engine of Charles Babbage, contained four parts called the Mill (the Central Processing Unit, the Operating System), the Store (the database), the Reader, and the Printer. The Analytical Engine and his successors were based on the Concept of the Factory.

In a Factory the Users, the Workers, the Slaves, have to do what the Masters, the Programmers, tell them to do. The Scientists modeled successful programmers but they forgot to model one thing, the Context. At the time the old-fashioned programming artists were active, software was made to support the programmer himself. The programmer was the User of his own software-program.

At this moment the Factory is an “old-fashioned” concept. In the Fifties the Slaves started to transform into Individuals but the Factory-Computer and the Practices of the Old Fashioned Programmers were not abandoned.

To cope with the rising power of the Individual the old methods were adapted, but the old paradigm of the Slave was not removed. The Slave became a Stakeholder but his main role is to act Emotionally. He has the power “to Like or to Dislike” or “to Buy or not to Buy”.

The big Mistake was to believe that it is possible to program Individuals.

What To Do?

The Four Worldviews of Quality Move Around Life Itself.

According to Mikhail Bakhtin Life Itself is destroyed by the Process of Coding (“A code is a deliberately established, killed context“).

When you want to make software you have to keep Life Alive.

The Paradigm-Shift you have to make is not very difficult. Individual Programmers want to make Software for Themselves so Individual Users want to Do the Same!

At this moment the Computer is not a tool to manage a factory anymore. It has become a Personal tool.

It is not very difficult to give individuals that Play the Role of Employee tools to Solve their own Problems.

When they have solved their own problems they will Share the Solutions with other users.

If this happens their activities will change from an Individual Act of Creation into a Shared Practice.

If People Share their Problems and Solutions, their Joy and their Sorrow, they Experience the Spirit,  the Force of Life, the Quality that has no Name.

LINKS

How to Analyze a Context

About the Human Measure

About the Autistic Computer

About the Worldviews of Will McWhinney

About Christopher Alexander

About Computer Languages

About Mikhail Bakhtin

About Ontologies

About Model Driven Software Development

About the Illusion of Cooperation

About the Analytic Engine of Charles Babbage

About the Analytic Engine of Thomas Fowler

About Human Scale Tools

About Old-Fashioned Programming

Essential Components of a Healthy Diet

Sunday, January 11th, 2009

With so much conflicting advice in the media, it can be difficult to determine the best way to eat healthily and stay in shape. For example, while some sources of information say that eliminating sugar and fat completely is the best way to stay fit, others suggest that the total amount of calories consumed is all that matters. To help clear up the confusion, the U.S. Department of Agriculture and the U.S. Department of Health and Human Services collaborated to create dietary guidelines for all people in the United States.

Tips for Proper Nutrition

Based on the most recent edition of the HHS and USDA’s Dietary Guidelines for Americans, all people should:

  • Limit the amount of refined grains, added sugars, cholesterol, trans fats, saturated fats, and sodium in their diet.
  • Consume more seafood, low-fat dairy, fat-free products, whole grains, fruits, and vegetables.
  • Eat the appropriate amount of calories and engage in regular physical activity.

The Food Pyramid

According to the USDA, Americans need to eat a variety of foods in specific amounts in order to optimize their health. These foods include fruits and vegetables, grains, protein, and dairy products.

Fruits and Vegetables

A diet rich in fruits and vegetables reduces your risk of heart attack, stroke, and certain types of cancers. The USDA suggests filling half your plate with fruits and vegetables during every meal.

Grains

Eating whole grains helps with weight management, reduces constipation, and may reduce the risk of heart disease. The USDA recommends making at least half of your grains whole grains.

Protein

Protein is important for good health, but it should be consumed in limited quantities. The USDA recommends consuming between 2 and 6 ounces of protein each day, depending on your age and gender. It’s also important to vary the types of meat you consume.

Dairy

Consuming dairy products reduces your risk of developing type 2 diabetes, cardiovascular disease, and osteoporosis. However, many dairy products are high in fat, which can cause weight gain and other problems. For maximum benefit, the USDA recommends switching to low-fat dairy products whenever possible.

In addition to eating the proper amount of food from each of these categories on a daily basis, the dietary guidelines also recommend eating as many whole, unprocessed foods as possible in order to minimize exposure to additives.

About Model Driven Software Development

Saturday, August 23rd, 2008

In the beginning of Software Development Programmers just Programmed. They did not use any method. The program was punched on a stack of cards and the computer executed the code. It took many days to get a small program running.

In the early 1980s text editors were introduced. In this stage somebody else called an Analyst wrote down Specifications and the Programmer transformed the specifications into a Program.

The Programmers and Analysts had to fill in forms with a pencil. The forms were typed by a central department and returned to them for correction. Much later programmers and analysts were able to use their own text editor.

The Specifications and the Programs were represented by many competing diagramming techniques like DFD (Data Flow Diagrams), JSP (Jackson), ERD (Entity Relationship Diagrams, Bachman), NIAM, Yourdon, Nassi-Shneiderman and ISAC (Langefors). The Programmers and Analysts used Pencils and Plastic Frames to draw the Diagrams.

The data about the programs and the databases were stored in a Dictionary. A Dictionary is a System to store and retrieve Relationships. The Dictionary Software generated Copybooks that were included into the (COBOL) Programs. One of the most important Dictionary Packages was called Datamanager.

Datamanager used a so called Inverted File Database Management System. The Inverted File or Inverted Index is optimized to store and find Relationships.
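
The idea of an Inverted Index for relationships can be sketched in a few lines. The example below is an added illustration (the element names are invented), not a description of Datamanager itself.

```python
# Illustrative sketch of an inverted index for relationships, in the spirit of a
# data dictionary. The element names below are invented for the example.
from collections import defaultdict

uses = defaultdict(set)      # element -> elements it uses
used_by = defaultdict(set)   # the inverted side: element -> elements that use it

def relate(user: str, used: str) -> None:
    uses[user].add(used)
    used_by[used].add(user)  # maintaining the inverse makes "where used?" cheap

relate("PAYROLL-PROGRAM", "EMPLOYEE-COPYBOOK")
relate("HR-PROGRAM", "EMPLOYEE-COPYBOOK")
relate("PAYROLL-PROGRAM", "SALARY-TABLE")

# The classic dictionary question: which programs are impacted if we change a copybook?
print(sorted(used_by["EMPLOYEE-COPYBOOK"]))   # ['HR-PROGRAM', 'PAYROLL-PROGRAM']
```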

At that time there were many types of Database Management Systems (Hierarchical, Network, Relational and Object). They were optimized for a special type of storing and retrieving data.

Between 1980 and 1990 the competing Methods and Diagram Techniques were fused and expanded to cover many domains of IT. The Dictionary (Datamanager) was also expanded to contain many more Relationships.

Around 1990 the process of integration was finally accomplished. At that time Information Engineering (IE) of James Martin was the most comprehensive Methodology available on the Market.

Texas Instruments implemented IE on a mainframe computer and called it IEF. IE was also implemented in IEW (Knowledgeware) and Excelerator (Index Technologies). Computer-Aided Software Engineering (CASE) was born.

You have to understand that Graphical User Interfaces and PCs were at that time in their infancy. It was impossible to manipulate diagrams. We used mainframes and Dumb User Interfaces (Forms) to define the models, but we got a long way with them.

The big innovation came when IBM announced AD/Cycle in 1990. They created an Alliance with Bachman Information Systems, Index Technology Corporation, and Knowledgeware to create the most advanced Model Driven Software Development Tool ever made.

The kernel of AD/Cycle would be a completely new Repository based on the Relational DBMS of IBM, called DB2.

At that time ABN AMRO was in a merger and we had the idea that an alliance with IBM would help us to create a new innovative development environment. I was involved in everything IBM was doing in its labs to create AD/Cycle.

The project failed for one simple reason. The Repository of IBM was never finished. The main reason was the Complexity of the Meta-Model of the Repository. A Relational DBMS is simply not the way to implement a Datadictionary (now called a Repository).

Another reason the project failed was the rise of Object Oriented Programming and of course the huge interference of Microsoft.

To save the project we had to find another repository and used the original Repository of Knowledgeware called Rochade. Rochade is still on the market. It is still a very powerful tool.

The introduction of the PC and the activities of Microsoft generated a disaster in the software development process. We had to go back to square one and start all over again.

The Destructive Activities of Microsoft began by selling isolated disconnected PC’s to Consumers (Employees are also Consumers!).

At that time we did not realize this would cause a major drawback. We even supported them by giving all the employees of the Bank a PC, to Play With.

What we did not know was that the Employees started to Develop software on their own to beat the backlog of the central development organization. Suddenly many illegal (Basic) programs and databases appeared and we had to find a way to avoid Total Chaos.

The Solution to this problem was to introduce End User Programming Tools (4GL’s) like AS and Focus.

To provide the End Users with Corporate Data we had to develop Datawarehouses.

We were forced to create different IT Environments to shield the Primary, Accountable, Data of the Bank.

We had to develop a New Theory and Approach to support a completely new field of IT now called Business Intelligence.

We had to find a way to solve the battlefield of IBM (OS/2) and Microsoft (Windows) on the level of the PC Operating System.

We had to find a way to connect the PC to the other Computer Systems now called Servers. The concept of Client/Server was developed.

We had to find a way to distribute the Right Data on the Right Computer.

What Happened?

We were Distracted for about TWENTY YEARS, and all we were doing was Reacting to Technological Innovations that were Immature. We did not know this at the time.

The Big Innovation did not happen on the Level of the Method but on the Level of the Infrastructure. The Infrastructure moved from the Expert Level to the Corporate Level to the Consumer Level and finally to World Level. At this moment the MainFrame is back but the Mainframe is distributed over many Computers connected by a Broadband Network. We are finally back at the Beginning. The Infrastructure shows itself as a Cloud.

In every phase of the Expansion of the Infrastructure new Programming Languages were constructed to support the transformation from One level to the Other level. Every Time the Model had to be Mapped to another Target System.

The IBM Repository failed because the Meta Model of the Repository was much too complex. The Complexity of the Model was not caused by the Logical Part (the Technology Independent Layer) but by the Technical Part of the Model. It was simply impossible to Map the What on the How.

The only way to solve this problem is to make the What and How the Same.

This is what happened with Object Oriented Programming (OO). Object-Oriented programming may be seen as a collection of Cooperating Objects. Each object is capable of receiving messages, processing data, and sending messages to other objects. Each object can be viewed as an independent little machine with a distinct role or responsibility.

The rise of OO started in the early 1990s. At this moment it is the major programming paradigm. OO fits very well with the major paradigm about our Reality. That is why it can be used to Design (What) and to Program (How). OO comes with its own modeling language, UML.
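
A minimal added illustration of this view of OO, with invented classes: each object receives messages, processes its own data and sends messages to other objects.

```python
# Illustrative sketch: objects as little machines that receive messages,
# process data and send messages to other objects. The classes are invented.

class Account:
    def __init__(self, balance: float):
        self.balance = balance

    def receive(self, message: str, amount: float) -> None:
        # The object decides for itself how to react to a message.
        if message == "deposit":
            self.balance += amount
        elif message == "withdraw" and amount <= self.balance:
            self.balance -= amount

class Cashier:
    def __init__(self, account: Account):
        self.account = account

    def transfer(self, other: Account, amount: float) -> None:
        # One object cooperates by sending messages to two others.
        self.account.receive("withdraw", amount)
        other.receive("deposit", amount)

savings, checking = Account(100.0), Account(0.0)
Cashier(savings).transfer(checking, 40.0)
print(savings.balance, checking.balance)   # 60.0 40.0
```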

What is Wrong with OO?

The first and most important problem is the problem of the Different Perspectives. Although OO fits with the Western Model of Reality, We (the Humans) perceive Reality in our own Way. Every Designer experiences another Reality and it is almost impossible to Unite all the Perspectives.

To Solve this Problem we All have to Agree on a Shared Model of Reality. This is mainly accomplished by defining Standard Models of Reality. The problem with Standard Models of Reality is that they Enforce a Certain Point of View.

Enforcing one Point of View to many People generates Opposition and Opposition starts a Process of Adaptation. The Major Effect is a very Complex Implementation of an Inconsistent Model of Reality. The What and the How are not the Same anymore.

OO is creating the very Problem it wants to Solve.

What to Do?

The long process of integration of the Methods until the 1990s showed that there is one major issue that has to be resolved when you want to create software.

This Issue is called Terminology. Its main issue is to Define What We are Talking About. If we don’t agree about what we are talking about (The Universe of Discourse) we will always be talking about what we are talking about. We will create Circular Dialogues.

Eugen Wüster was the creator of the Science of Terminology. His activities were taken over by UNESCO, which founded a special Institute to coordinate Terminology in the World, called Infoterm.

There are four distinct views on Terminology:

  • the Psychological View

Concepts are Human Observations. They have to be based on Facts.

  • the Linguistic View

Concepts are the meanings of general terms. They have to be Defined.

  • the Epistemological View

Concepts are Units of Knowledge. They have to be True.

  • the Ontological View

Concepts are abstractions of kinds, attributes or properties of general invariant patterns on the side of entities in the world. They have to be Related.

Sadly, elements of all four views are found mixed up together in almost all terminology-focused work in Informatics today.

We are Confusing even the Science to avoid Confusion.

LINKS

About the History of Terms

About CASE-Tools

About the History of Terminology

 

 

Why Software Layers always create new Software Layers

Wednesday, March 26th, 2008

The IT-Industry has evolved over nearly 50 years. In that timeframe it has become one of the most influential industries in the world. Everybody is completely dependent on the computer and its software.

The IT-Industry has gone through various technology waves. The waves generated integration problems that were solved by the construction of abstraction layers. The layers not only solved problems. They also created new problems that were solved by other layers. The effect of all intertwining layers is an almost incomprehensible, not manageable, software-complex.

The main reason behind this development is the architecture of the general-purpose computer. It was developed to control and not to collaborate.

Charles Babbage designed the first computer, the Analytical Engine, in the 1830s. Babbage wanted to automate the calculation of mathematical tables. His engine consisted of four parts called the Mill (the Central Processing Unit, the Operating System), the Store (the database), the Reader, and the Printer. The machine was to be steam-driven and run by one attendant. The Reader used punched cards.

Babbage invented a programming language and a compiler to translate symbols into numbers. He worked together with the first programmer, Lady Ada Lovelace. Babbage’s project stopped because nobody wanted to finance him anymore.

It was not until 1954 that a real (business-) market for computers began to emerge by the creation of the IBM 650. The machines of the early 1950s were not much more capable than Charles Babbage’s Analytical Engine of the 1830s.

Around 1964 IBM gave birth to the general-purpose computer, the mainframe, in its 360-architecture (360 means all-round). The 360/370-architecture is one of the most durable artifacts of the computer age. It was so successful that it almost created a monopoly for IBM. Just one company, Microsoft, succeeded in beating IBM, by creating the general-purpose computer for the consumer (the PC). Microsoft copied (parts of) the OS/2 operating system of IBM.

The current technical infrastructure looks a lot like the old fashioned 360/370-architecture, but the processors are now located in many places. This was made possible by the sharp increase in bandwidth and the network-architecture of the Internet.

Programming a computer in machine code is very difficult. To hide the complexity, a higher level of abstraction (a programming language) was created that shielded the complexity of the lower layer (the machine code). A compiler translated the program back to machine code. Three languages (Fortran, Algol and COBOL) were constructed. They covered the major problem areas (Industry, Science and Banking) of that time.

When the problem-domains interfered, companies were confronted with integration problems. IBM tried to unify all the major programming-languages (COBOL, Algol and Fortran) by introducing a new standard language, PL1. This approach failed. Companies did not want to convert all their existing programs to the new standard and programmers got accustomed to a language. They did not want to lose the experience they had acquired.

Integration by standardizing on one language has been tried many times (Java, C-Sharp). It will always fail for the same reasons. All the efforts to unify produce the opposite effect, an enormous diversity of languages, a Tower of Babel.

To cope with this problem a new abstraction layer was invented. The processes and data-structures of a company were analyzed and stored in a repository (an abstraction of a database). The program-generator made it possible to generate programs in all the major languages.
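
A minimal added sketch of this idea, with invented names: the repository holds a technology-independent model, and a program-generator maps that model onto a target language.

```python
# Illustrative sketch of model-driven generation: a tiny "repository" entry is
# mapped onto target-language code. The entity and field names are invented.

repository = {
    "Customer": [("name", "str"), ("balance", "float")],
    "Invoice": [("number", "int"), ("amount", "float")],
}

def generate_python(entity: str) -> str:
    """Generate a Python class from the technology-independent model."""
    fields = repository[entity]
    lines = [f"class {entity}:"]
    args = ", ".join(f"{n}: {t}" for n, t in fields)
    lines.append(f"    def __init__(self, {args}):")
    lines += [f"        self.{n} = {n}" for n, _ in fields]
    return "\n".join(lines)

print(generate_python("Customer"))
```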

It was not possible to re-engineer all the legacy-systems to this abstraction-level. To solve this problem a compensating integration-layer, Enterprise Application Integration (EAI), was designed.

The PC democratized IT. Millions of consumers bought their own PC and started to develop applications using the tools available. They were not capable of connecting their PCs to the mainframe and acquiring the data they needed out of the central databases of the company.

New integration layers (Client-Server Computing and Data-Warehouses) were added.

Employees connected their personal PC to the Internet and found out that they could communicate and share software with friends and colleagues all over the world. To prohibit the entrance of unwanted intruders, companies shielded their private environment by the implementation of firewalls. Employees were unable to connect their personal environment with their corporate environment.

A new integration problem, security, became visible and had to be solved.

It looks like every solution of an integration problem creates a new integration problem in the future.

The process of creating bridges to connect disconnected layers of software goes on and on. The big problem is that the bridges were not created from a long-term perspective. They were created bottom-up, to solve an urgent problem.

IT-technology shows all the stages of a growing child. At this moment, companies have to manage and to connect many highly intermingled layers related to almost every step in the maturing process of the computer and its software.

Nobody understands the functionality of the whole and can predict the combined behavior of all the different parts. The effort to maintain and change a complex software-infrastructure is increasing exponentially.

The IT Industry has changed its tools and infrastructure so often that the software-developer had to become an inventor.

He is constantly exploring new technical possibilities, unable to stabilize his craft. When a developer is used to a tool he does not want to replace it with another. Most developers do not get the time to gain experience with the new tools and technologies. They have to work in high-priority projects. Often the skills that are needed to make use of the new developments are hired from outside.

The effect is that the internal developers are focused on maintaining the installed base and get further behind. In the end, the only solution that is left is to outsource the IT-department creating communication problems.

After more than 40 years of software-development, the complexity of the current IT-environment has become overwhelming. The related management costs are beginning to consume any productivity gain that they may be achieving from new technologies.

It is almost impossible to use new technology because 70 to 90% of the IT budget is spent on keeping existing systems running. If new functionality is developed, only 30% of the projects are successful.

If the complexity to develop software is not reduced, it will take 200 million highly specialized workers to support the billion people and businesses that will be connected via the Internet.

In the manufacturing industry, the principles of generalization and specialization are visible. Collaboration makes it possible to create flexible standards and a general-purpose infrastructure to support the standards.

When the infrastructure is established, competition and specialization starts. Cars use a standardized essential infrastructure that makes it possible to use standardized components from different vendors.

Car vendors are not competing on the level of the essential infrastructure. The big problem is that IT-Industry is still fighting on the level of the essential infrastructure, blocking specialization.

To keep their market share, the vendors have to keep their software within the abstraction framework (the general-purpose architecture) they are selling and controlling.

A new collaborative IT-infrastructure is arising. The new infrastructure makes it possible to specialize and simplify programs (now called services). Specialized messages (comparable to the components in the car industry), transported over the Internet, connect the services. This approach makes it much easier to change the connections between the services.

The World Wide Web Consortium (W3C), founded in October 1994, is leading the development of this new collaborative infrastructure. W3C has a commitment to look after the interest of the community instead of business. The influence of W3C is remarkable. The big competitive IT-companies in the market were more or less forced to use the standards created by the consortium. They were unable to create their own interpretation because the standards are produced as open source software.

The basis of the new collaborative foundation is XML (eXtensible Markup Language). XML is a flexible way to create “self-describing data” and to share both the format (the syntax) and the data on the World Wide Web. XML describes the syntax of information.
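
A small added example of “self-describing data” (the tag names are invented, it is not taken from a W3C specification): the element names travel together with the values, so the receiver does not need a fixed record layout.

```python
# Illustrative sketch: building a small self-describing XML document with the
# Python standard library. The tag names are invented for the example.
import xml.etree.ElementTree as ET

order = ET.Element("order", attrib={"currency": "EUR"})
ET.SubElement(order, "customer").text = "ACME BV"
item = ET.SubElement(order, "item", attrib={"quantity": "3"})
ET.SubElement(item, "description").text = "Sensor module"
ET.SubElement(item, "unitprice").text = "12.50"

# The receiver can parse the document without prior knowledge of a fixed record layout.
print(ET.tostring(order, encoding="unicode"))
```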

XML has enabled a new general-purpose technology-concept, called Web-Services. The concept is comparable to the use of containers in intermodal shipping. A container enables the transport of a diversity of goods (data, programs, content) from one point to another point. At the destination, the container can be opened. The receiver can rearrange the goods and send them to another place. He can also put the goods in his warehouse and add value by assembling a new product. When the product is ready it can be sent in a container to other assembly lines or to retailers who sell the product to consumers.

Web-Services facilitate the flow of complex data-structures (services, data, content) through the Internet. Services can rearrange data-structures, add value by combining them with other data-structures, and send the result to other services.

All kinds of specialized data-structures are defined that are meant to let specialized services act on them.

An example is taxation (XML TC). XML TC (a part of the Oasis standards organization) focuses on the development of a common vocabulary that will allow participants to unambiguously identify the tax related information exchanged within a particular business context. The benefits envisioned will include dramatic reductions in development of jurisdictionally specific applications, interchange standards for software vendors, and tax agencies alike. In addition, tax-paying constituents will benefit from increased services from tax agencies. Service providers will benefit due to more flexible interchange formats and reduced development efforts. Lastly, CRM, payroll, financial and other system developers will enjoy reduced development costs and schedules when integrating their systems with tax reporting and compliance systems.

Web-Services are the next shockwave that is bringing the IT-community into a state of fear and attraction. Their promise is lower development cost, and a much simpler architecture. Their threat is that the competition will make a better use of all the new possibilities.

The same pattern emerges. Their installed base of software slows most of the companies down. They will react by first creating an isolated software-environment and will have big problems in the future to connect the old part with the new part.

Web-Services will generate a worldwide marketplace for services. They are now a threat to all the current vendors of big software-packages. In essence, they have to rewrite all their legacy-software and make a split in generic components (most of them will be available for free) and essential services users really want to pay for.

Big software-vendors will transform themselves into specialized marketplaces (service-portals) where users can find and make use of high-quality services. Other vendors will create advanced routing-centers where messages will be translated and sent to the appropriate processor.

It will be difficult for small service-providers to get the attention and the trust of companies and consumers to make use of their services. They will join collaborative networks that are able to promote and secure their business (the Open Source Movement). It remains to be seen whether they will survive in a still competitive environment where the big giants have enormous power to influence the market and a lot of money to create new services.

If the big giants succeed, history will repeat itself. The new emerging software-ecology will slowly lose its diversity.

Web-services are an example of the principles of mass-customization and customer innovation. All the software-vendors are restructuring their big chunks of software into components that can be assembled to create a system.

Small competitors and even customers will also create components. In due time the number of possible combinations of components that are able to create the same functionality will surpass the complexity a human (or a collective of human beings) can handle.

LINKS

About the Human Measure

How the Programmer stopped the Dialogue

How to Destroy Your Company by Implementing Packages

About Smart Computing

About Software Quality

About Meta-Models

About Software Maintenance

About Model Driven Software Development

About Programming Conversations and Conversations about Programming

About Mash-Ups

Thursday, December 20th, 2007

Another new hype-term is the Mash-up. A Mash-up is a new service that combines functionality or content from existing sources.

In the “old” days of programming we called a Mash-up a Program (now a Service) and the parts of the Program Modules. Modules were reused by other Programs. We developed and acquired libraries that contained many useful modules.

It took a very long time to stabilize the software development environment. In the very old days programmers were just “programming along”. They did not document their software and used many features of the operating system that interfered with other programs. These very old software programs created the Software Legacy Problem.

To stabilize the software development environment we had to introduce many Management Systems, like Testing and Configuration Management, to take care of Software Quality.

The difference with today is that the software libraries are no longer internal libraries. They are situated on the Internet.

Another interesting issue that has to be resolved is Security. Mash-ups are a heaven for hackers and other very clever criminals.

When I look at the Mash-up I really don’t know how “they” will solve all these issues. When everybody is allowed to program and connect everything with everything, a Mash-up will certainly turn into a Mess-up. Many years from now a new Software Legacy Problem will become visible.

There is one simple way to solve this problem. Somebody in the Internet Community has to take care of it. It has to be an “Independent Librarian” that controls the libraries and issues a Quality Stamp to the software (and the content) that is free to reuse. I don’t think anybody will do this.

Personally I think the Mash-up is a very intelligent trick of big companies like Microsoft, Google and Yahoo to take over control of software development. In the end they will control all the libraries and everybody will have to connect to them. Perhaps we will even have to pay to use them or (worse) link to the advertisements they will certainly sell.

The Lost Construct in IT: The Self-Referencing Loop

Thursday, June 28th, 2007

Edsger Wybe Dijkstra (1930-2002) was a Dutch computer scientist. He received the 1972 Turing Award for fundamental contributions in the area of programming languages.

He was known for his low opinion of the GOTO-statement in computer programming culminating in the 1968 article “A Case against the GOTO Statement” (EWD215), regarded as a major step towards the widespread deprecation of the GOTO statement and its effective replacement by structured control constructs such as the DO-WHILE-LOOP. This methodology was also called Structured Programming.
