At Ontonix we have developed a comprehensive complexity metric and established a conceptual platform for practical and effective complexity management. The metric takes into account all the ingredients necessary for a sound and comprehensive complexity measure, namely structure, entropy and data granularity, or coarse-graining. The metric allows one to relate complexity to fragility and shows how critical complexity thresholds may be established for any given system. The methodology is incorporated into OntoSpace™, a first-of-its-kind complexity management software tool developed by Ontonix.

There is no widely accepted definition of complexity. Many of the popular definitions refer to complexity as a “twilight zone between chaos and order”. It is often claimed that in this twilight zone Nature is most prolific and that only this region can produce and sustain life. Clearly, such definitions do not lend themselves to any practical use since they don’t provide any measure or quantity. We define complexity as the amount of structured information. The evolution of living organisms, societies or economies constantly tends towards states of higher complexity precisely because an increase in functionality (fitness, or ability to process information) allows these systems to “achieve more”, to better face the uncertainties of their respective environments, to be more robust and fit – in other words, to survive better.

Complexity is not a phenomenon and it is not emergence, as is often stated in popular-science books. It is a quantity that can be measured, like mass, energy or frequency. In order to increase our understanding of complexity and of the behaviour of complex systems, and in order to favour the development of the science of complexity, it is paramount to establish rigorous definitions and metrics of complexity. The common misconception is that complexity is often equated with emergence. The emergence of new structures and forms is the result of re-combination and spontaneous self-organization of simpler systems into higher-order hierarchies. Amino acids combine to form proteins, companies join to develop markets, people form societies, etc. This is not complexity.

A fundamental property of the complexity metric developed by Ontonix is that it has bounds – a lower and an upper bound. This means that it cannot assume infinite values, either negative or positive. If it could, it wouldn’t be a good metric.

All good metrics satisfy the laws of physics

For example, temperature cannot go to infinity, and the density of a material cannot be negative. The same applies to complexity. For any given system it can vary between a lower and an upper bound. Both values are positive (or equal to zero). These bounds have a very nice physical meaning. Close to the lower bound, structure dominates a system’s dynamics – such systems are more predictable and easier to fathom. In proximity of the upper bound – also called critical complexity – entropy dominates and structure is weak. It is never a good idea to function close to one’s critical complexity.
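As a minimal illustration (in Python, with hypothetical names – the bounds themselves would come from a tool such as OntoSpace™), one can express how close a system operates to its critical complexity as a normalized margin:

```python
# Illustrative sketch only: the values C, C_min and C_crit are assumed to be
# produced elsewhere (e.g. by a complexity engine such as OntoSpace); the
# names below are hypothetical.

def complexity_margin(c: float, c_min: float, c_crit: float) -> float:
    """Return how far a system sits between its lower complexity bound and
    its critical complexity, as a fraction in [0, 1]."""
    if not (0.0 <= c_min < c_crit):
        raise ValueError("expected 0 <= C_min < C_crit")
    if not (c_min <= c <= c_crit):
        raise ValueError("C must lie between C_min and C_crit")
    return (c - c_min) / (c_crit - c_min)

# A system with C = 8.2, C_min = 2.0 and C_crit = 10.0 operates at ~78% of
# the way towards its critical complexity -- uncomfortably close.
print(f"{complexity_margin(8.2, 2.0, 10.0):.2f}")
```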

A system cannot be more complex than its critical complexity

Unless, that is, one adds more structure. For example, you can only fit so many employees in an office building unless you add new floors. If you don’t add new floors but keep adding employees, the company will suffocate itself to a stop. This is the key reason one should stay away from critical complexity:

In proximity of critical complexity systems become very fragile

This means they can break suddenly, in addition to being inefficient and chaotic. But the really bad news is that at high levels of complexity systems can often behave in a non-intuitive manner, or suddenly jump from one mode of behaviour to another.

The plots in the figure below illustrate examples of closed systems (i.e. systems in which the Second Law of Thermodynamics holds) in which we measured how complexity changes versus time. We can initially observe how the increase of entropy actually increases complexity – entropy is not necessarily always adverse, as it can help to increase fitness – but at a certain point complexity reaches a peak beyond which even small increases of entropy inexorably cause the decay of structure. The fact that entropy initially helps increase complexity confirms that uncertainty is necessary to create novelty. Without uncertainty there is no evolution.

In our metric, before the critical complexity threshold is reached, an increase in entropy does generally lead to an increase in complexity, although minor local fluctuations of complexity have been observed in numerous experiments. After structure breakdown commences, an increase in entropy nearly always leads to a loss of complexity, but at times the system may recover structure locally. However, beyond the critical point, death is inevitable, regardless of the dimensionality or initial density of the system.
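The following toy sketch (in Python) reproduces this qualitative behaviour – complexity rising with entropy, peaking, then decaying as structure breaks down. The specific functional forms are illustrative assumptions, not the Ontonix model:

```python
# A toy illustration only -- not the Ontonix model. It mimics the qualitative
# behaviour described above: as entropy grows in a closed system, complexity
# first rises, peaks, then decays as structure breaks down.
import numpy as np

entropy = np.linspace(0.0, 1.0, 101)   # normalized entropy, growing over time
structure = 1.0 - entropy**2           # toy assumption: structure erodes as entropy grows
complexity = structure * entropy       # toy proxy for "structured information"

peak = entropy[np.argmax(complexity)]
print(f"toy complexity peaks at normalized entropy ~ {peak:.2f}")  # ~0.58
```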

Not everything that counts can be measured, and not everything that can be measured counts. This variation on a quote attributed to Einstein is often used as an excuse to relegate things that could in fact be quantified to the realm of the intangible. The act of measurement is not just laborious. It is also risky. The risk stemming from a measurement lies in responsibility. The moment you put a number on the table you are accountable. This is why some people prefer to talk, chat, speculate, gossip or babble, avoiding concrete, verifiable statements. There are certain fields of science in which this is particularly popular.

But beyond the issue of accountability and responsibility, the absence of a metric also grants the faculty of conjuring up one’s own unverifiable “theories” on matters of science. In such a mindless context, debating ideas and concepts is like disputing with someone the validity of horoscopes. This is because such “theories” cannot be validated. A theory is something that can be tested by means of an experiment, and without numbers it is difficult to set up an experiment. A theory often produces a theorem, an equation, a characteristic constant. Think of Newton’s theory of gravity. It allows one to measure the force of gravity exerted by one body on another. This means there is an equation. The theory hinges on the gravitational constant, G. The theories of relativity and electromagnetism make use of another well-known constant, c, the speed of light. There exist experiments which allow us to validate these theories – those of Cavendish, or of Michelson and Morley, for example. But when a “theory” has no theorem, no equation of some sort, no characteristic constant, no metric, it cannot be called a theory. The whole point of science is to build theories which help us understand and explain natural phenomena, and this involves the acts of measurement and of classification (ranking). Without a metric, one cannot do serious science. Serious science starts when you begin to measure.

One good example of this new-age pop-science is complexity, or the so-called “complexity science”. Every person I confront on complexity has their own definition – which, evidently, never includes a metric – and their own views on the wonderful properties of complexity. Some people equate complexity to entropy. Some to chaos. Others say complexity is uncertainty. There are those who say complexity is a process of spontaneous self-organization on the edge of chaos. Some even say that complexity cannot be measured. The list is endless. At the end of the day: no good definition, no metric. Clearly, a good definition hints at a metric, so if the definition is wrong, you can say goodbye to any quantitative work.

Numerous complexity centers around the world claim to conduct research in complexity. In the majority of cases the research performed is, in reality, the investigation of a wide class of (interesting) physical phenomena, in particular those that entail some sort of self-organization, aggregation, or collective behaviour of systems of numerous autonomous agents that cannot be deduced from the properties of a single agent. It is claimed that a system which emerges spontaneously, and which exhibits behaviour that cannot be extrapolated from the properties of a single agent, is a complex system. Thus, systems such as:

  • forests (formed of trees)

  • societies (formed of people)

  • markets (formed of companies)

  • galaxies (formed of stars)

  • oceans (formed of water molecules)

  • storms of starlings…

are said to be complex systems. However, if one studies nature (i.e. physics) one realizes that everything in the Universe, at all scales, forms spontaneously from numerous smaller building blocks and without any external choreography (except that of the laws of physics). Therefore, according to this “definition”, everything in Nature is a complex system. What advantage emerges from adding a new name to classes of well-known physical phenomena is unclear. It is like saying that zoology is the study of non-human animals. Yeah, sure, so what?

However, if by studying, for example, ants, storms of starlings or other ensembles of autonomous (or not so autonomous) agents one still wants to claim to be doing “complexity science”, this is the way it could be done (a sketch follows the list):

  • Measure the complexity of a single agent

  • See how this maps onto the complexity of the system of agents. This means measuring that complexity too, of course.

  • Measure the critical (maximum) complexity of each agent and see how it maps onto the critical complexity of the system.

  • Establish a relationship between the complexity of a single agent, number of agents and the complexity of the system of agents.

  • Extract modes of functioning of the system. Establish the most likely modes of functioning and their complexity.

  • Measure the resilience of the system of agents as a function of the complexity of each agent.

  • etc.
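As a rough sketch of how such a study could be set up, the skeleton below (Python) walks through the per-agent and system-level measurements listed above. The `complexity` function is a placeholder stand-in based on the spectral norm of a correlation matrix, and all names and data are hypothetical – this is not the Ontonix metric or workflow:

```python
# A workflow skeleton for the steps listed above. The function `complexity`
# is a placeholder stand-in (spectral norm of a correlation matrix), NOT the
# Ontonix metric; all names and data here are hypothetical.
import numpy as np

def complexity(data: np.ndarray) -> float:
    """Crude placeholder metric over a (samples x variables) data set."""
    corr = np.corrcoef(data, rowvar=False)
    return float(np.linalg.norm(corr, 2))  # spectral norm

rng = np.random.default_rng(0)
n_agents, n_steps, n_vars = 50, 200, 4

# Step 1: measure the complexity of each agent from its own observations.
agents = [rng.normal(size=(n_steps, n_vars)) for _ in range(n_agents)]
c_agent = [complexity(a) for a in agents]

# Step 2: measure the complexity of the system of agents observed together,
# and compare it with the per-agent values.
system = np.hstack(agents)
c_system = complexity(system)

print(f"mean agent complexity: {np.mean(c_agent):.2f}")
print(f"system complexity:     {c_system:.2f}")
```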

These are the sorts of things we at Ontonix do with proteins, investment portfolios, computer chips, market sectors or the software in a car. Do you?

When you measure you make a giant leap – from opinions to science. By the way, our Quantitative Complexity Theory does have equations. One is this:

C = σ(S ○ E)
In the above equation S stands for structure, E represents entropy, σ is a spectral norm operator and “○” is the Hadamard matrix product operator. Complexity not only captures and quantifies the intensity of the dynamic interaction between Structure and Entropy, it also measures the amount of information that is the result of structure. In fact, entropy – the ‘E’ in the complexity equation – is already a measure of information. However, the ‘S’ holds additional information which is contained in structure. In other words, S encodes additional information to that provided by the Shannon equation.
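Here is a minimal numerical sketch of the equation (in Python). How S and E would be built from real data is assumed purely for illustration – a binary dependency map and arbitrary pairwise entropy values – and is not the Ontonix recipe:

```python
# Minimal numerical sketch of C = sigma(S o E). How S and E are built from
# data is assumed here for illustration (a binary dependency map and arbitrary
# pairwise entropy values); it is not the Ontonix recipe.
import numpy as np

def qct_complexity(S: np.ndarray, E: np.ndarray) -> float:
    """Spectral norm (largest singular value) of the Hadamard product S o E."""
    return float(np.linalg.norm(S * E, 2))  # '*' is element-wise, i.e. Hadamard

# Toy example: three variables, with structural links 1-2 and 2-3.
S = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

# Assumed pairwise entropy terms (symmetric, illustrative values).
E = np.array([[0.0, 0.8, 0.2],
              [0.8, 0.0, 0.5],
              [0.2, 0.5, 0.0]])

print(f"C = {qct_complexity(S, E):.3f}")  # ~0.943 for these toy values
```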

You can find numerous articles on complexity and how we measure it in our blog.

PS. If you have a real theory, i.e. one which can be verified (or falsified) on empirical grounds, it already is quantitative. This means that “Quantitative” in “Quantitative Complexity Theory” is redundant. The reason we call our theory quantitative is simply to separate ourselves from the “qualitative” mainstream “complexity science”.

PPS. If you want to have some fun, read this!