My interests

Currently I am developing two ideas. One concerns making government more effective in a complex civilisation. The other concerns managing data.

Making government more effective in a complex civilisation

Problem

I feel that UK government is not as effective as it could and should be. Problems with government are frequently reported and diverse in nature, and mismanagement seems to be an important factor in them. Consequently I feel that ineffective government in the UK is systemic and probably due to fundamental features of its design.

I consider the effectiveness of government to be its overriding objective. By contrast, the current form of UK government prioritises mechanisms that provide accountability. My interest is in establishing what the most effective form of government is, rather than proposing improvements to the current form.

Conjecture

Cause

Civilisation continually increases in complexity, so government must become more sophisticated to manage it effectively. As a result, government itself also becomes more complex. Eventually the complexity of civilisation and government makes them impossible to orchestrate effectively using an increasingly deep authority hierarchy. Hierarchies give influence over matters to people who cannot have a sufficient understanding of them to make skilful decisions.

I believe that the evident failures of UK government are symptomatic of the failure to manage complexity in civilisation and government. I have identified what I feel is the best complexity management pattern and note that it is not the basis of any current form of government. Accordingly I suggest a form of government be developed that is based on that pattern.

Proposal

Government should be a self-organising system of many independent specialisms conforming to rules of behaviour, rather than being orchestrated by a hierarchical system of authority. Each specialism should employ many collaborating specialists, or ideally experts. Authority should only be given to specialists and must not be organised into hierarchies. All specialists should have equal authority that is limited to their specialism. Dogma must not be allowed in government. Interdisciplinary observers and cross-specialism groups help deliver the overview feature of hierarchical systems, but without using hierarchical authority.

Naturally there are many details, but these main features provide the basis of a design for government that prioritises effectiveness.

Discussion

This proposal might seem surprising, but below are some observations that tend to endorse my views:

  • An endless stream of examples of the failings of the UK government justifies the perception that it is ineffective and that mismanagement is a significant cause.
  • The UK population is increasing in size, and in ethnic and cultural diversity. Advances in communication and travel have increased the number of connections between its individuals and groups. The amount of regulatory control of individuals, groups, businesses and other organisations is increasing. Technological progress is extending and creating moral and ethical dilemmas. Together these factors and others increase the complexity of civilisation.
  • The size of the government must increase to manage increasing complexity, but a single group of people working on a complex task is less effective than multiple smaller groups working on simpler sub-tasks.
  • Specialists, or ideally experts, are more effective in their specialism than generalists.
  • Hierarchical authority structures concentrate authority toward the top of the structure, and increasing authority is correlated with its abuse, which, beyond being unfair, produces inherently suboptimal management.
  • Hierarchical authority structures shift decisions from those who understand a matter well to those who cannot possibly have the time to understand it as well.
  • For a complex system like a country, self-organisation by specialisms and specialists working within a set of rules is more responsive to changing circumstances and needs than orchestration by a system of hierarchical authority.
  • Independence of authority among the parts of a system, such as government, reduces its complexity and the propagation of problems. That independence is not possible under coordination by hierarchical authority.
  • Politicians do not achieve their positions because they are experts in anything they may be asked to deliberate on.
  • Political parties tend to organise their policies around a dogma or an ideology rather than around empirical observation and other research into the best available understanding.
  • Party political systems introduce conflict where collaboration is more useful. So even if a political party did have an optimal set of policies, another political party would claim it did not.
  • Having many voters does not correct for the intractable problem that no voter can understand the complexity of modern civilisation and the issues confronting government.
  • The issues confronting an entire government cannot be meaningfully summarised by any set of policies that could be communicated to an electorate.
  • The desire to govern and the desire to decide on who should govern are not a sensible basis for deciding who should govern.

Status

I have been developing a form of government from the understanding that the complexity of government becomes too great for a system of hierarchical authority to orchestrate. It has many detailed elements that need to be incorporated into an updated version of this website. I would welcome any ideas to further develop this form of government.

Events in the Eurozone have resulted in the adoption of quasi-technocracies in Italy and Greece. This is a tacit admission that their systems of government have failed to be effective. While the immediate causes are financial, I believe their problems are ultimately rooted in ineffective government, due to primacy being given to universal suffrage rather than to managing complexity. It is notable that their forms of government have a similar basis to the UK form of government.

Managing data

Problem

Data with variable form is correlated with difficulty in software engineering. This research is concerned with two particular cases of variable data form. Firstly, where not all variants of data form are completely known at the time of programming, such as for data received from other systems that were unknown at the time of programming. Secondly, where data form varies after programming, such as through requirements changes, and for data from other independently evolving systems.

Conjecture

Cause

The problem is rooted in dependency on formalising data abstractions (FDAs). An FDA is distinct from data but is used as a definition to create and manipulate data. This research is concerned with FDAs that are defined within programmes (explicitly or implicitly), so that their information is available to programme compilers/translators and may also be made available to runtime environments.

Typical examples of FDAs that concern this research are classes defined in object-oriented programmes. Typical examples of FDAs that are not the concern of this research are database schemas. However, it is worth noting that FDAs defined within programmes may be derivative of, and so dependent on, other FDAs, e.g. an OO class being derived from a DB schema. Naturally the converse may also be true.

FDAs that are defined within programmes remove uncertainty of data form and so enable well-defined and fast data manipulation by programme statements. However, they also reduce the adaptability of programmes to variable data form, requiring use of the edit-deploy cycle, which is increasingly expensive with increasingly complex and interconnected systems.
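
For illustration, here is a minimal C# sketch of a class acting as an FDA (the type and field names are hypothetical, not drawn from any real system): access via the class is fast and checked at compile time, but any change in the form of the incoming data forces an edit and redeployment.

    using System;

    // A class acting as an FDA: it is distinct from the data itself but
    // defines how instances of the data are created and manipulated.
    public class Invoice
    {
        public string Reference { get; set; } = "";
        public decimal Amount { get; set; }
    }

    public static class FdaDemo
    {
        public static void Main()
        {
            var invoice = new Invoice { Reference = "INV-001", Amount = 42.50m };

            // Deterministic, compile-time-bound access via the FDA. If a
            // sender renames or restructures a field, this programme must
            // be edited and redeployed before it can handle the new form.
            Console.WriteLine($"{invoice.Reference}: {invoice.Amount}");
        }
    }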

Solution

This research is an experiment in what can be achieved by removing FDAs from programmes to provide adaptability to variable data form. Eschewing FDAs in programmes means that data cannot be classified as structured data with respect to those programmes. I propose a form of data that is otherwise as formalised and standardised as possible, and so cannot be classified as unstructured. The result is self-describing semi-structured data that is highly and consistently formalised and standardised. This sets it apart from data that is typically referred to as semi-structured, but which is not highly or consistently formalised or standardised. To highlight this important distinction it is referred to as semi-formalised data (SFD).

A semi-formalised data entity is assembled from parts that conform to standardised patterns and constraints, but is not defined within a programme by an FDA. This contrasts with a structured data entity, which is described as a whole by an FDA defined within a programme. Therefore, parts of a semi-formalised data entity must be obtained by search rather than by the deterministic mechanisms used with a structured data entity. Knowledge of the standards and constraints used in SFD improves the outcomes of search. Many techniques can improve search outcomes, but they typically require extra processing resources. As exponential improvement in processing continues, viable uses for SFD expand. It is impossible to provide certainty of obtaining the correct data using search of SFD, so care needs to be exercised in how it is used.
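
A minimal sketch of that contrast in C# (the part representation and the URN-style pattern names are my own illustrative assumptions, not an existing standard): the entity is just an assembly of pattern-conforming parts, and a consumer searches for the pattern it needs rather than dereferencing a field defined by an FDA.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // An SFD part declares the standardised pattern it conforms to.
    public record SfdPart(string Pattern, object Value);

    // No class describes the entity as a whole; it is only an assembly
    // of parts, so its overall form may vary freely.
    public class SfdEntity
    {
        private readonly List<SfdPart> parts = new();

        public void Add(string pattern, object value) =>
            parts.Add(new SfdPart(pattern, value));

        // Search rather than deterministic access: zero, one or several
        // candidates may be returned, so callers must handle uncertainty.
        public IEnumerable<SfdPart> Find(string pattern) =>
            parts.Where(p => p.Pattern == pattern);
    }

    public static class SearchDemo
    {
        public static void Main()
        {
            var entity = new SfdEntity();
            entity.Add("urn:pattern:reference-code", "INV-001");
            entity.Add("urn:pattern:monetary-amount", 42.50m);

            // Written without any invoice FDA: the programme simply asks
            // for parts conforming to the monetary-amount pattern.
            var amounts = entity.Find("urn:pattern:monetary-amount").ToList();
            Console.WriteLine(amounts.Count == 1
                ? $"Amount: {amounts[0].Value}"
                : $"Uncertain: {amounts.Count} candidate parts");
        }
    }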

SFD has potential in what W. Brian Arthur has described in a talk as the 'second economy', because it moves some complexity and activity from the human domain into automation. I am confident that the second economy will be much more important than human interaction over the Internet, if it is not already, and I hope that SFD can contribute to that progression.

A combination of techniques can improve SFD search accuracy to a point where inaccuracy is so rare that it can be managed. Ultimately, parallel treatment of search outcome options in decision trees should almost exclude inaccuracy. Parallelism provided by quantum computers will be very significant in the development of that technique. Indeed, it is possible to always retain a range of possibilities that never resolves to a single choice.

This approach is also useful in situations where data form is knowable at the time of programming but it is not known whether it will vary later. In these situations the advantage lies in weakening the knowledge of data form embedded in programme statements, and consequently in increasing their adaptability to potential changes in data form. It also reduces cognitive load in software development, as data form need not be a concern for the developer.

Context

Two of the most important concepts used by humanity are specialisation and standardisation. The former allows activities to be performed better than by generalists and is essential to modern human progress. The latter allows specialists to collaborate on more complex undertakings with many other specialists. None of the collaborators need be known to each other at the outset, because standards circumvent the need to create new bilateral agreements for each undertaking. Standardisation thereby accelerates the advantages of specialisation. The progress of standards is inextricably linked to human progress.

The world's first standards body was the British Standards Institution, which had its origins on 22 January 1901 in the realisation that a lack of standards was holding back certain aspects of the industrial revolution. The first meeting of what was then called the Engineering Standards Committee was on 26 April 1901. It is no surprise that human progress has been most rapid since standardisation began.

Given this understanding, the value of standardising all aspects of data is obvious. My idea recognises that there are fundamental aspects of data that are not yet standardised. It also recognises that bilateral arrangements are still commonplace in software engineering. It further recognises that it is difficult or impossible to standardise some kinds of data. This last point leads inevitably to the idea of ensuring that as many fundamental aspects of the data as possible are standardised. This semi-standardisation, termed semi-formalisation until the relevant standards exist, is the necessary compromise.

Need

It is important to be able to evolve data form and programmes more quickly, and to be adaptable to the consequences of that evolution in other systems. Current limitations on evolution and adaptability are hindering progress in software engineering, in particular by preventing new classes of solution from being developed. For example, rather than creating connections to specific services, opportunities exist for connections to whole classes of service, which are not amenable to shared FDAs.

Rationale

The more knowledge about data form that is embedded in a programme, the less adaptable are the data and programme statements that depend on that knowledge. By making data and programme statements dependent on standards that can be combined in limited ways, it is possible to have constrained flexibility that can be used by search that is also aware of those standards. Combined with other techniques, this semi-formalised data can be used to address the research problem.

Software engineering using meta-models is a well-established data modelling technique. It allows for varied data form but needs more computing power to be as performant as systems that manage only data with invariant form. In addition, some data forms may not be known at the time of programming. This introduces the possibility that behaviour may not exist to correctly handle some variants. In database meta-models the schema FDA becomes data for another schema FDA that is fixed with respect to the behaviour created to use it. In Relationship Oriented Programming, relationships are moved from definition by class FDAs in programmes to assemblies of instances of other class FDAs that are fixed with respect to the behaviour created to use them. As the ratio of computing power to cost continues improving, the performance issue becomes less of a concern. However, the problem of uncertainty of data form seems intractable, because by design the behaviour that manipulates data needs to be defined before everything that needs to be known about data form can be known, even though the meta-model provides consistent access to the data model.
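
As an illustration of the meta-model strategy, here is a small entity-attribute-value sketch in C# (hypothetical names of my own): the fixed FDAs are EntityType, AttributeDef and Entity; the variable data model exists only as instances of them, so new forms can be introduced at runtime, but behaviour written in advance cannot know how to treat attributes that are added later.

    using System;
    using System.Collections.Generic;

    // The fixed FDAs of the meta-model. The variable data model is held
    // as instances of these types, i.e. the schema is itself data.
    public record AttributeDef(string Name, string Type);

    public class EntityType
    {
        public string Name { get; }
        public List<AttributeDef> Attributes { get; } = new();
        public EntityType(string name) => Name = name;
    }

    public class Entity
    {
        public EntityType Type { get; }
        public Dictionary<string, object> Values { get; } = new();
        public Entity(EntityType type) => Type = type;
    }

    public static class MetaModelDemo
    {
        public static void Main()
        {
            // The "schema" is constructed at runtime, not compiled in.
            var invoiceType = new EntityType("Invoice");
            invoiceType.Attributes.Add(new AttributeDef("Reference", "string"));
            invoiceType.Attributes.Add(new AttributeDef("Amount", "decimal"));

            var invoice = new Entity(invoiceType);
            invoice.Values["Reference"] = "INV-001";
            invoice.Values["Amount"] = 42.50m;

            // Generic behaviour written against the meta-model: it can
            // enumerate any form, but cannot know in advance how to treat
            // attributes that did not exist when it was written.
            foreach (var attr in invoice.Type.Attributes)
                Console.WriteLine($"{attr.Name} ({attr.Type}) = {invoice.Values[attr.Name]}");
        }
    }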

SFD uses a strategy that is similar to the meta-model strategy, in that data form is not defined directly by FDAs but is ultimately held within another data model. However, it differs in that data exists as assemblies of elements that conform to fixed (and ideally standardised) patterns. Assemblies need not be fixed, but all patterns are fixed with respect to the behaviour created to use them. This constrains data form more than in a meta-model system and so increases certainty about data form. In a programme, instances of those fixed patterns are defined by FDAs, which are in that sense similar to a meta-model. SFD management (SFDM) can employ a number of hidden techniques to mitigate the uncertainty of data form, but importantly it exploits the extra information inherent in the conformance of data to fixed patterns. However, the uncertainty of data form can only be minimised, not excluded, so SFDM is not an appropriate technique for all circumstances.

Use of meta-data has been increasing. For example, heap memory is now commonly managed by runtime environments using meta-data. There has also been an increase in the binding of meta-data to data to document its FDA; this is often referred to as reflection. SFD is self-describing data, so it also includes meta-data. It must include information about which patterns it conforms to, as well as other meta-data. The actual FDAs that implement those patterns are an implementation detail and so are not bound to the data. Therefore, SFD is documented with meta-data at the conceptual level rather than the implementation level. This allows data created within one programming language to be used within programming languages with completely different native representations of data, so long as they are able to implement the patterns in some way.
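
A sketch of that conceptual-level meta-data (again with illustrative pattern names of my own): the rendered form records only the pattern each part conforms to and its value, never the C# types that implement the patterns, so a consumer written in another language could rebuild the entity from its own implementations of the same patterns.

    using System;
    using System.Collections.Generic;

    public record SfdPart(string Pattern, string Value);

    public static class ConceptualMetaDataDemo
    {
        public static void Main()
        {
            // Each part carries conceptual-level meta-data: the name of
            // the standardised pattern it conforms to. The SfdPart record
            // implementing the pattern here is an implementation detail
            // and is not included in the rendered form.
            var parts = new List<SfdPart>
            {
                new("urn:pattern:reference-code", "INV-001"),
                new("urn:pattern:monetary-amount", "42.50"),
            };

            // A language-neutral rendering that another runtime could
            // parse and map onto its own implementations of the patterns.
            foreach (var p in parts)
                Console.WriteLine($"{p.Pattern} = {p.Value}");
        }
    }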

Corollary

Although the original motivation for this research was to advantage software engineering, SFD developed into something similar in some ways to Linked Data. In both, instead of data existing as discrete entities, it exists as a network of linked data fragments. The most important non-technical difference between SFD and Linked Data is that Linked Data depends more on devolved definitions of meaning, whereas SFD tries to maximise use of standards. The SFD approach was chosen because standardisation is one of the most powerful concepts ever developed by mankind. The most important technical difference is that SFD is designed to advantage software engineering, so it should be easier to use in software.

Status

I have developed and tested some library code in C# that uses this principle. I intend to continue developing the concept, hopefully to the point where it can be used in software products. I would welcome any interest in helping to develop this concept further and to find applications for it.

Observations

New meta-model schemes continue to emerge, such as Relationship Oriented Programming. This trend will continue as the amount of computing resources available increases.