Monday, September 30, 2013

The real reason most software fails

To my mind there is a problem in academia to do with where disciplines like 'computer science' (CS) and applications of computing sit. Pure computer science is the study of computational theory and its applications. It develops things like data structures, algorithms, models of parallel computing and much else relating to computation as an object of study. Things like bio-informatics, financial computing and avionics, just to name a few, aren't usually thought of as proper 'science', but as some sort of 'applied' form of pure CS. Somewhere in the middle sits 'software engineering'.



However, in my view, the offspring of CS were never properly conceived as disciplines, but rather as artisanal pursuits. In this post I take a look at a few domains and try to show how disciplines derived from computing should be understood as both proper science and engineering. In doing so, I came up with the schema above as a way of establishing the key concepts. There's nothing on this diagram that can't be found in a dozen books on the philosophy of science. But I like diagrams.




THE GENERAL PICTURE



For a discipline to be understood properly, we need to know its object or FOCUS OF ENQUIRY. What, in the real world, is it interested in? Then, for science to occur, we need a programme of OBSERVATION and ANALYSIS, in order to come up with some conceptualisation of what we see - MODELS. Models are formal artefacts, expressed in one form or another of MATHEMATICS, and indeed mathematics is often created or adapted specifically for the domain.



Doing science is largely about finding models (which are equivalent to hypotheses), and refining them via ongoing investigations. As the models develop, new mathematics often has to be created to formalise them. These models form part of the THEORIES that emerge in the domain. Theories explain the phenomena and make predictions about what might be seen. Good theories have good 'explanatory and predictive power', as philosophers of science call it. Bad ones fail in the light of contradictory evidence, or conflicts with other theories.



A theory isn't just a model (i.e. equations / formulae), it needs to be more. It's a description of something, and to work well, it needs words, metaphors and images. This can be a real challenge, but many consider it crucial for purposes of human communication and teaching, as well as further development of the domain.



The respective viewpoints of Stephen Hawking and Roger Penrose on this are instructive. Hawking is a logical positivist: if the model (i.e. the equations of Quantum Mechanics) fits the observations, it more or less is the theory. Penrose, a scientific realist, would demand that the models fit into a theory that describes their connection with reality in a holistically satisfying way. Personally I prefer the latter. Why? There have been many examples of models that fit all the known observations (e.g. the ideal gas law, Newtonian mechanics) but turn out to be only approximations or special cases of a different model of reality once later observations are made.
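
A standard illustration (my example, not the post's): the ideal gas law fits a wide range of observations, yet the van der Waals model later showed it to be the limiting case of a more refined description:

    PV = nRT

    \left(P + \frac{a n^2}{V^2}\right)\left(V - nb\right) = nRT

The second equation reduces to the first as the substance-specific constants a and b go to zero - the older model survives only as an approximation inside the newer one.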



Science usually takes as its goal successful theories: ones that either predict further deep truths about our world (satisfying curiosity), or ones that can be applied reliably in some way to change the real world, for example to cure polio, build high-speed trains or create the internet.



This is where we encounter engineering, healthcare, and other applied disciplines that function like them - they systematise the making of artefacts (skyscrapers, mobile phone networks) and the management of aspects of nature we want to control (agriculture, human health).



In sum, we could say that (good) science uses mathematics and language to create models and descriptions of reality.



A SCIENCE EXAMPLE - QUANTUM PHYSICS



It's useful to consider the domain of Quantum Physics, which has as its object of enquiry phenomena at the sub-atomic scale. Here is a version of the schema above for this domain:



I have just chosen a few semi-arbitrary examples of the relevant mathematics (Hilbert spaces), models (the wave function), theories ('interpretations', of which there are many), and some of the metaphorical language (probability clouds) and visualisations (Feynman diagrams) underpinning some interpretations.



What can we learn from Quantum Physics as a discipline? Firstly, the object of study is exceedingly difficult to understand, and remains so to this day, to the point where some scientists speak seriously of parallel realities to account for some quantum phenomena, while others think that the presence of human consciousness in experiments changes not only the experiment but the structure and meaning of theories. Nevertheless, a century of careful observation and thinking has produced a marvellous array of theories and models, along with wonderfully readable histories of the field and arguably definitive expositions of it to date (at least from the realist viewpoint).



As is clear from these and many other publications in the field, there is a very strong relationship with mathematics. Existing mathematical concepts and methods are used, but in addition many have been adapted (e.g. the quantum forms of the Lagrangian and Hamiltonian), and others created anew (e.g. the Dirac notation). The result is a rich mathematical language, specific to the Quantum Physics domain, used to express the 'models' of quantum theory.
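
For instance, the central 'model' - the evolution of the state (the wave function), written in Dirac notation - is governed by the Schrödinger equation, with the Hamiltonian operator \hat{H} acting on a Hilbert space:

    i\hbar \, \frac{\partial}{\partial t} \, \lvert \psi(t) \rangle = \hat{H} \, \lvert \psi(t) \rangle

One compact line ties together the adapted mathematics (the Hamiltonian), the new notation (Dirac's kets) and the model itself.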



As I noted above, however, equations on their own are not enough. The above-mentioned publications, and many others, describe not just the mathematical models, but also go to some lengths to provide prose descriptions, illustrations and metaphors to make the models comprehensible. These appear to be indispensable to the progress of quantum physics, and probably to human beings in general.



So here we have science at its best: a horribly difficult aspect of reality to try to understand, and an extensive use of 1) mathematics, coupled with 2) language, images and metaphor, to construct 3) models (key equations) and 4) theories (descriptions) of this reality, based around the equations. None of these four elements can be dispensed with without weakening or destroying Quantum Physics as a discipline. Even with the controversies in quantum theory (e.g. the debate about whether string theory is a blind alley), the discipline is set out in such a way that battles can occur in appropriate ways, i.e. by reference to the mathematics and the interpretive aspects of the theories.



AN ENGINEERING EXAMPLE - ELECTRICAL ENGINEERING



I'm going to use 'elec eng' as the next example, because it's the primary discipline I graduated in some aeons ago. Electrical Engineering is principally about building electrical systems and devices, which come in major categories including power systems, control systems, digital communications, analogue electronics, and computational machines. In each of these areas the underlying phenomena are observed and analysed, and models and descriptions of their behaviour are developed; pretty soon new objects can be engineered using these principles, e.g. real filters and speakers for stereo equipment, and so on.
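
A tiny worked example of that progression (mine, not from the original post): a first-order RC low-pass filter is designed directly from its mathematical model, whose cut-off frequency is

    f_c = \frac{1}{2 \pi R C}

so choosing, say, R = 1.6 kΩ and C = 10 nF predicts f_c ≈ 10 kHz, and the physical filter can then be built and measured against that prediction.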



CAN YOU DO ENGINEERING WITHOUT SCIENCE?



Personally, I think the answer is no. Any engineering endeavour needs to understand two things: firstly, the properties of the things it intends to make (cars, heart pacemakers), and secondly, the context into which the thing is to be deployed (cities, hearts). A poor understanding of the first leads to artefacts that fail in unexpected and sometimes catastrophic ways.



The de Havilland Comet airliners of the early 1950s started to fail catastrophically in mid-air, due to a lack of understanding of fatigue in fuselage materials. The Tacoma Narrows Bridge on Puget Sound collapsed in 1940 due to resonance caused by high wind - a key property of suspension bridges that had not hitherto been understood.



A poor understanding of the deployment context can create new, unexpected and wide-ranging problems. Cars generally work well in terms of their design parameters, but their use creates huge traffic problems and road fatalities. Fossil fuel power stations make power, but also CO2 and sulphur dioxide, and may yet prove to be a central cause of environmental devastation.



In general, the first kind of failure seems relatively rare. This is because there is an unavoidable cycle of development and an unavoidable expense in manufacturing, and engineers are more or less forced by these constraints to understand, design and test concepts to death, and also to include 'safety factors', 'redundancy' and other fail-safe concepts 'before the concrete is poured', as engineers say. This is why the first 747 flew, and why the Hoover Dam is still there today.



Failures of the second kind are rarer when the context of deployment is very well defined, e.g. the human body into which a pacemaker will be inserted. When the context is sociological (think: cars and cities), then 'failure' of some kind is the norm, but is rarely catastrophic, usually coming under the heading of 'unforeseen consequences'.



I think things are getting better, due to routine observations of the use of engineered objects in our lives over the last couple of centuries. Computer engineers now think about issues such as monitor radiation, noise, heating, as well as ergonomics of use. This is only happening because both the manufactured object and its deployment context are taken as serious objects of study. The most successful attempts are nearly always by companies who spend significant resources on 'quality', rather than sticking with 'bare production'.



COMPUTER SCIENCE AND SOFTWARE ENGINEERING



So far so good. What about computing and software development? In pure computer science, the object of study is typically a computing artefact, e.g. data structures or a whole system of software. The models are (or should be) developed in mathematical logic, à la Knuth and Dijkstra. There are good theories to do with computability, data structure logic, parallel computing and so on. The above schema could be drawn out in a reasonable way for computer science.
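
To make 'models developed in mathematical logic' concrete, here is a small illustration of my own (not from the post): even a routine algorithm like binary search is characterised by a precondition and a loop invariant that together justify its correctness - and that characterisation is the science behind the code.

    def binary_search(items, target):
        """Return an index of target in items, or -1 if absent.

        Precondition: items is sorted in ascending order.
        Loop invariant: if target is present, its index lies in [lo, hi).
        """
        lo, hi = 0, len(items)
        while lo < hi:
            mid = (lo + hi) // 2
            if items[mid] < target:
                lo = mid + 1
            elif items[mid] > target:
                hi = mid
            else:
                return mid
        return -1

    # The invariant and termination argument are what the theory supplies;
    # the code is merely their executable rendering.
    assert binary_search([1, 3, 5, 7, 9], 7) == 3
    assert binary_search([1, 3, 5, 7, 9], 4) == -1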



But when it comes to software engineering, as performed in any domain application of computing, problems emerge. The schema above appears to break down. This is primarily because what is being engineered - software - appears to be essentially the same thing as any 'models' that might be developed. Additionally, there is often a failure to realise that there really has to be a focus of enquiry (i.e. the problem domain) that must be understood properly. Instead, superficial inspections are made via methods such as 'use case engineering', as a precursor to building 'models', which are generally understood merely as precursors to the built software.



There is abundant evidence. I won't provide a survey, but it is well known that a high proportion of IT projects fail, and the reasons are almost always a combination of:



* failure to understand the problem space



* lack of understanding of the properties of the proposed artefact to be constructed, including (commonly) a complete lack of understanding of its interior complexity, leading to skyrocketing costs and missed deliveries

* a lack of understanding of the behaviour of the installed system



* a lack of understanding of the operational context - user reaction, performance, security and many other factors.



Although it is decades since I did my own software engineering courses at university, I don't have the impression (going by, for example, published books) that much has changed. Software engineering, in my view, largely fails to treat the following things scientifically: the problem space, the system or component to be built, and the operational context. Instead, a variety of weak methods try to 'gather requirements' via techniques such as 'use-case engineering', and 'models' are developed essentially in diagrammatic form (e.g. UML) and then thrown away.



A few well-understood formalisms are routinely deployed, such as relational database theory. Things that should be taken seriously in modern object-oriented software engineering, such as type systems, contracts, concurrency and systems theory, are only weakly represented in mainstream programming languages. Things of even greater importance - computable ontologies - are non-existent in mainstream software development.
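
To illustrate what taking 'contracts' seriously would even mean, here is a minimal sketch in Python (my own, with invented names; since mainstream languages give little direct support, the checks are written out by hand):

    # Minimal design-by-contract sketch: preconditions, postconditions and a
    # class invariant are checked explicitly on every operation.
    class BoundedStack:
        def __init__(self, capacity):
            assert capacity > 0, "precondition: capacity must be positive"
            self.capacity = capacity
            self._items = []
            self._invariant()

        def _invariant(self):
            assert 0 <= len(self._items) <= self.capacity

        def push(self, item):
            assert len(self._items) < self.capacity, "precondition: not full"
            old_size = len(self._items)
            self._items.append(item)
            assert len(self._items) == old_size + 1, "postcondition"
            self._invariant()

        def pop(self):
            assert self._items, "precondition: not empty"
            item = self._items.pop()
            self._invariant()
            return item

    s = BoundedStack(2)
    s.push("a")
    s.push("b")
    assert s.pop() == "b"

Languages such as Eiffel build this in as a first-class construct; elsewhere it survives only as a discipline the programmer must impose by hand.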



THE COLLAPSE OF THE (APPLIED) SCIENCE FUNCTION



This has all come about because the object of study, the models, and the operational (built) artefact have become conflated, since all of them:



* are the same thing (software),

* are digital, and

* have (today) almost no barrier to entry.



The consequence of the first is that the boundaries in the schema above disappear, and the science part generally doesn't get done - everything just becomes 'building software'. The result of the second is that the manufacturing cycle time constraint that applies to physical objects disappears - you can always deliver 'today's build' of the software. Accordingly, the hard constraints that force the designers of a 747 or a 60-storey building to 'get it right' before production appear to be largely missing. 'No barrier to entry' means that literally anyone can start writing software, and in many cases real systems are built by programmers with no formal knowledge of programming, let alone analysis or 'science'; the result is often huge amounts of throw-away code. (Interestingly, some of the best developers are people with no CS degree, but with a good background in formal methods and/or science as a discipline.)



Considered in this light, it's amazing that any software project succeeds.



One exception is realtime control system software, where the consequences of failure are essentially the same as for bridges collapsing and planes breaking up mid-air. These kinds of systems are expensive to produce, and are built on a more traditional engineering cycle rather than today's typical software build cycle.



THE AGILE RESPONSE: CAPITULATION



The ultimate proof, in my mind, of the failure of software development to take a scientific approach is the prevalence of 'agile' methods today. The Agile Manifesto describes a kind of developer-sensitive and customer-caring software engineering. Some of these ideas are good in and of themselves. But read in toto, it's impossible to see past what the manifesto is really evidence for: the fact that THE AGILE COMMUNITY NO LONGER TREATS SOFTWARE DEVELOPMENT AS A TRACTABLE CONCERN at all.



Instead, each software project is treated like an unknown adventure into a new land, and the only promises made are to call the customer with updates every couple of weeks, and to be nice to each other in the team. Agile fundamentalists appear not to believe in design at all; instead they claim that, by constant refactoring, the appropriate design will emerge. This is essentially saying that software development is a mutation-and-natural-selection process executed by people on program code, which will (hopefully) generate the right solution in the end. We could call the mammalian eye a 'success' in similar terms, but let's not forget that it took over 3 billion years to get there from bacteria.



My aim here isn't a polemic against agile methods. The point is that I think WE HAVE TAKEN A FUNDAMENTAL WRONG TURN IN APPLIED COMPUTING, and a re-orientation is needed. In fact, I would say that no turn has been made at all; applied computing has been allowed to develop completely organically, with no oversight or interest in its structure.



I should point out that the serious computer scientists of the last 50 years - those who formulated classical notions of algorithms, software engineering, quality and semantics - did in fact create a worthy canon of learning. I am talking of people such as Donald Knuth, Edsger Dijkstra, Barry Boehm, Niklaus Wirth, Bertrand Meyer and David Parnas (Bertrand I know, and I have had the pleasure of meeting Barry Boehm and David Parnas). However, the go-fast world of the young software developer seems to be almost unaware of their existence or wisdom. We need a new generation of serious scientists like these. There is much work for them (see below).



HOW TO FIX THINGS? START DOING SCIENCE AGAIN



It's not that hard to see how we should do things properly. Since I have been working in the bio-/health informatics domain for many years, I'll use health as the example. We need to:

* deeply understand both the problem domain (here, clinical care) and the thing to be built, e.g. a health record system;

* develop formal models of relevant aspects of both;

* describe some proper theories based on the models;

* commit to the theories within the domain, and treat them in a scientific manner - i.e. constantly test them against evidence; determine their performance as a basis of design;

* treat the building of health information systems as distinct from the models which underpin the theories on which such systems are based.



There are in fact many extant threads of work that could be tied together to make this happen:



* biomedical ontologies, which aim to provide various levels of computable description of the biomedical and clinical domains;

* terminology development;

* computable guidelines, including development of languages and logic for the representation of diagnostic decision graphs and therapeutic processes;

* clinical decision support research and formalisms;

* formal models of the health record that incorporate a concept of 'context';

* formal models of domain content;

* and many more.



The essential problem isn't that no one has thought deeply about aspects of the domain; it is that there is no coherent programme to put together a canon of theory that would function as a basis for thinking about health computing, or for building health information systems.



Doing so would initially require an exercise in which the domain and its many sub-domains are mapped out and formally related. For example, a standard way is needed of connecting ontologies and terminologies to nearly every other item on the above list. This won't happen while the work proceeds in splendid isolation. It would require much subsequent work to get layers of dependable theory working together, and to make them usable for real-world application as well as further study. I know, for example, that the openEHR EHR model (which I have worked on for 12 years) needs to be properly integrated with referent tracking and formal ontologies.



This is not currently occurring in an organised way, and one piece of evidence, about which I have written elsewhere, is the bizarre situation that people trying to develop standards for health information are actually trying to do the primary work (generally without the necessary formal background), due to the lack of an accepted canon of theory and models.



My belief (after some 17 years in the health informatics domain) is that academia needs to treat applied computing disciplines as first-class scientific and engineering disciplines, not as backwater departments hanging on the coat-tails of other supposedly 'real' disciplines - in the case of health and bio-informatics: electrical engineering, medicine or computer science.



Many of the endeavours mentioned above have excellent exemplars floating around in these under-appreciated academic departments, as well as in companies and hospital IT departments. It's the connections that are missing.



WHAT COMPUTER SCIENCE DEPARTMENTS NEED TO DO



One of the global weaknesses is the failure in CS education to make mathematical logic and language theory a central part of the curriculum. In my day it was an option, and I believe it is still treated as a post-graduate option today. The failure to take formalisms seriously has led to the absurd situation where the de facto 'formalism' for designing software systems is UML, which for 20 years has remained little more than a diagramming notation. To this day I know of no software engineer who uses it for anything more than that on any serious project. That's because it is full of basic holes (typing, generics, an unclear notion of inheritance). Its 'formal' counterpart, XMI, is humanly unreadable and unwritable, and exists in so many variants that it isn't reliably computable either. This situation needs to be fixed.



Following that, the practice of building real-world semantics directly into computer code (classes, database schemas) needs to be jettisoned. It is the very reason why large systems can't keep up with requirements. I would argue that there is no system - no airline booking system, no conference booking system, no banking system, and certainly no health record system (all textbook favourites) - that can sensibly be conceived of as a 'single-level model'. That means throwing out all those misleading object-oriented and relational 'modelling' textbooks.
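
As a caricature of the single-level approach (a sketch of my own, with invented names): the clinical concept is baked directly into a class, so every change to clinical requirements means changing and redeploying the software and the database schema behind it.

    # Single-level modelling: the domain concept is hard-coded in software.
    # Adding a new data point, or changing a valid range, means changing this
    # class, the schema behind it, and every system that shares them.
    class BloodPressureReading:
        def __init__(self, systolic, diastolic, cuff_size):
            self.systolic = systolic      # mmHg
            self.diastolic = diastolic    # mmHg
            self.cuff_size = cuff_size    # fixed attributes, fixed rules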



Next, a multi-level modelling approach needs to be formulated. This is partly what we have done with archetypes, but that's only an early start. With multi-level modelling principles in place, it becomes unavoidable that Computer Science has to engage with specific domains in a completely different way; it can't deal with them via 'use case analysis' or other vague and superficial means. Instead, a proper methodology is needed for expressing domain processes and information, so that domain-specific models can be created that can be consumed by software instead of strangling it.
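
Here is a minimal sketch of the two-level idea (loosely inspired by archetypes but greatly simplified, with invented names - it is not the openEHR formalism). Level one is a small, stable reference model implemented once in software; level two is domain content expressed as data that constrains it, which domain experts can revise without touching the code.

    # Level 1: generic reference model, implemented once.
    class Observation:
        """A named set of measured values."""
        def __init__(self, archetype_id, values):
            self.archetype_id = archetype_id
            self.values = values  # element name -> (magnitude, unit)

    # Level 2: a domain model expressed as data. In a real system this would
    # come from a governed, versioned repository, not a literal in the code.
    blood_pressure_archetype = {
        "id": "blood_pressure",
        "elements": {
            "systolic":  {"unit": "mm[Hg]", "range": (0, 300)},
            "diastolic": {"unit": "mm[Hg]", "range": (0, 300)},
        },
    }

    def validate(observation, archetype):
        """Check an Observation instance against a domain model."""
        for name, (magnitude, unit) in observation.values.items():
            spec = archetype["elements"].get(name)
            if spec is None:
                return False, "unknown element: " + name
            if unit != spec["unit"]:
                return False, name + ": wrong unit"
            lo, hi = spec["range"]
            if not lo <= magnitude <= hi:
                return False, name + ": value out of range"
        return True, "ok"

    obs = Observation("blood_pressure",
                      {"systolic": (120, "mm[Hg]"), "diastolic": (80, "mm[Hg]")})
    print(validate(obs, blood_pressure_archetype))  # (True, 'ok')

The point is that adding, say, a mean arterial pressure element is a change to the level-two data, not to the level-one software.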



In sum, Computer Science needs to recognise that each domain - health, finance, etc - requires its own science, its own formal models, and its own theories. And that to achieve this, proper (meta-)methods are required.



APPLIED INFORMATICS



From the point of view of an applied computing department, such as health informatics or financial IT, the starting point is to use something like the schema shown at the top of this post to define the programme of research. Realistically, it would probably mean numerous universities developing a common model of applied informatics research for the domain in question, and teaching and signing up to that model.



In health, it means, for example, examining how clinical medicine is practised, and coming up with proper models of the patient/clinician interaction over time, of the course of disease, of decision-making processes, of 'treatment' versus diagnosis, and of numerous other things. Proper models of these are needed, and the models need to be part of an overall body of theory.
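
For example, a toy sketch (mine, with invented names, purely to show the flavour of such a model): the patient/clinician interaction over time could be modelled as a sequence of typed clinical events, from which the course of a single problem can be reconstructed.

    # A toy model of the clinical process: contacts generate typed events, and
    # a 'problem thread' is the time-ordered subset relating to one health issue.
    from collections import namedtuple
    from datetime import date

    # when: date; kind: e.g. "observation", "diagnosis", "treatment";
    # problem: the health issue the event belongs to; detail: free text.
    ClinicalEvent = namedtuple("ClinicalEvent", ["when", "kind", "problem", "detail"])

    def problem_thread(events, problem):
        """Reconstruct the course of one problem from the event history."""
        return sorted((e for e in events if e.problem == problem),
                      key=lambda e: e.when)

    history = [
        ClinicalEvent(date(2013, 3, 1), "observation", "hypertension", "BP 160/100"),
        ClinicalEvent(date(2013, 3, 1), "diagnosis", "hypertension", "essential hypertension"),
        ClinicalEvent(date(2013, 4, 2), "treatment", "hypertension", "ACE inhibitor started"),
    ]

    for event in problem_thread(history, "hypertension"):
        print(event.when, event.kind, event.detail)

A real model of clinical care would of course be far richer, but even a sketch like this is testable against how medicine is actually practised - which is the point.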



CONCLUSION



This post is already far too long, so I'll finish with a simple conclusion: I don't think there is any serious domain of enquiry, no matter how narrow or seemingly applied, that doesn't merit proper science (formalisms, models, theories, explanations), nor any serious domain of development that doesn't merit proper engineering (knowledge of the built artefact, understanding of the deployment context).