
Transformational programming is a methodology that supports the process of constructing a program that meets a given specification. Starting from a formal problem definition, an algorithm solving the problem is derived formally by applying correctness-preserving transformation rules in a step-by-step fashion. In this way, the resulting program is not only guaranteed by construction to meet its specification, but, given an appropriate choice of rules, also satisfies further constraints. For the practical use of transformational program development, a suitable set of rules plays a crucial role. In the past, neither the 'catalog approach' (i.e. huge sets of compact, all-purpose rules) nor the 'generative set approach' (i.e. a small, relatively complete set of basic rules from which further rules can be deduced) has turned out to be really practicable. An obvious compromise is therefore to build on a small set of powerful basic rules and to derive compact rules for particular classes of problems. This paper investigates the feasibility of this compromise, i.e. given a particular problem domain, it aims to identify the methodological principles (rules, tactics, strategies) needed to derive algorithms typical of that domain.

According to this basic intention, the paper consists of two main parts. The first part deals with compact rules and strategies for the development of algorithms in connection with arbitrary deduction systems. The focus of interest here is the transition from a given descriptive (i.e. non-operational) specification to an operational solution. The techniques investigated comprise: embedding (i.e. generalization) of specifications; a general strategy for deriving initial recursive solutions from descriptive specifications (following the well-known 'unfold/fold' strategy); a strategy for deriving algorithms that compute partial functions from solutions for the restricting predicate; compact rules for finite sets (e.g. for computing closures), which are one of the central data structures in the chosen problem domain; elimination of existential quantifiers; and manipulations of applicative programs (inversion, combination, and composition) in order to increase efficiency.

In the second part, these techniques are applied to typical problems from the area of context-free grammars (as a typical representative of the chosen problem domain), such as recognition and parsing (including not only a variety of parsing strategies, but also different representations of the parsing structure) and manipulation of grammars (e.g. elimination of empty productions and left recursion, or construction of parser tables).
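The paper presents its derivations in a transformational calculus rather than as executable code. As a rough, illustrative sketch (not taken from the paper), the classical unfold/fold derivation of list reversal in Haskell conveys the flavour of embedding (generalizing) a specification and folding it back into an efficient recursion; the names reverseSpec, revApp, and reverseFast are ours.

```haskell
-- Descriptive-style specification: quadratic list reversal.
reverseSpec :: [a] -> [a]
reverseSpec []     = []
reverseSpec (x:xs) = reverseSpec xs ++ [x]

-- Embedding (generalization): introduce an accumulator so that
--   revApp xs ys == reverseSpec xs ++ ys.
-- Unfolding reverseSpec, using associativity of (++), and folding back
-- the definition of revApp yields a linear-time recursion.
revApp :: [a] -> [a] -> [a]
revApp []     ys = ys
revApp (x:xs) ys = revApp xs (x:ys)

-- The original problem is recovered by instantiating the embedding.
reverseFast :: [a] -> [a]
reverseFast xs = revApp xs []

main :: IO ()
main = print (reverseSpec [1,2,3 :: Int], reverseFast [1,2,3 :: Int])
-- prints ([3,2,1],[3,2,1])
```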
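Likewise, the compact rules for computing closures of finite sets can be pictured as a fixpoint iteration. The following minimal sketch (our own illustration, not the paper's formulation) instantiates such a closure computation for the nullable nonterminals of a context-free grammar; the functions closure and nullable and the example grammar are hypothetical.

```haskell
import qualified Data.Set as Set

-- Iterate a step function until the set stops growing,
-- yielding the least fixpoint ("closure") over a finite universe.
closure :: Ord a => (Set.Set a -> Set.Set a) -> Set.Set a -> Set.Set a
closure step s
  | s' == s   = s
  | otherwise = closure step s'
  where s' = Set.union s (step s)

-- A production: a nonterminal together with its right-hand side.
type Production = (Char, [Char])

-- Nonterminals that can derive the empty string: a nonterminal is added
-- once every symbol on some right-hand side is already known nullable.
nullable :: [Production] -> Set.Set Char
nullable prods = closure step Set.empty
  where
    step known = Set.fromList
      [ lhs | (lhs, rhs) <- prods, all (`Set.member` known) rhs ]

main :: IO ()
main = print (nullable [('S', "AB"), ('A', ""), ('B', "b"), ('B', "")])
-- prints fromList "ABS": A, B, and hence S can all derive the empty string
```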
Keywords: formal specification, Specification and verification (program logics, model checking, etc.), transformation rule, General topics in the theory of software, compact rules for finite sets, parsing, Symbolic computation and algebraic computation, programming methodology, development strategy, elimination of existential quantifiers, manipulation of grammars, Transformational programming, manipulations of applicative programs, embedding, transformational program development, context-free grammars, recognition, Algorithms in computer science, Software
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | 8 |
| Popularity | Reflects the "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
