### Refine

#### Year of publication

- 1999 (525)

#### Document Type

- Preprint (397)
- Article (73)
- Doctoral Thesis (28)
- Course Material (6)
- Master's Thesis (6)
- Report (5)
- Lecture (3)
- Study Thesis (3)
- Working Paper (2)
- Diploma Thesis (1)
- Periodical Part (1)

#### Keywords

- Case-Based Reasoning (11)
- AG-RESY (6)
- Praktikum (6)
- Fallbasiertes Schliessen (5)
- HANDFLEX (5)
- Location Theory (5)
- PARO (5)
- case-based problem solving (5)
- Abstraction (4)
- Fallbasiertes Schließen (4)

#### Faculty / Organisational entity

- Fachbereich Informatik (267)
- Fachbereich Mathematik (131)
- Fachbereich Physik (76)
- Fachbereich Chemie (19)
- Fachbereich Elektrotechnik und Informationstechnik (10)
- Fachbereich ARUBI (6)
- Fraunhofer (ITWM) (6)
- Fachbereich Wirtschaftswissenschaften (5)
- Fachbereich Biologie (2)
- Fachbereich Maschinenbau und Verfahrenstechnik (2)

In contrast to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question of whether there are any differences concerning the learning power of the two approaches. In this article we study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
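
As a toy illustration of a concept represented by a pair (CB, sim), not the construction used in the article, the following sketch classifies a query by the label of its most similar stored case; the feature vectors, the similarity measure and the case base are invented for the example.

```python
# Minimal sketch (not the article's construction): a concept represented
# implicitly by a case base CB and a similarity measure sim; a query is
# classified by the label of its most similar stored case.

def sim(x, y):
    """Toy similarity on boolean feature vectors: fraction of matching attributes."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

# CB is a set of (case, label) pairs; the pair (CB, sim) stands for the concept.
CB = [((1, 0, 1), True), ((0, 0, 1), True), ((1, 1, 0), False)]

def classify(query, case_base=CB, similarity=sim):
    case, label = max(case_base, key=lambda cl: similarity(query, cl[0]))
    return label

print(classify((0, 1, 1)))  # -> True (the most similar case is (0, 0, 1))
```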

A large set of criteria to evaluate formal methods for reactive systems is presented. To make this set more comprehensible, it is structured according to a Concept-Model of formal methods. We make clear that the catalogue must be made more specific before it can be applied, and explain some of the steps needed to do so. As an example, the catalogue is applied within the context of the application domain of building automation systems to three different formal methods: SDL, statecharts, and a temporal logic.

In this paper we give the definition of a solution concept in multicriteria combinatorial optimization. We show how Pareto, max-ordering and lexicographically optimal solutions can be incorporated in this framework. Furthermore we state some properties of lexicographic max-ordering solutions, which combine features of these three kinds of optimal solutions. Two of these properties, which are desirable from a decision maker's point of view, are satisfied if and only if the solution concept is that of lexicographic max-ordering.
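
As an illustration of the lexicographic max-ordering idea (a sketch, not taken from the paper), the snippet below compares candidate solutions of a minimization problem by sorting their objective vectors in non-increasing order and comparing the sorted vectors lexicographically; the objective values are invented.

```python
# Minimal sketch: lexicographic max-ordering comparison (for minimization).
# Each solution is evaluated by its vector of objective values; the vector is
# sorted in non-increasing order and the sorted vectors are compared
# lexicographically.

def lex_max_ordering_key(objective_values):
    return sorted(objective_values, reverse=True)

solutions = {
    "a": (4, 2, 3),   # sorted: (4, 3, 2)
    "b": (3, 3, 3),   # sorted: (3, 3, 3)
    "c": (5, 1, 1),   # sorted: (5, 1, 1)
}

best = min(solutions, key=lambda s: lex_max_ordering_key(solutions[s]))
print(best)  # -> 'b': its worst objective value (3) is the smallest
```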

A new approach for modelling time that does not rely on the concept of a clock is proposed. In order to establish a notion of time, system behaviour is represented as a joint progression of multiple threads of control, which satisfies a certain set of axioms. We show that the clock-independent time model is related to the well-known concept of a global clock and argue that both approaches establish the same notion of time.

Coloring terms (rippling) is a technique developed for inductive theorem proving which uses syntactic differences of terms to guide the proof search. Annotations (colors) to terms are used to maintain this information. This technique has several advantages; e.g., it is highly goal-oriented and involves little search. In this paper we give a general formalization of coloring terms in a higher-order setting. We introduce a simply-typed lambda calculus with color annotations and present an appropriate (pre-)unification algorithm. Our work is a formal basis for the implementation of rippling in a higher-order setting, which is required e.g. in the case of middle-out reasoning. Another application is in the construction of natural language semantics, where the color annotations rule out linguistically invalid readings that are possible using standard higher-order unification.

This paper develops a sound and complete transformation-based algorithm for unification in an extensional order-sorted combinatory logic supporting constant overloading and a higher-order sort concept. Appropriate notions of order-sorted weak equality and extensionality - reflecting order-sorted βη-equality in the corresponding lambda calculus given by Johann and Kohlhase - are defined, and the typed combinator-based higher-order unification techniques of Dougherty are modified to accommodate unification with respect to the theory they generate. The algorithm presented here can thus be viewed as a combinatory logic counterpart to that of Johann and Kohlhase, as well as a refinement of that of Dougherty, and provides evidence that combinatory logic is well-suited to serve as a framework for incorporating order-sorted higher-order reasoning into deduction systems aiming to capitalize on both the expressiveness of extensional higher-order logic and the efficiency of order-sorted calculi.

This paper describes a system that supports software development processes in virtual software corporations. A virtual software corporation consists of a set of enterprises that cooperate in projects to fulfill customer needs. Contracts are negotiated in the whole lifecycle of a software development project. The negotiations really influence the performance of a company. Therefore, it is useful to support negotiations and planning decisions with software agents. Our approach integrates software agent approaches for negotiation support with flexible multi-server workflow engines.

The concept of the Virtual Software Corporation (VSC) has recently become a practical reality as a result of advances in communication and distributed technologies. However, there are significant difficulties with the management of the software development process within a VSC. The main problem is the significantly increased communication complexity of the process model for such developments. The classical managerial hierarchy is generally replaced by a "flatter" network of commitments. Therefore new solution approaches are required to provide the necessary process support. The purpose of this paper is to present a solution approach which models the process based on deontic logic. The approach has been validated against a case study where it was used to model commitments and inter-human communications within the software development process of a VSC. The use of the formalism is exemplified through a prototype system using a layered multi-agent architecture.

Many discrepancy principles are known for choosing the parameter \(\alpha\) in the regularized operator equation \((T^*T + \alpha I)x_\alpha^\delta = T^*y^\delta\), \(\|y - y^\delta\| \leq \delta\), in order to approximate the minimal norm least-squares solution of the operator equation \(Tx = y\). In this paper we consider a class of discrepancy principles for choosing the regularization parameter when \(T^*T\) and \(T^*y^\delta\) are approximated by \(A_n\) and \(z_n^\delta\) respectively, with \(A_n\) not necessarily self-adjoint. This procedure generalizes the work of Engl and Neubauer (1985), and particular cases of the results are applicable to the regularized projection method as well as to a degenerate kernel method considered by Groetsch (1990).
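
A minimal numerical sketch of the general idea, not the paper's operator-theoretic setting: Tikhonov regularization of a small least-squares problem with the parameter chosen by a simple discrepancy rule. The matrix, the noise level and the constants tau and q are arbitrary choices for the example.

```python
# Minimal sketch: x_alpha = (A^T A + alpha I)^{-1} A^T y_delta, with alpha
# chosen so that the residual ||A x_alpha - y_delta|| drops below tau * delta.
import numpy as np

def tikhonov(A, y, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def choose_alpha(A, y_delta, delta, tau=1.1, alpha0=1.0, q=0.5, max_iter=50):
    """Decrease alpha geometrically until the residual meets the discrepancy bound."""
    alpha = alpha0
    for _ in range(max_iter):
        x = tikhonov(A, y_delta, alpha)
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            break
        alpha *= q
    return alpha, x

# Ill-conditioned toy problem with noisy data.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 20), 8, increasing=True)
x_true = rng.standard_normal(8)
delta = 1e-3
y_delta = A @ x_true + delta * rng.standard_normal(20)

alpha, x = choose_alpha(A, y_delta, np.sqrt(20) * delta)
print(alpha, np.linalg.norm(x - x_true))
```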

This paper investigates the suitability of the mobile agents approach to the problem of integrating a collection of local DBMS into a single heterogeneous large-scale distributed DBMS. The paper proposes a model of distributed transactions as a set of mobile agents and presents the relevant execution semantics. In addition, the mechanisms which are needed to guarantee the ACID properties in the considered environment are discussed.

Compared to conventional techniques in computational fluid dynamics, the lattice Boltzmann method (LBM) seems to be a completely different approach to solving the incompressible Navier-Stokes equations. The aim of this article is to correct this impression by showing the close relation of LBM to two standard methods: relaxation schemes and explicit finite difference discretizations. As a side effect, new starting points for a discretization of the incompressible Navier-Stokes equations are obtained.
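
For readers unfamiliar with the method, the following sketch shows the collide-and-stream relaxation structure that underlies LBM, applied to the 1D diffusion equation rather than the Navier-Stokes equations treated in the article; grid size, relaxation rate and initial condition are arbitrary.

```python
# Minimal sketch of the lattice-BGK structure: collision as relaxation towards
# a local equilibrium, followed by streaming (here a D1Q2 model for diffusion).
import numpy as np

nx, steps, omega = 200, 500, 1.2          # grid size, time steps, relaxation rate
f = np.zeros((2, nx))                      # two populations: moving right / left
rho = np.exp(-((np.arange(nx) - nx / 2) ** 2) / 50.0)   # initial density bump
f[0] = f[1] = 0.5 * rho                    # start at equilibrium

for _ in range(steps):
    rho = f[0] + f[1]
    feq = 0.5 * rho                        # local equilibrium (same for both)
    f[0] += omega * (feq - f[0])           # collision = relaxation step
    f[1] += omega * (feq - f[1])
    f[0] = np.roll(f[0], 1)                # streaming: right-movers shift right
    f[1] = np.roll(f[1], -1)               # left-movers shift left

# Effective diffusion coefficient in lattice units: D = 1/omega - 1/2.
print((f[0] + f[1]).sum(), 1 / omega - 0.5)
```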

Information technology support for complex, dynamic, and distributed business processes as they occur in engineering domains requires an advanced process management system which enhances currently available workflow management services with respect to integration, flexibility, and adaptation. We present a uniform and flexible framework for advanced process management on an abstract level which uses and adapts agent technology from distributed artificial intelligence for both modelling and enacting of processes. We identify two different ways of applying agent technology to process management: first, as a multi-agent system whose domain is process management; second, as a key infrastructure technology for building a process management system. We then follow the latter approach and introduce different agent types for managing activities, products, and resources which capture specific views on the process.

In continuous location problems we are given a set of existing facilities and we are looking for the location of one or several new facilities. In the classical approaches weights are assigned to existing facilities expressing the importance of the new facilities for the existing ones. In this paper, we consider a pointwise defined objective function where the weights are assigned to the existing facilities depending on the location of the new facility. This approach is shown to be a generalization of the median, center and centdian objective functions. In addition, this approach allows us to formulate completely new location models. Efficient algorithms as well as structural results for this algebraic approach to location problems are presented. Extensions to the multifacility and restricted case are also considered.

This paper describes the architecture and concept of operation of a Framework for Adaptive Process Modeling and Execution (FAME). The research addresses the absence of robust methods for supporting the software process management life cycle. FAME employs a novel, model-based approach in providing automated support for different activities in the software development life cycle including project definition, process design, process analysis, process enactment, process execution status monitoring, and execution status-triggered process redesign. FAME applications extend beyond the software development domain to areas such as agile manufacturing, project management, logistics planning, and business process reengineering.

Facility Location Problems are concerned with the optimal location of one or several new facilities with respect to a set of existing ones. The objectives involve the distance between new and existing facilities, usually a weighted sum or weighted maximum. Since the various stakeholders (decision makers) will have different opinions of the importance of the existing facilities, a multicriteria problem with several sets of weights, and thus several objectives, arises. In our approach, we assume the decision makers to make only fuzzy comparisons of the different existing facilities. A geometric mean method is used to obtain the fuzzy weights for each facility and each decision maker. The resulting multicriteria facility location problem is solved using fuzzy techniques again. We prove that the final compromise solution is weakly Pareto optimal, and Pareto optimal if it is unique or under certain assumptions on the estimates of the Nadir point. A numerical example is considered to illustrate the methodology.
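
As a rough illustration of the geometric mean method (here with crisp pairwise comparisons, whereas the paper works with fuzzy comparisons), the snippet below derives one decision maker's weights for three existing facilities from an invented comparison matrix.

```python
# Minimal sketch: weights from a pairwise comparison matrix by the geometric
# mean method; a_ij estimates how much more important facility i is than j.
import numpy as np

A = np.array([[1.0, 3.0, 0.5],
              [1/3., 1.0, 0.25],
              [2.0, 4.0, 1.0]])

row_gm = A.prod(axis=1) ** (1.0 / A.shape[1])   # geometric mean of each row
weights = row_gm / row_gm.sum()                  # normalize to sum to 1
print(weights.round(3))                          # one weight per existing facility
```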

A General Hilbert Space Approach to Wavelets and Its Application in Geopotential Determination
(1999)

A general approach to wavelets is presented within the framework of a separable functional Hilbert space H. The basic tool is the construction of H-product kernels by use of Fourier analysis with respect to an orthonormal basis in H. Scaling function and wavelet are defined in terms of H-product kernels. Wavelets are shown to be 'building blocks' that decorrelate the data. A pyramid scheme provides fast computation. Finally, the determination of the earth's gravitational potential from single and multipole expressions is organized as an example of wavelet approximation in Hilbert space structure.

We present an entropy concept measuring quantum localization in dynamical systems based on time averaged probability densities. The suggested entropy concept is a generalization of a recently introduced [PRL 75, 326 (1995)] phase-space entropy to any representation chosen according to the system and the physical question under consideration. In this paper we inspect the main characteristics of the entropy and the relation to other measures of localization. In particular the classical correspondence is discussed and the statistical properties are evaluated within the framework of random vector theory. In this way we show that the suggested entropy is a suitable method to detect quantum localization phenomena in dynamical systems.
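
A toy sketch of the underlying quantity, not the paper's specific construction: a Shannon-type entropy computed from the time-averaged probability density of a state in a chosen basis, here an invented three-level example.

```python
# Minimal sketch: localization entropy of a time-averaged probability density;
# the entropy is small for localized and large for spread-out densities.
import numpy as np

def time_averaged_density(psi_t):
    """psi_t: array of shape (timesteps, dim) of state vectors in the chosen basis."""
    return np.mean(np.abs(psi_t) ** 2, axis=0)

def localization_entropy(p, eps=1e-15):
    p = p / p.sum()
    return float(-np.sum(p * np.log(p + eps)))

# Toy trajectory: a state oscillating mainly between two of three basis states.
t = np.linspace(0, 10, 200)[:, None]
psi_t = np.hstack([np.cos(t), np.sin(t), 0.1 * np.ones_like(t)])
psi_t /= np.linalg.norm(psi_t, axis=1, keepdims=True)

print(localization_entropy(time_averaged_density(psi_t)))  # < ln(3) ~ 1.10
```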

In this paper we consider the problem of optimizing a piecewise-linear objective function over a non-convex domain. In particular we do not allow the solution to lie in the interior of a prespecified region R. We discuss the geometrical properties of this problem and present algorithms based on combinatorial arguments. In addition we show how quite complicatedly shaped sets R can be constructed while maintaining the combinatorial properties.

We provide an overview of UNICOM, an inductive theorem prover for equational logic which is based on refined rewriting and completion techniques. The architecture of the system as well as its functionality are described. Moreover, an insight into the most important aspects of the internal proof process is provided. This knowledge about how the central inductive proof component of the system essentially works is crucial for human users who want to solve non-trivial proof tasks with UNICOM and thoroughly analyse potential failures. The presentation is focussed on practical aspects of understanding and using UNICOM. A brief but complete description of the command interface, an installation guide, an example session, a detailed extended example illustrating various special features and a collection of successfully handled examples are also included.

Cooperative decision making involves a continuous process of assessing the validity of data, information and knowledge acquired and inferred by the colleagues; that is, the shared knowledge space must be transparent. The ACCORD methodology provides an interpretation framework for the mapping of domain facts - constituting the world model of the expert - onto conceptual models, which can be expressed in formal representations. The ACCORD-BPM framework allows a stepwise and non-arbitrary reconstruction of the problem solving competence of BPM experts as a prerequisite for an appropriate architecture of both BPM knowledge bases and the BPM "reasoning device".

A map for an autonomous mobile robot (AMR) in an indoor environment for the purpose of continuous position and orientation estimation is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown where the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans to statistical representations of the environment, so that a cross-correlation of an actual converted scan and this representative yields the actual position and orientation in a global coordinate system. The map itself is built of representative scans for the positions where the AMR has been, so that it is able to find its position and orientation by comparing the actual scan with a scan stored in the map.
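
The correlation idea can be illustrated with a small sketch (not the system described in the paper): the rotation between two 360-degree range scans is estimated by circular cross-correlation of their range profiles; the synthetic "room signature" is made up, and position estimation would use a similar correlation over x and y.

```python
# Minimal sketch: estimate the rotation between two laser scans by circularly
# cross-correlating their range profiles over the scan angle.
import numpy as np

def estimate_rotation(ranges_ref, ranges_cur):
    """Both inputs: range readings over a full 360-degree sweep (same length)."""
    n = len(ranges_ref)
    scores = [np.dot(np.roll(ranges_ref, s), ranges_cur) for s in range(n)]
    return int(np.argmax(scores)) * 360.0 / n     # estimated rotation in degrees

angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
reference = 5.0 + np.cos(3 * angles) + 0.5 * np.sin(angles)   # synthetic room signature
current = np.roll(reference, 40)                  # same place, robot rotated by 40 degrees
print(estimate_rotation(reference, current))      # -> 40.0
```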

We present a mathematical knowledge base containing the factual knowledge of the first of three parts of a textbook on semi-groups and automata, namely "P. Deussen: Halbgruppen und Automaten". Like almost all mathematical textbooks this textbook is not self-contained, but there are some algebraic and set-theoretical concepts not being explained. These concepts are added to the knowledge base. Furthermore there is knowledge about the natural numbers, which is formalized following the first paragraph of "E. Landau: Grundlagen der Analysis". The data base is written in a sorted higher-order logic, a variant of POST, the working language of the proof development environment Omega-MKRP. We distinguish three different types of knowledge: axioms, definitions, and theorems. Up to now, there are only 2 axioms (natural numbers and cardinality), 149 definitions (like that for a semi-group), and 165 theorems. The consistency of such knowledge bases cannot be proved in general, but inconsistencies may be imported only by the axioms. Definitions and theorems should not lead to any inconsistency since definitions form conservative extensions and theorems are proved to be consequences.

This paper addresses a model of analogy-driven theorem proving that is more general and cognitively more adequate than previous approaches. The model works at the level of proof-plans. More precisely, we consider analogy as a control strategy in proof planning that employs a source proof-plan to guide the construction of a proof-plan for the target problem. Our approach includes a reformulation of the source proof-plan. This is in accordance with the well known fact that constructing an analogy in maths often amounts to first finding the appropriate representation which brings out the similarity of two problems, i.e., finding the right concepts and the right level of abstraction. Several well known theorems were processed by our analogy-driven proof-plan construction that could not be proven analogically by previous approaches.

The relation between the Lattice Boltzmann Method, which has recently become popular, and the Kinetic Schemes, which are routinely used in Computational Fluid Dynamics, is explored. A new discrete velocity model for the numerical solution of the Navier-Stokes equations for incompressible fluid flow is presented by combining both approaches. The new scheme can be interpreted as a pseudo-compressibility method and, for a particular choice of parameters, this interpretation carries over to the Lattice Boltzmann Method.

Compared to standard numerical methods for hyperbolic systems of conservation laws, Kinetic Schemes model propagation of information by particles instead of waves. In this article, the wave and the particle concept are shown to be closely related. Moreover, a general approach to the construction of Kinetic Schemes for hyperbolic conservation laws is given which summarizes several approaches discussed by other authors. The approach also demonstrates why Kinetic Schemes are particularly well suited for scalar conservation laws and why extensions to general systems are less natural.

The problem of finding an optimal location X* minimizing the maximum Euclidean distance to existing facilities is well solved by, e.g., the Elzinga-Hearn algorithm. In practical situations, however, X* will often not be feasible. We therefore suggest in this note a polynomial algorithm which finds an optimal location X^F in a feasible subset F of the plane R^2.
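
A brute-force illustration of the restricted minimax objective, not the polynomial algorithm proposed in the note: the facility coordinates, the forbidden disc defining F and the candidate grid are all invented.

```python
# Minimal sketch: evaluate max_i ||X - a_i|| over a grid of candidate points
# restricted to a feasible region F, and keep the best candidate.
import numpy as np

existing = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])   # facilities a_i

def feasible(p):                      # hypothetical F: points outside a forbidden disc
    return np.linalg.norm(p - np.array([2.0, 1.0])) >= 1.5

def minimax_objective(p):
    return np.max(np.linalg.norm(existing - p, axis=1))

grid = [np.array([x, y]) for x in np.linspace(-1, 5, 121)
                         for y in np.linspace(-1, 4, 101)]
candidates = [p for p in grid if feasible(p)]
best = min(candidates, key=minimax_objective)
print(best, minimax_objective(best))
```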

The problem of providing connectivity for a collection of applications is largely one of data integration: the communicating parties must agree on the semantics and syntax of the data being exchanged. In earlier papers [mp:jsc1, sg:BSG1], it was proposed that dictionaries of definitions for operators, functions, and symbolic constants can effectively address the problem of semantic data integration. In this paper we extend that earlier work to discuss the important issues in data integration at the syntactic level and propose a set of solutions that are both general, supporting a wide range of data objects with typing information, and efficient, supporting fast transmission and parsing.

We consider a scale discrete wavelet approach on the sphere based on spherical radial basis functions. If the generators of the wavelets have a compact support, the scale and detail spaces are finite-dimensional, so that the detail information of a function is determined by only finitely many wavelet coefficients for each scale. We describe a pyramid scheme for the recursive determination of the wavelet coefficients from level to level, starting from an initial approximation of a given function. Basic tools are integration formulas which are exact for functions up to a given polynomial degree and spherical convolutions.

We consider a multiple objective linear program (MOLP) \(\max\{Cx \mid Ax = b,\ x \in \mathbb{N}_0^n\}\), where \(C = (c_{ij})\) is the \(p \times n\) matrix of the \(p\) different objective functions \(z_i(x) = c_{i1}x_1 + \dots + c_{in}x_n\), \(i = 1, \dots, p\), and \(A\) is the \(m \times n\) matrix of a system of \(m\) linear equations \(a_{k1}x_1 + \dots + a_{kn}x_n = b_k\), \(k = 1, \dots, m\), which form the set of constraints of the problem. All coefficients are assumed to be natural numbers or zero. The set \(M\) of admissible solutions consists of all \(x\) satisfying these constraints. An efficient solution \(\hat{x}\) is an admissible solution such that there exists no other admissible solution \(x'\) with \(C\hat{x} \leq Cx'\) and \(C\hat{x} \neq Cx'\). The efficient solutions play the role of optimal solutions for the MOLP, and it is our aim to determine the set of all efficient solutions.
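
To illustrate the notion of efficiency (the paper characterizes the whole efficient set; this only checks single solutions), the sketch below tests admissible solutions of a tiny two-objective problem for dominance; the matrix C and the admissible set are invented.

```python
# Minimal sketch: efficiency check by pairwise dominance tests over an
# explicitly given admissible set of a small maximization MOLP.
import numpy as np

C = np.array([[1, 0], [0, 1]])                  # two objectives z1 = x1, z2 = x2
admissible = [np.array(x) for x in [(3, 0), (2, 2), (0, 3), (1, 1)]]

def dominates(z1, z2):
    """z1 dominates z2 for maximization: >= everywhere and > somewhere."""
    return np.all(z1 >= z2) and np.any(z1 > z2)

def efficient(x, solutions):
    zx = C @ x
    return not any(dominates(C @ y, zx) for y in solutions)

for x in admissible:
    print(x, efficient(x, admissible))          # (1, 1) is dominated by (2, 2)
```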

We investigate one of the classical problems of the theory of term rewriting, namely termination. We present an ordering for comparing higher-order terms that can be utilized for testing termination and decreasingness of higher-order conditional term rewriting systems. The ordering relies on a first-order interpretation of higher-order terms and a suitable extension of the RPO.

In this paper we consider the problem of locating one new facility in the plane with respect to a given set of existing facilities, where a set of polygonal barriers restricts traveling. This non-convex optimization problem can be reduced to a finite set of convex subproblems if the objective function is a convex function of the travel distances between the new and the existing facilities (like, e.g., the median and center objective functions). An exact algorithm and a heuristic solution procedure based on this reduction result are developed.

The semantics of everyday language and the semantics of its naive translation into classical first-order language considerably differ. An important discrepancy that is addressed in this paper is about the implicit assumption of what exists. For instance, in the case of universal quantification natural language uses restrictions and presupposes that these restrictions are non-empty, while in classical logic it is only assumed that the whole universe is non-empty. On the other hand, all constants mentioned in classical logic are presupposed to exist, while it makes no problems to speak about hypothetical objects in everyday language. These problems have been discussed in philosophical logic, and some adequate many-valued logics were developed to model these phenomena much better than classical first-order logic can do. An adequate calculus, however, has not yet been given. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. Unfortunately, restricted quantifications are not truth-functional, hence they do not fit the framework directly. We solve this problem by applying recent methods from sorted logics.

A compact subset E of the complex plane is called removable if all bounded analytic functions on its complement are constant or, equivalently, if its analytic capacity vanishes. The problem of finding a geometric characterization of the removable sets is more than a hundred years old and still not completely solved.

In the scalar case one knows that a complex normalized function of bounded variation \(\phi\) on \([0,1]\) defines a unique complex regular Borel measure \(\mu\) on \([0,1]\). In this note we show that this is no longer true in general in the vector valued case, even if \(\phi\) is assumed to be continuous. Moreover, the functions \(\phi\) which determine a countably additive vector measure \(\mu\) are characterized.

The asymptotic behaviour of a singular-perturbed two-phase Stefan problem due to slow diffusion in one of the two phases is investigated. In the limit the model equations reduce to a one-phase Stefan problem. A boundary layer at the moving interface makes it necessary to use a corrected interface condition obtained from matched asymptotic expansions. The approach is validated by numerical experiments using a front-tracking method.

This report presents the properties of a specification of the domain of process planning for rotary symmetrical workpieces. The specification results from a model for problem solving in this domain that involves different reasoners, one of which is an AI planner that achieves goals corresponding to machining workpieces by considering certain operational restrictions of the domain. When planning with SNLP (McAllester and Rosenblitt, 1991), we will show that the resulting plans have the property of minimizing the use of certain key operations. Further, we will show that, for elastic protected plans (Kambhampati et al., 1996) such as the ones produced by SNLP, the goals corresponding to machining parts of a workpiece are OE-constrained trivially serializable, a special form of trivial serializability (Barrett and Weld, 1994). However, we will show that planning with SNLP in this domain can be very difficult: elastic protected plans for machining parts of a workpiece are non-mergeable. Finally, we will show that, for suffix, prefix, or suffix-and-prefix plans such as the ones produced by state-space planners, it is not possible to have both properties, being OE-constrained trivially serializable and minimizing the use of the key operations, at the same time.

Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene has already given a semantic account of partial functions using a three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a tableau calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
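
For readers unfamiliar with the logic, here is a minimal sketch of the strong Kleene truth tables for negation, conjunction and disjunction, with None standing for "undefined" (e.g. a partial function applied outside its domain); the restricted quantification that the paper actually deals with is not captured by this.

```python
# Minimal sketch: strong Kleene three-valued connectives with values
# True, False and None ('undefined').
def kleene_not(a):
    return None if a is None else (not a)

def kleene_and(a, b):
    if a is False or b is False:      # False dominates
        return False
    if a is None or b is None:        # otherwise undefined propagates
        return None
    return True

def kleene_or(a, b):                  # defined via De Morgan
    return kleene_not(kleene_and(kleene_not(a), kleene_not(b)))

for a in (True, None, False):
    for b in (True, None, False):
        print(a, b, kleene_and(a, b), kleene_or(a, b))
```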

A Tailored Real Time Temporal Logic for Specifying Requirements of Building Automation Systems
(1999)

A tailored real time temporal logic for specifying requirements of building automation systems is introduced and analyzed. The logic features several new real time operators, which are chosen with regard to the application area. The new operators improve the conciseness and readability of requirements as compared to a general-purpose real time temporal logic. In addition, some of the operators also enhance the expressiveness of the logic. A number of properties of the new operators are presented and proven.

Concept mapping is a simple and intuitive visual form of knowledge representation. Concept maps can be categorized as informal or formal, where the latter are characterized by implementing a semantic model that constrains their components. Software engineering is a domain that has successfully adopted formal concept maps to visualize and specify complex systems. Automated tools have been implemented to support these models, although their semantic constraints are hardcoded within the systems and hidden from users. This paper presents the Constraint Graphs and jKSImapper systems. Constraint Graphs is a flexible and powerful graphical system interface for specifying concept mapping notations. In addition, jKSImapper is a multi-user concept mapping editor for the Internet and the World Wide Web. Together, these systems aim to support user-definable formal concept mapping notations and distributed collaboration on the Internet and the World Wide Web.

A novel method is presented which allows a fast computation of complex energy resonance states in Stark systems, i.e. systems in a homogeneous field. The technique is based on the truncation of a shift-operator in momentum space. Numerical results for space periodic and non-periodic systems illustrate the extreme simplicity of the method.

In this paper we introduce a new type of single facility location problems on networks which includes as special cases most of the classical criteria in the literature. Structural results as well as a finite dominating set for the optimal locations are developed. Also the extension to the multi-facility case is discussed.

The CBR team of the LISA is involved in several applied research projects based on the CBR paradigm. These applications use adaptation to solve the specific problems they face. Thus we have accumulated some experience about how adaptation processes can be expressed and formalized. The literature on the subject is quite extensive but demonstrates a lack of formalism. At most, there exist some classifications of different types of adaptation.

We present a way to describe Reason Maintenance Systems using the same formalism for justification based as well as for assumption based approaches. This formalism uses labelled formulae and thus is a special case of Gabbay's labelled deductive systems. Since our approach is logic based, we are able to get a semantics oriented description of the systems in question. Instead of restricting ourselves to e.g. propositional Horn formulae, as was done in the past, we admit arbitrary logics. This enables us to characterize systems as a whole, including both the reason maintenance component and the problem solver, nevertheless maintaining a separation between the basic logic and the part that describes the label propagation. The possibility to freely vary the basic logic enables us not only to describe various existing systems, but can also help in the design of completely new ones. We also show that it is possible to implement systems based directly on our labelled logic and plead for "incremental calculi" crafted to attack undecidable logics. Furthermore it is shown that the same approach can be used to handle default reasoning, if the propositional labels are upgraded to first order.

Caching has long been used to reduce average access latency, from registers and memory pages cached by hardware, to the application level such as a web browser retaining retrieved documents. We focus here on the high-level caching of potentially shared networked documents and define two terms in relation to this type of caching: Zero latency refers to the condition where access to a document produces a cache hit on the local machine, that is, there is little or no latency due to the network (we assume that latency due to local disk and memory access is insignificant in comparison to network latency). A document with zero latency usually has been placed in the cache after a previous access, or has been pulled there through some prefetching mechanism. Negative latency refers to automatic presentation, or push, of a document to a user based on a prediction that the user will want that document. With an ideal system, a user would be presented with documents either that she was about to request, or that she would not know to request but that would be immediately useful to her.

We will answer a question posed in [DJK91] and show that Huet's completion algorithm [Hu81] becomes incomplete, i.e. it may generate a term rewriting system that is not confluent, if it is modified in such a way that the reduction ordering used for completion can be changed during completion, provided that the new ordering is compatible with the actual rules. In particular, we will show that this problem may not only arise if the modified completion algorithm does not terminate: even if the algorithm terminates without failure, the generated finite noetherian term rewriting system may be non-confluent. Most existing implementations of the Knuth-Bendix algorithm provide the user with help in choosing a reduction ordering: if an unorientable equation is encountered, then the user has many options, especially the one to orient the equation manually. The integration of this feature is based on the widespread assumption that, if equations are oriented by hand during completion and the completion process terminates with success, then the generated finite system is a possibly non-terminating but locally confluent system (see e.g. [KZ89]). Our examples will show that this assumption is not true.

The reasoning power of human-oriented plan-based reasoning systems is primarily derived from their domain-specific problem solving knowledge. Such knowledge is, however, intrinsically incomplete. In order to model the human ability of adapting existing methods to new situations we present in this work a declarative approach for representing methods, which can be adapted by so-called meta-methods. Since apparently the success of this approach relies on the existence of general and strong meta-methods, we describe several meta-methods of general interest in detail by presenting the problem solving process of two familiar classes of mathematical problems. These examples should illustrate our philosophy of proof planning as well: besides planning with the current repertoire of methods, the repertoire of methods evolves with experience in that new ones are created by meta-methods which modify existing ones.

INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for its intended kind of tasks. When problems are solved via case-based reasoning, the primary kind of knowledge that is used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.

In this paper we generalize the notion of method for proof planning. While we adopt the general structure of methods introduced by Alan Bundy, we make an essential advancement in that we strictly separate the declarative knowledge from the procedural knowledge. This change of paradigm not only leads to representations easier to understand, it also enables modeling the important activity of formulating meta-methods, that is, operators that adapt the declarative part of existing methods to suit novel situations. Thus this change of representation leads to a considerably strengthened planning mechanism. After presenting our declarative approach towards methods we describe the basic proof planning process with these. Then we define the notion of meta-method, provide an overview of practical examples and illustrate how meta-methods can be integrated into the planning process.

Extending the plan-based paradigm for automated theorem proving, we developed in previous work a declarative approach towards representing methods in a proof planning framework to support their mechanical modification. This paper presents a detailed study of a class of particular methods, embodying variations of a mathematical technique called diagonalization. The purpose of this paper is mainly two-fold. First we demonstrate that typical mathematical methods can be represented in our framework in a natural way. Second we illustrate our philosophy of proof planning: besides planning with a fixed repertoire of methods, meta-methods create new methods by modifying existing ones. With the help of three different diagonalization problems we present an example trace protocol of the evolution of methods: an initial method is extracted from a particular successful proof. This initial method is then reformulated for the subsequent problems, and more general methods can be obtained by abstracting existing methods. Finally we come up with a fairly abstract method capable of dealing with all the three problems, since it captures the very key idea of diagonalization.

Accelerating the maturation process within the software engineering discipline may result in boosts of development productivity. One way to enable this acceleration is to develop tools and processes to mimic evolution of traditional engineering disciplines. Principles established in traditional engineering disciplines represent high-level guidance to constructing these tools and processes. This paper discusses two principles found in the traditional engineering disciplines and how these principles can apply to mature the software engineering discipline. The discussion is concretized through description of the Collaborative Management Environment, a software system under collaborative development among several national laboratories.

Integrated project management means that design and planning are interleaved with plan execution, allowing both the design and plan to be changed as necessary. This requires that the right effects of change are propagated through the plan and design. When this is distributed among designers and planners, no one may have all of the information to perform such propagation and it is important to identify what effects should be propagated to whom when. We describe a set of dependencies among plan and design elements that allow such notification by a set of message-passing software agents. The result is to provide a novel level of computer support for complex projects.

Algorithmic ideal theory
(1999)

Algebraic geometers have used Gröbner bases as the main computational tool for many years, either to prove a theorem or to disprove a conjecture or just to experiment with examples in order to obtain a feeling about the structure of an algebraic variety. Non-trivial mathematical problems usually lead to non-trivial Gröbner basis computations, which is the reason why several improvements and efficient implementations have been provided by algebraic geometers (for example, the systems CoCoA, Macaulay and SINGULAR). The present paper starts with an introduction to some concepts of algebraic geometry which should be understood by people with (almost) no knowledge in this field. In the second chapter we introduce standard bases (generalization of Gröbner bases to non-well-orderings), which are needed for applications to local algebraic geometry (singularity theory), and a method for computing syzygies and free resolutions. In the third chapter several algorithms for primary decomposition of polynomial ideals are presented, together with a discussion of improvements and preferable choices. We also describe a newly invented algorithm for computing the normalization of a reduced affine ring. The last chapter gives an elementary introduction to singularity theory and then describes algorithms, using standard bases, to compute infinitesimal deformations and obstructions, which are basic for the deformation theory of isolated singularities. It is impossible to list all papers where Gröbner bases have been used in local and global algebraic geometry, and even less possible to give an overview of these contributions. We have, therefore, included only a few references to papers which contain interesting applications and which are not mentioned in this tutorial paper. The interested reader will find many more in the other contributions of this volume and in the literature cited there.
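
As a small practical illustration (using SymPy rather than the systems named in the paper, i.e. CoCoA, Macaulay or SINGULAR), the following computes a lexicographic Gröbner basis of a standard example ideal.

```python
# Minimal sketch: a Groebner basis computation with SymPy, lexicographic order.
from sympy import groebner, symbols

x, y, z = symbols('x y z')
ideal = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]
G = groebner(ideal, x, y, z, order='lex')
print(G)
```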

Algorithms in Singular
(1999)

This paper presents a new kind of abstraction, which has been developed for the purpose of proof planning. The basic idea of this paper is to abstract a given theorem and to find an abstract proof of it. Once an abstract proof has been found, this proof has to be refined to a real proof of the original theorem. We present a goal oriented abstraction for the purpose of equality proof planning, which is parameterized by common parts of the left- and right-hand sides of the given equality. Therefore, this abstraction technique provides an abstract equality problem which is more adequate than those generated by the abstractions known so far. The presented abstraction also supports the heuristic search process based on the difference reduction paradigm. We give a formal definition of the abstract space including the objects and their manipulation. Furthermore, we prove some properties in order to allow an efficient implementation of the presented abstraction.

There are two general approaches to providing for isochronous streams in the current Internet. The first is the resource reservation approach, through protocols such as RSVP or ATM technology. This provides bandwidth guarantees; however, it also requires significant upgrading of resources in the underlying network. The other common approach is adaptive rate control, where the end-system controls its rate according to feedback from the client population. This approach cannot guarantee timely delivery and raises some scaling questions; however, a properly implemented scheme does improve quality and requires no changes to the underlying IP network. Hence, there exists a dichotomy of requirements: 1. to cater for reservation protocols or 'hooks' for future reservation components, and 2. to provide an architecture with an application-controlled QoS scheme which scales to the size of the current Internet in a best-effort architecture.

An agent-based approach to managing distributed, multi-platform software development projects
(1999)

This paper describes work undertaken within the context of the P3 (Project and Process Prompter) Project, which aims to develop the Prompter tool, a 'decision-support tool to assist in the planning and managing of a software development project'. Prompter will have the ability to help software project managers to assimilate best practice and 'know how' in the field of software project management and will incorporate expert critiquing to assist with solving the complex problems associated with software project management. This paper focuses on Prompter's agent-based approach to tackling the problems of distributed, platform-independent support.

In 1978, Klop demonstrated that a rewrite system constructed by adding the untyped lambda calculus, which has the Church-Rosser property, to a Church-Rosser first-order algebraic rewrite system may not be Church-Rosser. In contrast, Breazu-Tannen recently showed that augmenting any Church-Rosser first-order algebraic rewrite system with the simply-typed lambda calculus results in a Church-Rosser rewrite system. In addition, Breazu-Tannen and Gallier have shown that the second-order polymorphic lambda calculus can be added to such rewrite systems without compromising the Church-Rosser property (for terms which can be provably typed). There are other systems for which a Church-Rosser result would be desirable, among them being λ^t+SP+FIX, the simply-typed lambda calculus extended with surjective pairing and fixed points. This paper will show that Klop's untyped counterexample can be lifted to a typed system to demonstrate that λ^t+SP+FIX is not Church-Rosser.

We propose an approach to the problem of proof control for our new first-order inductive theorem prover QuodLibet that is characterized by a great deal of flexibility w.r.t. the forms of proof control the prover supports. The approach is based on so-called (proof) tactics, i.e. proof control routines written in a special proof control language named QML. QuodLibet provides a set of tactics (in addition to the elementary inference rules), which range from tactics for trivial simplification steps to tactics representing comprehensive inductive proof strategies. Moreover, QuodLibet allows new tactics that are written by the user in QML to be integrated into the system to dynamically extend its functionality.

We describe a technique to make application programs fault tolerant. This technique is based on the concept of checkpointing from an active program to one or more passive backup copies which serve as an abstraction of stable memory. If the primary copy fails, one of the backup copies takes over and resumes processing service requests. After each failure a new backup copy is created in order to restore the replication degree of the service. All mechanisms necessary to achieve and maintain fault tolerance can be added automatically to the code of a non-fault-tolerant server, thus making fault tolerance completely transparent for the application programmer.
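
A minimal in-process sketch of the checkpointing idea described above, not the paper's transparent mechanism: the primary checkpoints its state to a passive backup after each request, and the backup's last checkpoint is used to resume after a (simulated) failure; in a real system the state would be shipped over the network.

```python
# Minimal sketch: primary/backup checkpointing simulated in one process.
import copy

class Backup:
    def __init__(self):
        self.checkpoint = None
    def store(self, state):
        self.checkpoint = copy.deepcopy(state)   # keep the last checkpoint

class Primary:
    def __init__(self, backup):
        self.state = {"processed": 0}
        self.backup = backup
    def handle_request(self):
        self.state["processed"] += 1
        self.backup.store(self.state)            # checkpoint after each request

backup = Backup()
primary = Primary(backup)
for _ in range(3):
    primary.handle_request()

# Primary crashes; a new server takes over from the last checkpoint.
recovered = Primary(Backup())
recovered.state = backup.checkpoint
print(recovered.state)                           # {'processed': 3}
```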

An interrupter for use in a daisy-chained VME bus interrupt system has been designed and implemented as an asynchronous sequential circuit. The concurrency of the processes posed a design problem that was solved by means of a systematic design procedure that uses Petri nets for specifying system and interrupter behaviour, and for deriving a primitive flow table. Classical design and additional measures to cope with non-fundamental mode operation yielded a coded state-machine representation. This was implemented on a GAL 22V10, chosen for its hazard-preventing structure and for rapid prototyping in student laboratories.

In recent years the demand for business process modelling (BPM) became apparent in many different communities, e.g. information systems engineering, requirements engineering [KiB94], software engineering and knowledge engineering (e.g. [BrV94], [SWH+94]). This suggests aiming at a unifying view on business process modelling in all these disciplines. To achieve the business goals, some problems which obstruct these goals must be solved. This can be done either by restructuring the business process, by application of standard software, or by developing individual software components such as knowledge based systems (KBSs). To be able to model business goals and to analyse problems occurring during the business processes, these processes, including organisational structures and activities, have to be modelled. This is also true when building a KBS in an enterprise environment. Because the KBS is only a small part of the whole business organisation, it must be embedded into or at least linked to all relevant business processes, i.e. it should not be a stand-alone solution. For this purpose we extend the MIKE approach [AFS96] in the BMBF project WORKS (Work Oriented Design of Knowledge Systems) by offering business models for modelling relevant aspects of an enterprise. To be able to define an integrated framework with other possibilities for improving an enterprise (e.g. information systems engineering), we determine the standard views of an enterprise. Next we define the views that are necessary for developing a KBS.

The development of complex software systems is driven by many diverse and sometimes contradictory requirements such as correctness and maintainability of resulting products, development costs, and time-to-market. To alleviate these difficulties, we propose a development method for distributed systems that integrates different basic approaches. First, it combines the use of the formal description technique SDL with software reuse concepts. This results in the definition of a use-case driven, incremental development method with SDL-patterns as the main reusable artifacts. Experience with this approach has shown that there are several other factors of influence, such as the quality of reuse artifacts or the experience of the development team. Therefore, we further combined our SDL-pattern approach with an improvement methodology known from the area of experimental software engineering. In order to demonstrate the validity of this integrating approach, we sketch some representative outcomes of a case study.

Mechanised reasoning systems and computer algebra systems have apparently different objectives. Their integration is, however, highly desirable, since in many formal proofs both of the two different tasks, proving and calculating, have to be performed. Even more importantly, proof and computation are often interwoven and not easily separable. In the context of producing reliable proofs, the question how to ensure correctness when integrating a computer algebra system into a mechanised reasoning system is crucial. In this contribution, we discuss the correctness problems that arise from such an integration and advocate an approach in which the calculations of the computer algebra system are checked at the calculus level of the mechanised reasoning system. This can be achieved by adding a verbose mode to the computer algebra system which produces high-level protocol information that can be processed by an interface to derive proof plans. Such a proof plan in turn can be expanded to proofs at different levels of abstraction, so the approach is well-suited for producing a high-level verbalised explication as well as a low-level machine-checkable calculus-level proof. We present an implementation of our ideas and exemplify them using an automatically solved extended example.

In this paper we consider generalizations of multifacility location problems in which, as an additional constraint, the new facilities are not allowed to be located in a prespecified region. We propose several different solution schemes for this non-convex optimization problem. These include a linear programming type approach, penalty approaches and barrier approaches. Moreover, structural results as well as illustrative examples showing the difficulties of this problem are presented.
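
One of the schemes mentioned above, the penalty approach, can be sketched as follows (an illustration with invented data, not the paper's formulation): the forbidden region R is a disc, and locating inside it is discouraged by a quadratic penalty term.

```python
# Minimal sketch: minimize a weighted sum of distances to existing facilities
# while penalizing a new facility that falls inside the forbidden region R.
import numpy as np
from scipy.optimize import minimize

existing = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
weights = np.array([1.0, 1.0, 2.0])
center, radius = np.array([2.0, 3.0]), 1.0      # hypothetical forbidden disc R

def penalized_objective(p, mu=100.0):
    dist_sum = np.sum(weights * np.linalg.norm(existing - p, axis=1))
    violation = max(0.0, radius - np.linalg.norm(p - center))   # > 0 inside R
    return dist_sum + mu * violation ** 2

result = minimize(penalized_objective, x0=np.array([0.0, 0.0]), method='Nelder-Mead')
print(result.x)   # ends up approximately on the boundary of R, below (2, 3)
```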

The increasing use of distributed computer systems leads to an increasing need for distributed applications. Their development in various domains like office automation or computer integrated manufacturing is not sufficiently supported by current techniques. New software engineering concepts are needed in the three areas 'languages', 'tools', and 'environments'. We believe that object-oriented techniques and graphics support are key approaches to major achievements in all three areas. As a consequence, we developed a universal object-oriented graphical editor ODE as one of our basic tools (tool building tool). ODE is based on the object-oriented paradigm, with some important extensions like built-in object relations. It has an extensible functional language which allows for customization of the editor. ODE was developed as part of DOCASE, a software production environment for distributed applications. The basic ideas of DOCASE will be presented and the requirements for ODE will be pointed out. Then ODE will be described in detail, followed by a sample customization of ODE: the one for the DOCASE design language.

Several activities around the world aim at integrating object-oriented data models with relational ones in order to improve database management systems. As a first result of these activities, object-relational database management systems (ORDBMS) are already commercially available and, simultaneously, are subject to several research projects. This (position) paper reports on our activities in exploiting object-relational database technology for establishing repository manager functionality supporting software engineering (SE) processes. We argue that some of the key features of ORDBMS can directly be exploited to fulfill many of the needs of SE processes. Thus, ORDBMS, as we think, are much better suited to support SE applications than any others. Nevertheless, additional functionality, e. g., providing adequate version management, is required in order to gain a completely satisfying SE repository. In order to remain flexible, we have developed a generative approach for providing this additional functionality. It remains to be seen whether this approach, in turn, can effectively exploit ORDBMS features. This paper, therefore, wants to show that ORDBMS can substantially contribute to both establishing and running SE repositories.

Typical examples, that is, examples that are representative for a particular situation or concept, play an important role in human knowledge representation and reasoning. In real life situations more often than not, instead of a lengthy abstract characterization, a typical example is used to describe the situation. This well-known observation has been the motivation for various investigations in experimental psychology, which also motivate our formal characterization of typical examples, based on a partial order for their typicality. Reasoning by typical examples is then developed as a special case of analogical reasoning using the semantic information contained in the corresponding concept structures. We derive new inference rules by replacing the explicit information about connections and similarity, which are normally used to formalize analogical inference rules, by information about the relationship to typical examples. Using these inference rules analogical reasoning proceeds by checking a related typical example; this is a form of reasoning based on semantic information from cases.

This case study examines in detail the theorems and proofs that are shown by analogy in a mathematical textbook on semigroups and automata that is widely used as an undergraduate textbook in theoretical computer science at German universities (P. Deussen, Halbgruppen und Automaten, Springer 1971). The study shows the important role of restructuring a proof for finding analogous subproofs, and of reformulating a proof for the analogical transformation. It also emphasizes the importance of the relevant assumptions of a known proof, i.e., of those assumptions actually used in the proof. In this document we show the theorems, the proof structure, the subproblems and the proofs of subproblems and their analogues with the purpose to provide an empirical test set of cases for automated analogy-driven theorem proving. Theorems and their proofs are given in natural language augmented by the usual set of mathematical symbols in the studied textbook. As a first step we encode the theorems in logic and show the actual restructuring. Secondly, we code the proofs in a Natural Deduction calculus such that a formal analysis becomes possible and mention reformulations that are necessary in order to reveal the analogy.

Analogy in CLAM
(1999)

CLAM is a proof planner, developed by the Dream group in Edinburgh, that mainly operates for inductive proofs. This paper addresses the question how an analogy model that I developed independently of CLAM can be applied to CLAM, and it presents analogy-driven proof plan construction as a control strategy of CLAM. This strategy is realized as a derivational analogy that includes the reformulation of proof plans. The analogical replay checks whether the reformulated justifications of the source plan methods hold in the target as a permission to transfer the method to the target plan. Since CLAM has very efficient heuristic search strategies, the main purpose of the analogy is to suggest lemmas, to replay not commonly loaded methods, to suggest induction variables and induction terms, and to override control rather than to construct a target proof plan that can be built by CLAM itself more efficiently.

The amount of user interaction is the prime cause of costs in interactive program verification. This paper describes an internal analogy technique that reuses subproofs in the verification of state-based specifications. It identifies common patterns of subproofs and their justifications in order to reuse these subproofs; thus significant savings on the number of user interactions in a verification proof are achievable.

Many mathematical proofs are hard to generate for humans and even harder for automated theorem provers. Classical techniques of automated theorem proving involve the application of basic rules, of built-in special procedures, or of tactics. Melis (Melis 1993) introduced a new method for analogical reasoning in automated theorem proving. In this paper we show how the derivational analogy replay method is related and extended to encompass analogy-driven proof plan construction. The method is evaluated by showing the proof plan generation of the Pumping Lemma for context free languages derived by analogy with the proof plan of the Pumping Lemma for regular languages. This is an impressive evaluation test for the analogical reasoning method applied to automated theorem proving, as the automated proof of this Pumping Lemma is beyond the capabilities of any of the current automated theorem provers.

Annual Report 1998
(1999)

In this work, the induction of apoptosis by four α,β-unsaturated aldehydes and one cyclic ketone was investigated. Apoptosis induction was determined by two-parameter flow cytometry (DNA/protein measurement), fluorescence microscopy, and DNA isolation. For this purpose, concentration kinetics (0 to 150 µM) and time kinetics (1 h incubation followed by post-incubation up to 24 h, or 72 h for Molt4; 0 h to 24 h for 2-cyclohexen-1-one) were established for the human cell lines U937, HL-60, K562, and Molt4. For the cell lines U937 and HL-60, the induced apoptosis could be clearly determined both quantitatively (by flow cytometry and morphology) and qualitatively (DNA ladder). A dose- and time-dependent induction of apoptosis was detected for all compounds. In addition, the morphological analysis provided information on the distribution of the individual stages of apoptosis. The cell lines K562 and Molt4 did not show such clear agreement between the three detection methods. For K562, within the concentration ranges used, a dose- and time-dependent induction of apoptosis was found by flow cytometry only for 2-cyclohexen-1-one; this could be confirmed by DNA ladder analysis but not by the morphological evaluation. For trans-2-hexenal and trans-2-octenal, only a time-dependent increase could be detected by FCM, which was significant only for trans-2-octenal. Neither compound produced a DNA ladder. The morphological analysis for trans-2-hexenal likewise showed a non-significant increase; no morphological analysis was performed for trans-2-octenal. The Molt4 cell line showed apoptosis induction only as a function of time for trans-2-hexenal, 2-trans,4-trans-hexadien-1-al, and 2-trans,6-cis-nonadienal, and this was significant only for 2-trans,4-trans-hexadien-1-al. DNA ladder analysis was again negative in all cases. For 2-cyclohexen-1-one, the morphological analysis revealed additional apoptosis not detected by FCM. Comparing the one-parameter measurement of DNA content with the two-parameter measurement of DNA and protein content in U937 cells treated with trans-2-hexenal and 2-cyclohexen-1-one revealed additional apoptotic cells that were "hidden" in the one-parameter measurement; these could be clearly assigned to specific cell cycle phases. A further comparison was made between the DNA/protein measurement and the labelling of strand breaks with fluorescently labelled nucleotides, which is regarded as the most specific method; examples from the literature were used for this purpose. This comparison showed that, for a suitable cell system, the cell-cycle-specific determination of apoptosis was comparable; an advantage of the DNA/protein measurement was its simple and inexpensive procedure. The annexin V assay, used to examine the discrepancies between the detection methods for the cell lines K562 and Molt4, provided no further information in either case. The methods used were thus able to detect apoptosis in good agreement for the cell lines U937 and HL-60, whereas the cell lines K562 and Molt4 proved unsuitable for this screening owing to the lack of DNA fragmentation into 180 bp fragments and the relative apoptosis resistance of K562.
For these cell lines, additional methods not based on DNA fragmentation would have to be employed. Since apoptosis induction in the cell lines K562 and Molt4 occurred only after longer incubation times, it was interpreted as delayed apoptosis; by comparison, the cell lines U937 and HL-60 showed early apoptosis. Relating apoptosis induction to the phase specificity of the compounds, for the cell lines U937 and HL-60 one can speak of homo-cycle apoptosis after incubation with the aldehydes studied and of homo-phase apoptosis with 2-cyclohexen-1-one, although the latter only became apparent by flow cytometry after 12 h. For the cell lines K562 and Molt4, all compounds produced post-mitotic apoptosis. With respect to apoptosis induction in U937 and HL-60, the aldehydes showed no phase specificity; all cell cycle phases were affected. In K562 they showed G1 phase specificity, in Molt4 G1 and partly G2 phase specificity. 2-Cyclohexen-1-one showed S phase specificity in U937 and HL-60 and G1 phase specificity in K562. In addition, a G2/M arrest appeared over time for trans-2-hexenal, 2-trans,4-trans-hexadien-1-al, trans-2-octenal, and 2-cyclohexen-1-one, while 2-trans,6-cis-nonadienal produced a G1 arrest. The influence of chain length and number of double bonds on the apoptosis-inducing potential was examined in the cell lines U937 and HL-60 for trans-2-hexenal, 2-trans,4-trans-hexadien-1-al, trans-2-octenal, and 2-trans,6-cis-nonadienal. This revealed a substance-dependent increase in apoptosis induction with increasing chain length, which, however, was not significant in all cases. The influence of the double bonds could only be considered in a direct comparison of trans-2-hexenal and 2-trans,4-trans-hexadien-1-al. No significant difference between the two compounds was observed here, which was probably due to the conjugation of the double bonds and the associated increase in polarity and simultaneous decrease in lipophilicity.

We present a new software architecture in which all concepts necessary to achieve fault tolerance can be added to an application automatically without any source code changes. As a case study, we consider the problem of providing a reliable service despite node failures by executing a group of replicated servers. Replica creation and management as well as failure detection and recovery are performed automatically by a separate fault tolerance layer (ft-layer) which is inserted between the server application and the operating system kernel. The layer is invisible to the application since it provides the same functional interface as the operating system kernel, thus making the fault tolerance property of the service completely transparent to the application. A major advantage of our architecture is that the layer encapsulates both fault tolerance mechanisms and policies. This allows for maximum flexibility in the choice of appropriate methods for fault tolerance without any changes in the application code.
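
As a rough illustration of the interposition idea, the following Python sketch shows a layer that offers the same call interface as a single server while hiding replica management and failover behind it. This is a minimal sketch under invented assumptions (the `Replica` and `FaultToleranceLayer` classes and their methods are hypothetical), not the ft-layer implementation described above.

```python
# Minimal sketch of transparent fault tolerance by interposition: clients call
# the layer exactly as they would call one server; the layer handles failures.

class Replica:
    """One backend server instance; raises ConnectionError when its node fails."""
    def __init__(self, node):
        self.node = node
        self.alive = True

    def call(self, op, *args):
        if not self.alive:
            raise ConnectionError(f"node {self.node} is down")
        return f"{op}{args} handled by node {self.node}"

class FaultToleranceLayer:
    """Same functional interface as a single replica, so clients need no changes."""
    def __init__(self, replicas):
        self.replicas = list(replicas)

    def call(self, op, *args):
        # Failure detection and recovery: try replicas in order, drop dead ones.
        for replica in list(self.replicas):
            try:
                return replica.call(op, *args)
            except ConnectionError:
                self.replicas.remove(replica)   # simplistic recovery policy
        raise RuntimeError("all replicas failed")

if __name__ == "__main__":
    service = FaultToleranceLayer([Replica("A"), Replica("B")])
    service.replicas[0].alive = False          # simulate a node failure
    print(service.call("read", "/etc/motd"))   # transparently served by node B
```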

We give a comparison of various differential cross-section models for a classical polyatomic gas for a homogeneous relaxation problem and for shock wave profiles at Mach numbers 1.71 and 12.9. Besides the standard Borgnakke-Larsen model and its generalizations to an energy-dependent coefficient controlling the amount of rotationally elastic and completely inelastic collisions, we discuss some new models recently proposed by the same authors. Moreover, we present numerical algorithms to implement the models in a particle or Monte Carlo code and compare the numerical shock wave profiles with existing experimental data.
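
For orientation, the sketch below shows the standard Borgnakke-Larsen energy-exchange step as it is commonly implemented in DSMC particle codes; it does not reproduce the generalized or newly proposed models of the paper. The use of a VHS viscosity exponent, the constant inelastic-collision probability, and all parameter names are assumptions made only for this example.

```python
# Illustrative Borgnakke-Larsen redistribution of collision energy between
# translational and rotational modes (standard model, VHS framework assumed).
import numpy as np

rng = np.random.default_rng(0)

def borgnakke_larsen_collision(e_tr, e_rot, zeta_rot=2.0, omega=0.74, p_inelastic=0.2):
    """Return post-collision (e_tr', e_rot').

    e_tr, e_rot : pre-collision relative translational / rotational energy
    zeta_rot    : number of rotational degrees of freedom
    omega       : VHS viscosity-temperature exponent (assumed here)
    p_inelastic : probability of an inelastic collision; the models compared in
                  the paper make this coefficient energy dependent instead.
    """
    if rng.random() >= p_inelastic:
        return e_tr, e_rot                      # rotationally elastic collision

    e_total = e_tr + e_rot
    # In equilibrium the rotational fraction x = E_rot'/E_total follows a
    # Beta(zeta_rot/2, 5/2 - omega) distribution for the VHS model.
    x = rng.beta(zeta_rot / 2.0, 2.5 - omega)
    return (1.0 - x) * e_total, x * e_total

if __name__ == "__main__":
    # Crude check: the mean rotational fraction approaches
    # (zeta_rot/2) / (zeta_rot/2 + 5/2 - omega).
    samples = [borgnakke_larsen_collision(1.0, 0.5, p_inelastic=1.0) for _ in range(100000)]
    print(np.mean([s[1] / 1.5 for s in samples]))
```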

In this paper we show that for each prime p ≥ 7 there exists a translation plane of order p^2 of Mason-Ostrom type. These planes occur as 6-dimensional ovoids, being projections of the 8-dimensional binary ovoids of Conway, Kleidman and Wilson. In order to verify the existence of such projections we prove certain properties of two particular quadratic forms using classical methods from number theory.

The computational complexity of combinatorial multiple objective programming problems is investigated. NP-completeness and #P-completeness results are presented. Using two definitions of approximability, general results are presented which outline limits for approximation algorithms. The performance of the well-known tree and Christofides' heuristics for the TSP is investigated in the multicriteria case with respect to the two definitions of approximability.
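
For readers unfamiliar with approximation in the multicriteria setting, one commonly used notion is that of an ε-approximate Pareto set, written out below for a minimization problem with objective vector f. This is given only as an illustration and is not necessarily one of the two definitions used in the paper.

```latex
% One common notion of approximation for multicriteria minimization problems.
\[
  P_\varepsilon \subseteq X \ \text{is an } \varepsilon\text{-approximate Pareto set}
  \iff
  \forall x \in X \;\exists x' \in P_\varepsilon :\;
  f_i(x') \le (1+\varepsilon)\, f_i(x) \quad \text{for every objective } i .
\]
```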

Static and dynamic properties of patterned magnetic permalloy films are investigated. In square lattices of circular permalloy dots an anisotropic coupling mechanism has been found, which is identified as being due to intrinsically unsaturated parts of the dots caused by spatial variations of the demagnetizing field. In arrays of magnetic wires a quantization of the surface spin wave mode into several dispersionless modes is observed and quantitatively described. For large wave vectors the frequency separation between the modes becomes smaller and the frequencies converge to the dispersion of the dipole-exchange surface mode of a continuous film.

The ART system (ASF RRL Translation) essentially provides an environment in which the modularizability of proofs (induction proofs over equational specifications) can be investigated. The existing specification language ASF (see [BeHeKl89]), which supports modularized specifications, was extended so that proof tasks can be specified as well; in the following, this extended specification language is also called ASF. RRL (see [KaZh89]) was chosen as the prover for the proof tasks of a specification. RRL can both process commands from a file and produce session protocols from which the proof runs and user inputs of the corresponding RRL session can be reconstructed. In ART, an ASF specification, which may comprise proof tasks, can be translated into a file that RRL can process; in the following this is briefly referred to as 'translation from ASF to RRL'. When such a file is processed, RRL creates a session protocol. ART can use this session protocol to determine new results, such as the successful proof of a proof task, and to add these results to the original specification; in the following this is briefly referred to as 'back-translation from RRL to ASF'. At its core, ART thus consists of a component for the translation from ASF to RRL and a component for the back-translation from RRL to ASF.

In concurrent systems, the concept of atomicity of operations makes it easier to partition competing accesses into larger, more manageable sections. However, when we consider specifications written in the formal description technique Estelle, it turns out that under certain circumstances it is difficult for implementations to adhere exactly to the atomicity of the so-called transitions, even though this atomicity is a conceptual foundation of the semantics of Estelle. It is shown how correct as well as efficient concurrent implementations can nevertheless be achieved. Finally, it is pointed out that the actions triggering the problem can often easily be avoided from the outset by the specifier; and this holds beyond the context of Estelle as well.

Automata-Theoretic vs. Property-Oriented Approaches for the Detection of Feature Interactions in IN
(1999)

The feature interaction problem in Intelligent Networks increasingly obstructs the rapid introduction of new features. Detecting such feature interactions turns out to be a big problem. The size of the systems and the sheer computational complexity prevent the system developer from manually checking any feature against any other feature. We give an overview of current (verification) approaches and categorize them into property-oriented and automata-theoretic approaches. A comparison shows that each approach complements the other in a certain sense. We propose to apply both approaches together in order to solve the feature interaction problem.

Independent development of system components may cause integration problems if their interaction is faulty. This problem may be solved by enforcing required component interactions at the system level. We have developed a system that automatically integrates control-oriented components to make them consistent with aggregate system behavior requirements. Our method is based on the automated synchronization method that modifies independently designed components to make them satisfy a set of user-defined receptive safety properties. The automated synchronization allows us to design the components as independent controllers that satisfy their individual requirements and to compose a correct executable system by combining the components and enforcing their interaction constraints. This approach gives component designers the freedom to design independently, and to produce a functional system by combining the components and specifying their interaction requirements.

Today's communication systems are typically structured into several layers, where each layer realizes a fixed set of protocol functionalities. These functionalities have been carefully chosen such that a wide range of applications can be supported and protocols work in a general environment of networks. However, due to evolving network technologies as well as the increased and varying demands of modern applications, general-purpose protocol stacks are not always adequate. To improve this situation, new flexible communication architectures have been developed which enable the configuration of customized communication subsystems by composing a proper set of reusable building blocks. In particular, several approaches to the automatic configuration of communication subsystems have been reported in the literature. This report gives an overview of these approaches (F-CCS, Da CaPo, x-Kernel, and ADAPTIVE) and, in particular, defines a framework which identifies common architectural issues and configuration tasks.
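
The building-block idea can be illustrated very schematically: a customized subsystem is obtained by stacking a chosen subset of reusable protocol modules. The sketch below is a generic illustration only; it does not reproduce F-CCS, Da CaPo, x-Kernel, or ADAPTIVE, and the module names and configuration format are invented.

```python
# Generic illustration of composing a communication subsystem from reusable blocks.

class Module:
    """A reusable protocol building block with a uniform send interface."""
    def __init__(self, name, lower=None):
        self.name, self.lower = name, lower

    def send(self, payload):
        framed = f"{self.name}[{payload}]"              # add this module's header
        return self.lower.send(framed) if self.lower else framed

def configure(stack_spec):
    """Compose a customized stack from a top-to-bottom list of block names."""
    lower = None
    for name in reversed(stack_spec):                   # build bottom-up
        lower = Module(name, lower)
    return lower                                        # topmost module

if __name__ == "__main__":
    # An application needing only ordering and error detection configures a lean stack.
    stack = configure(["sequencing", "checksum"])
    print(stack.send("hello"))   # -> checksum[sequencing[hello]]
```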

Software engineering deals with complex software development projects. Within these projects, large amounts of information are processed. This information is recorded in software artefacts (e.g. in project plans or in development documents such as requirements descriptions). The artefacts are changed frequently during the development and maintenance of a software system. Changing a piece of information in one artefact often entails changes in the same and in other artefacts, because relationships exist within and between the pieces of information recorded in the artefacts. These relationships are usually not explicit, so the consequences of a change are hard to survey. In this work, a traceability approach was selected that supports the user in carrying out changes to artefacts. Support here means that the effort required to carry out a change is reduced and fewer errors are made in doing so. Requirements for the traceability approach to be selected were formulated; one requirement was that it should be applicable to different areas of software engineering, such as system design or measurement planning, each with very different artefacts. The literature survey carried out and the subsequent evaluation against these requirements showed that the principle of metamodelling in combination with knowledge base management systems is a suitable traceability approach. An evaluation based on case studies from the areas "object-oriented design with UML" and "measurement planning with GQM" showed that the knowledge base management system ConceptBase, which is based on the knowledge representation language O-Telos, is a suitable tool for supporting the traceability approach.

Bestimmung der Ähnlichkeit in der fallbasierten Diagnose mit simulationsfähigen Maschinenmodellen
(1999)

- a case base of already solved diagnosis problems
- knowledge about the structure of the machine
- knowledge about the function of the individual components (concrete and abstract)

The component presented here builds on the systems developed within the Moltke project: Patdex [Wes91] (case-based diagnosis) and iMake [Sch92] and Make [Reh91] (model-based generation of Moltke knowledge bases).
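
To make the role of such knowledge sources in similarity assessment concrete, the sketch below shows a generic attribute-weighted similarity measure of the kind used in case-based diagnosis. It is only an illustration under invented assumptions (the weights, attributes, and diagnoses are hypothetical) and does not reproduce Patdex's actual similarity measure.

```python
# Generic weighted similarity between a stored diagnosis case and a new query.

def similarity(case, query, weights):
    """Weighted fraction of matching symptom values."""
    matched = sum(w for attr, w in weights.items()
                  if attr in query and case.get(attr) == query[attr])
    return matched / sum(weights.values())

if __name__ == "__main__":
    case_base = {
        "spindle bearing worn": {"noise": "grinding", "temperature": "high", "vibration": "strong"},
        "coolant pump failure": {"noise": "none", "temperature": "high", "vibration": "weak"},
    }
    weights = {"noise": 2.0, "temperature": 1.0, "vibration": 1.0}   # structural/functional knowledge
    query = {"noise": "grinding", "temperature": "high"}
    best = max(case_base, key=lambda d: similarity(case_base[d], query, weights))
    print(best)   # -> spindle bearing worn
```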

Beyond the Thouless energy
(1999)

The distribution and the correlations of the small eigenvalues of the Dirac operator are described by random matrix theory (RMT) up to the Thouless energy E_c ~ 1/sqrt(V), where V is the physical volume. For somewhat larger energies, the same quantities can be described by chiral perturbation theory (chPT). For most quantities there is an intermediate energy regime, roughly 1/V < E < 1/sqrt(V), where the results of RMT and chPT agree with each other. We test these predictions by constructing the connected and disconnected scalar susceptibilities from Dirac spectra obtained in quenched SU(2) and SU(3) simulations with staggered fermions for a variety of lattice sizes and coupling constants. In deriving the predictions of chPT, it is important to take into account only those symmetries which are exactly realized on the lattice.

High frequency switching of single domain, uniaxial magnetic particles is discussed in terms of transition rates controlled by a small transverse bias field. It is shown that fast switching times can be achieved using bias fields an order of magnitude smaller than the effective anisotropy field. Analytical expressions for the switching time are derived in special cases and general configurations of practical interest are examined using numerical simulations.