# Stanford Computer Science Department Technical Reports from the 1980s

### The authors have given permission to make their technical reports available on this server. If a report was published in print and is not here, it may be that the author published it elsewhere.

Report Number: CS-TR-80-768
Institution: Stanford University, Department of Computer Science
Title: Causal nets or what is a deterministic computation
Author: Gacs, Peter
Author: Levin, Leonid A.
Date: October 1980
Abstract: We introduce the concept of causal nets, which can be considered the most general and elementary concept of the history of a deterministic computation (sequential or parallel). Causality and locality are distinguished as the only important properties of nets representing such records. Different types of computational complexity correspond to different geometrical characteristics of the corresponding causal nets, which have the advantage of being finite objects. Synchrony becomes a relative notion. Nets can have symmetries; therefore it makes sense to ask what can be computed from arbitrary symmetric inputs. Here we obtain a complete group-theoretical characterization of the kinds of symmetries that can be allowed in parallel computations.
CS-TR-80-768

Report Number: CS-TR-80-779
Institution: Stanford University, Department of Computer Science
Title: Problematic features of programming languages: a situational-calculus approach
Author: Manna, Zohar
Author: Waldinger, Richard J.
Date: September 1980
Abstract: Certain features of programming languages, such as data structure operations and procedure call mechanisms, have been found to resist formalization by classical techniques. An alternate approach is presented, based on a "situational calculus," which makes explicit reference to the states of a computation. For each state, a distinction is drawn between an expression, its value, and the location of the value. Within this conceptual framework, the features of a programming language can be described axiomatically. Programs in the language can then be synthesized, executed, verified, or transformed by performing deductions in this axiomatic system. Properties of entire classes of programs, and of programming languages, can also be expressed and proved in this way. The approach is amenable to machine implementation. In a situational-calculus formalism it is possible to model precisely many "problematic" features of programming languages, including operations on such data structures as arrays, pointers, lists, and records, and such procedure call mechanisms as call-by-reference, call-by-value, and call-by-name. No particular obstacle is presented by aliasing between variables, by declarations, or by recursive procedures. The paper is divided into three parts, focusing respectively on the assignment statement, on data structure operations, and on procedure call mechanisms. In this first part, we introduce the conceptual framework to be applied throughout and present the axiomatic definition of the assignment statement. If suitable restrictions on the programming language are imposed, the well-known Hoare assignment axiom can then be proved as a theorem. However, our definition can also describe the assignment statement of unrestricted programming languages, for which the Hoare axiom does not hold.
CS-TR-80-779
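For reference, the well-known Hoare assignment axiom that the abstract mentions (provable as a theorem under suitable language restrictions) is standardly written as:

```latex
% Hoare's assignment axiom: to show P holds after x := e,
% show that P with e substituted for x held beforehand.
\{\, P[e/x] \,\}\; x := e \;\{\, P \,\}
```

The axiom fails in unrestricted languages precisely because of features the paper treats explicitly, such as aliasing between variables, where substituting for one name can silently change the value denoted by another.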

Report Number: CS-TR-80-780
Institution: Stanford University, Department of Computer Science
Title: The Computer Modern family of typefaces
Author: Knuth, Donald E.
Date: January 1980
Abstract: This report gives machine-independent definitions of all the styles of type planned for use in future editions of "The Art of Computer Programming." Its main purpose is to provide a detailed example of a complete family of font definitions using METAFONT, so that people who want new symbols for their own books and papers will understand how to incorporate them easily. The fonts are intended to have the same spirit as those used in earlier editions of "The Art of Computer Programming," but each character has been redesigned and defined in the METAFONT idiom. It is hoped that some readers will be inspired to make similar definitions of other important families of fonts. The bulk of this report consists of about 400 short METAFONT programs for the various symbols needed, and as such it is pretty boring, but there are some nice illustrations.
CS-TR-80-780

Report Number: CS-TR-80-785
Institution: Stanford University, Department of Computer Science
Title: Equations and rewrite rules: a survey
Author: Huet, Gerard
Author: Oppen, Derek C.
Date: January 1980
Abstract: Equations occur frequently in mathematics, logic and computer science. In this paper, we survey the main results concerning equations, and the methods available for reasoning about them and computing with them. The survey is self-contained and unified, using traditional abstract algebra. Reasoning about equations may involve deciding if an equation follows from a given set of equations (axioms), or if an equation is true in a given theory. When used in this manner, equations state properties that hold between objects. Equations may also be used as definitions; this use is well known in computer science: programs written in applicative languages, abstract interpreter definitions, and algebraic data type definitions are clearly of this nature. When these equations are regarded as oriented "rewrite rules," we may actually use them to compute. In addition to covering these topics, we discuss the problem of "solving" equations (the "unification" problem), the problem of proving termination of sets of rewrite rules, and the decidability and complexity of word problems and of combinations of equational theories. We restrict ourselves to first-order equations, and do not treat equations which define non-terminating computations or recent work on rewrite rules applied to equational congruence classes.
CS-TR-80-785

Report Number: CS-TR-80-786
Institution: Stanford University, Department of Computer Science
Title: Algorithms in modern mathematics and computer science
Author: Knuth, Donald E.
Date: January 1980
Abstract: The life and work of the ninth century scientist al-Khwarizmi, "the father of algebra and algorithms," is surveyed briefly. Then a random sampling technique is used in an attempt to better understand the kinds of thinking that good mathematicians and computer scientists do and to analyze whether such thinking is significantly "algorithmic" in nature. (This is the text of a talk given at the opening session of a symposium on "Algorithms in Modern Mathematics and Computer Science" held in Urgench, Khorezm Oblast', Uzbek S.S.R., during the week of September 16-22, 1979.)
CS-TR-80-786

Report Number: CS-TR-80-788
Institution: Stanford University, Department of Computer Science
Title: Circumscription - a form of non-monotonic reasoning
Author: McCarthy, John
Date: February 1980
Abstract: Humans and intelligent computer programs must often jump to the conclusion that the objects they can determine to have certain properties or relations are the only objects that do. Circumscription formalizes such conjectural reasoning.
CS-TR-80-788

Report Number: CS-TR-80-789
Institution: Stanford University, Department of Computer Science
Title: ADA exceptions: specification and proof techniques
Author: Luckham, David C.
Author: Polak, Wolfgang
Date: February 1980
Abstract: A method of documenting exception propagation and handling in Ada programs is proposed. Exception propagation declarations are introduced as a new component of Ada specifications. This permits documentation of those exceptions that can be propagated by a subprogram. Exception handlers are documented by entry assertions. Axioms and proof rules for Ada exceptions are given. These rules are simple extensions of previous rules for Pascal and define an axiomatic semantics of Ada exceptions. As a result, Ada programs specified according to the method can be analysed by formal proof techniques for consistency with their specifications, even if they employ exception propagation and handling to achieve required results (i.e. non-error situations). Example verifications are given.
CS-TR-80-789

Report Number: CS-TR-80-790
Institution: Stanford University, Department of Computer Science
Title: Databases in healthcare
Author: Wiederhold, Gio
Date: March 1980
Abstract: This report defines database design and implementation technology as applicable to healthcare. The relationship of technology to various healthcare settings is explored, and the effectiveness on healthcare costs, quality and access is evaluated. A summary of relevant development directions is included. Detailed examples of 5 typical clinical applications (public health, clinical trials, clinical research, ambulatory care, and hospitals) are appended. There is an extended bibliography.
CS-TR-80-790

Report Number: CS-TR-80-792
Institution: Stanford University, Department of Computer Science
Title: MAINSAIL implementation overview
Author: Wilcox, Clark R.
Author: Dageforde, Mary L.
Author: Jirak, Gregory A.
Date: March 1980
Abstract: The MAINSAIL programming language and the supporting implementations have been developed over the past five years as an integrated approach to a viable machine-independent system suitable for the development of large, portable programs. Particular emphasis has been placed on minimizing the effort involved in moving the system to a new machine and/or operating system. For this reason, almost all of the compiler and runtime support is written in MAINSAIL, and is utilized in each implementation without alteration. This use of a high-level language to support its own implementation has proved to be a significant advantage in terms of documentation and maintenance, without unduly affecting the execution speed. This paper gives an overview of the compiler and runtime implementation strategies, and indicates what an implementation requires for the machine-dependent and operating-system-dependent parts.
CS-TR-80-792

Report Number: CS-TR-80-794
Institution: Stanford University, Department of Computer Science
Title: Recent developments in the complexity of combinatorial algorithms
Author: Tarjan, Robert Endre
Date: June 1980
Abstract: The last three years have witnessed several major advances in the area of combinatorial algorithms. These include improved algorithms for matrix multiplication and maximum network flow, a polynomial-time algorithm for linear programming, and steps toward a polynomial-time algorithm for graph isomorphism. This paper surveys these results and suggests directions for future research. Included is a discussion of recent work by the author and his students on dynamic dictionaries, network flow problems, and related questions.
CS-TR-80-794

Report Number: CS-TR-80-796
Institution: Stanford University, Department of Computer Science
Title: Essential E
Author: Samuel, Arthur L.
Date: March 1980
Abstract: This is an introductory manual describing the display-oriented text editor E that is available on the Stanford A.I. Laboratory PDP-10 computer. The present manual is intended to be used as an aid for the beginner as well as for experienced computer users who either are unfamiliar with the E editor or use it infrequently. Reference is made to the two on-line manuals that help the beginner to get started and that provide a complete description of the editor for the experienced user. E is commonly used for writing computer programs and for preparing reports and memoranda. It is not a document editor, although it does provide some facilities for getting a document into a pleasing format. The primary emphasis is that of speed, both in terms of the number of key strokes required of the user and in terms of the demands made on the computer system. At the same time, E is easy to learn and it offers a large range of facilities that are not available on many editors.
CS-TR-80-796

Report Number: CS-TR-80-797
Institution: Stanford University, Department of Computer Science
Title: Read-only transactions in a distributed database
Author: Garcia-Molina, Hector
Author: Wiederhold, Gio
Date: April 1980
Abstract: A read-only transaction or query is a transaction which does not modify any data. Read-only transactions could be processed with general transaction processing algorithms, but in many cases it is more efficient to process read-only transactions with special algorithms which take advantage of the knowledge that the transaction only reads. This paper defines the various consistency and currency requirements that read-only transactions may have. The processing of the different classes of read-only transactions in a distributed database is discussed. The concept of R insularity is introduced to characterize both the read-only and update algorithms. Several simple update and read-only transaction processing algorithms are presented to illustrate how the query requirements and the update algorithms affect the read-only transaction processing algorithms.
CS-TR-80-797
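As a generic illustration (not the paper's own algorithms, and using hypothetical names) of why read-only transactions admit cheaper processing: a query that reads a frozen, consistent snapshot can never conflict with concurrent updates, so it needs no locking or abort machinery. A minimal multiversion sketch:

```python
class VersionedStore:
    """Toy multiversion store: each commit appends a new committed state,
    and a read-only transaction simply reads one frozen snapshot."""

    def __init__(self):
        self.versions = [{}]  # list of committed states, newest last

    def commit(self, updates):
        # Updates create a new version; old snapshots are never modified.
        state = dict(self.versions[-1])
        state.update(updates)
        self.versions.append(state)

    def snapshot(self):
        # A consistent view for a query, immune to later commits.
        return dict(self.versions[-1])

db = VersionedStore()
db.commit({"x": 1, "y": 1})
snap = db.snapshot()      # read-only transaction starts here
db.commit({"x": 2})       # concurrent update proceeds without blocking
assert snap == {"x": 1, "y": 1}
assert db.snapshot()["x"] == 2
```

Which snapshot a query is allowed to see is exactly the kind of consistency and currency requirement the paper classifies.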

Report Number: CS-TR-80-799
Institution: Stanford University, Department of Computer Science
Author: Friedman, Jerome H.
Author: Grosse, Eric
Author: Stuetzle, Werner
Date: May 1980
Abstract: We describe an adaptive procedure that approximates a function of many variables by a sum of (univariate) spline functions $s_m$ of selected linear combinations $a_m \cdot x$ of the coordinates: $\theta (x) = \sum_{1\le m\le M} s_m (a_m \cdot x)$. The procedure is nonlinear in that not only the spline coefficients but also the linear combinations are optimized for the particular problem. The sample need not lie on a regular grid, and the approximation is affine invariant, smooth, and lends itself to graphical interpretation. Function values, derivatives, and integrals are cheap to evaluate.
CS-TR-80-799

Report Number: CS-TR-80-807
Institution: Stanford University, Department of Computer Science
Title: Path-regular graphs
Author: Matula, David W.
Author: Dolev, Danny
Date: June 1980
Abstract: A graph is vertex-[edge-]path-regular if a list of shortest paths, allowing multiple copies of paths, exists where every pair of vertices are the endvertices of the same number of paths and each vertex [edge] occurs in the same number of paths of the list. The dependencies and independencies between the various path-regularity, regularity of degree, and symmetry properties are investigated. We show that every connected vertex-[edge-]symmetric graph is vertex-[edge-]path-regular, but not conversely. We show that the product of any two vertex-path-regular graphs is vertex-path-regular but not conversely, and the iterated product G x G x ... x G is edge-path-regular if and only if G is edge-path-regular. An interpretation of path-regular graphs is given regarding the efficient design of concurrent communication networks.
CS-TR-80-807

Report Number: CS-TR-80-808
Institution: Stanford University, Department of Computer Science
Title: Final report: Basic Research in Artificial Intelligence and Foundations of Programming
Author: McCarthy, John
Author: Binford, Thomas O.
Author: Luckham, David C.
Author: Manna, Zohar
Author: Weyhrauch, Richard W.
Author: Earnest, Les
Date: May 1980
Abstract: Recent research results are reviewed in the areas of formal reasoning, mathematical theory of computation, program verification, and image understanding.
CS-TR-80-808

Report Number: CS-TR-80-811
Institution: Stanford University, Department of Computer Science
Title: An extended semantic definition of Pascal for proving the absence of common runtime errors
Author: German, Steven M.
Date: June 1980
Abstract: We present an axiomatic definition of Pascal which is the logical basis of the Runcheck system, a working verifier for proving the absence of runtime errors such as arithmetic overflow, array subscripting out of range, and accessing an uninitialized variable. Such errors cannot be detected at compile time by most compilers. Because the occurrence of a runtime error may depend on the values of data supplied to a program, techniques for assuring the absence of errors must be based on program specifications. Runcheck accepts Pascal programs documented with assertions, and proves that the specifications are consistent with the program and that no runtime errors can occur. Our axiomatic definition is similar to Hoare's axiom system, but it takes into account certain restrictions that have not been considered in previous definitions. For instance, our definition accurately models uninitialized variables, and requires a variable to have a well defined value before it can be accessed. The logical problems of introducing the concept of uninitialized variables are discussed. Our definition of expression evaluation deals more fully with function calls than previous axiomatic definitions. Some generalizations of our semantics are presented, including a new method for verifying programs with procedure and function parameters. Our semantics can easily be adapted to similar languages, such as ADA. One of the main potential problems for the user of a verifier is the need to write detailed, repetitious assertions. We develop some simple logical properties of our definition which are exploited by Runcheck to reduce the need for such detailed assertions.
CS-TR-80-811

Report Number: CS-TR-80-821
Institution: Stanford University, Department of Computer Science
Title: Semiantichains and unichain coverings in direct products of partial orders
Author: West, Douglas B.
Author: Tovey, Craig A.
Date: September 1980
Abstract: We conjecture a generalization of Dilworth's theorem to direct products of partial orders. In particular, we conjecture that the largest "semiantichain" and the smallest "unichain covering" have the same size. We consider a special class of semiantichains and unichain coverings and determine when equality holds for them. This conjecture implies the existence of k-saturated partitions. A stronger conjecture, for which we also prove a special case, implies the Greene-Kleitman result on simultaneous k and (k + 1)-saturated partitions.
CS-TR-80-821

Report Number: CS-TR-80-824
Institution: Stanford University, Department of Computer Science
Title: LCCD, a language for Chinese character design
Author: Mei, Tung Yun
Date: October 1980
Abstract: LCCD is a computer system able to produce aesthetically pleasing Chinese characters for use on raster-oriented printing devices. It is analogous to METAFONT, in that the user writes a little program that explains how to draw each character; but it uses different types of simulated 'pens' that are more appropriate to the Chinese idiom, and it includes special scaling features so that a complex character can easily be built up from simpler ones, in an interactive manner. This report contains a user's manual for LCCD, together with many illustrative examples and a discussion of the algorithms used.
CS-TR-80-824

Report Number: CS-TR-80-826
Institution: Stanford University, Department of Computer Science
Title: A database approach to communication in VLSI design
Author: Wiederhold, Gio
Author: Beetem, Anne
Author: Short, Garrett
Date: October 1980
Abstract: This paper describes recent and planned work at Stanford in applying database technology to the problems of VLSI design. In particular, it addresses the issue of communication within a design's different representations and hierarchical levels in a multiple designer environment. We demonstrate the heretofore questioned utility of using commercial database systems, at least while developing a versatile, flexible, and generally efficient model and its associated communication paths. Completed work and results from initial work using DEC DBMS-20 are presented, including macro expansion within the database and signalling of changes to higher structural levels. Considerable discussion regarding overall philosophy for continued work is also included.
CS-TR-80-826

Report Number: CS-TR-80-827
Institution: Stanford University, Department of Computer Science
Title: On the parallel computation for the knapsack problem
Author: Yao, Andrew Chi-Chih
Date: November 1980
Abstract: We are interested in the complexity of solving the knapsack problem with n input real numbers on a parallel computer with real arithmetic and branching operations. A processor-time tradeoff constraint is derived; in particular, it is shown that an exponential number of processors have to be used if the problem is to be solved in time $t \le {\sqrt{n}}/2$.
CS-TR-80-827

Report Number: CS-TR-80-829
Institution: Stanford University, Department of Computer Science
Title: The dinner table problem
Author: Aspvall, Bengt
Author: Liang, Frank M.
Date: December 1980
Abstract: This report contains two papers inspired by the "dinner table problem": If n people are seated randomly around a circular table for two meals, what is the probability that no two people sit together at both meals? We show that this probability approaches $e^{-2}$ as $n \rightarrow \infty$, and also give a closed form. We then observe that in many similar problems on permutations with restricted position, the number of permutations satisfying a given number of properties is approximately Poisson distributed. We generalize our asymptotic argument to prove such a limit theorem, and mention applications to the problems of derangements, menages, and the asymptotic number of Latin rectangles.
CS-TR-80-829
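The $e^{-2}$ limit is easy to check empirically. A minimal Monte Carlo sketch (not from the report; the function name and trial counts are illustrative) that seats n people twice at random and counts how often no pair is adjacent at both meals:

```python
import random

def no_shared_neighbors(n, trials=20000, seed=1):
    """Estimate the dinner-table probability: two independent random
    circular seatings of n people share no adjacent pair."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        a = list(range(n)); random.shuffle(a)
        b = list(range(n)); random.shuffle(b)
        # Adjacency around each circular table, as unordered pairs.
        pairs_a = {frozenset((a[i], a[(i + 1) % n])) for i in range(n)}
        pairs_b = {frozenset((b[i], b[(i + 1) % n])) for i in range(n)}
        if pairs_a.isdisjoint(pairs_b):
            hits += 1
    return hits / trials

est = no_shared_neighbors(20)   # should be near e^-2 ≈ 0.1353 for moderate n
```

The near-Poisson behavior the authors prove is visible here too: the number of shared adjacent pairs has mean close to 2, and the empty-intersection probability approaches $e^{-2}$.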

Report Number: CS-TR-80-830
Institution: Stanford University, Department of Computer Science
Title: Two linear-time algorithms for five-coloring a planar graph
Author: Matula, David W.
Author: Shiloach, Yossi
Author: Tarjan, Robert E.
Date: November 1980
Abstract: A "sequential processing" algorithm using bicolor interchange that five-colors an n vertex planar graph in $O(n^2)$ time was given by Matula, Marble, and Isaacson [1972]. Lipton and Miller used a "batch processing" algorithm with bicolor interchange for the same problem and achieved an improved O(n log n) time bound [1978]. In this paper we use graph contraction arguments instead of bicolor interchange and improve both the sequential processing and batch processing methods to obtain five-coloring algorithms that operate in O(n) time.
CS-TR-80-830

Report Number: CS-TR-80-832
Institution: Stanford University, Department of Computer Science
Title: Scheduling wide graphs
Author: Dolev, Danny
Date: December 1980
Abstract: The problem of scheduling a partially ordered set of unit length tasks on m identical processors is known to be NP-complete. There are efficient algorithms for only a few special cases of this problem. In this paper we explore the relations between the structure of the precedence graph (the partial order) and optimal schedules. We prove that in finding an optimal schedule for certain systems it suffices to consider at each step high roots which belong to at most the m-1 highest components of the precedence graph. This result reduces the number of cases we have to check during the construction of an optimal schedule. Our method may lead to the development of linear scheduling algorithms for many practical cases and to better bounds for complex algorithms. In particular, in the case the precedence graph contains only inforest and outforest components, this result leads to efficient algorithms for obtaining an optimal schedule on two or three processors.
CS-TR-80-832

Report Number: CS-TR-80-850
Institution: Stanford University, Department of Computer Science
Title: Performing remote operations efficiently on a local computer network
Author: Spector, Alfred Z.
Date: December 1980
Abstract: This paper presents a communication model for local networks, whereby processes execute generalized remote references that cause operations to be performed by remote processes. This remote reference/remote operation model provides a taxonomy of primitives that (1) are naturally useful in many applications and (2) can be efficiently implemented. The motivation for this work is our desire to develop systems architectures for local network based multiprocessors that support distributed applications requiring frequent interprocessor communication. After a section containing a brief overview, Section 2 of this paper discusses the remote reference/remote operation model. In it, we derive a set of remote reference types that can be supported by a communication system carefully integrated with the local network interface. The third section exemplifies a communication system that provides one remote reference type. These references (i.e., remote load, store, compare-and-swap, enqueue, and dequeue) take about 150 microseconds, or 50 average instruction times, to perform on Xerox Alto computers connected by a 2.94 megabit Ethernet. The last section summarizes this work and proposes a complete implementation resulting in a highly efficient communication system.
CS-TR-80-850

Report Number: CS-TR-81-836
Institution: Stanford University, Department of Computer Science
Title: Verification of concurrent programs, Part I: The temporal framework
Author: Manna, Zohar
Author: Pnueli, Amir
Date: June 1981
Abstract: This is the first in a series of reports describing the application of temporal logic to the specification and verification of concurrent programs. We first introduce temporal logic as a tool for reasoning about sequences of states. Models of concurrent programs based both on transition graphs and on linear-text representations are presented and the notions of concurrent and fair executions are defined. The general temporal language is then specialized to reason about those execution sequences that are fair computations of a concurrent program. Subsequently, the language is used to describe properties of concurrent programs. The set of interesting properties is classified into invariance (safety), eventuality (liveness), and precedence (until) properties. Among the properties studied are: partial correctness, global invariance, clean behavior, mutual exclusion, absence of deadlock, termination, total correctness, intermittent assertions, accessibility, responsiveness, safe liveness, absence of unsolicited response, fair responsiveness, and precedence. In the following reports of this series, we will use the temporal formalism to develop proof methodologies for proving the properties discussed here.
CS-TR-81-836

Report Number: CS-TR-81-837
Institution: Stanford University, Department of Computer Science
Title: Research on expert systems
Author: Buchanan, Bruce G.
Date: March 1981
Abstract: All AI programs are essentially reasoning programs. And, to the extent that they reason well about a problem area, all exhibit some expertise at problem solving. Programs that solve the Tower of Hanoi puzzle, for example, reason about the goal state and the initial state in order to find "expert-level" solutions. Unlike other programs, however, the claims about expert systems are related to questions of usefulness and understandability as well as performance.
CS-TR-81-837

Report Number: CS-TR-81-838
Institution: Stanford University, Department of Computer Science
Title: Dynamic program building
Author: Brown, Peter
Date: February 1981
Abstract: This report argues that programs are better regarded as dynamic running objects rather than as static textual ones. The concept of dynamic building, whereby a program is constructed as it runs, is described. The report then describes the Build system, which is an implementation of dynamic building for an interactive algebraic programming language. Dynamic building aids the locating of run-time errors, and is especially valuable in environments where programs are relatively short but run-time errors are frequent and/or costly.
CS-TR-81-838

Report Number: CS-TR-81-839
Institution: Stanford University, Department of Computer Science
Title: Short WAITS
Author: Samuel, Arthur L.
Date: February 1981
Abstract: This is an introductory manual describing the SU-AI timesharing system that is available primarily for sponsored research in the Computer Science Department. The present manual is written for the beginner and the user interested primarily in the message handling capability as well as for the experienced computer user and programmer who either is unfamiliar with the SU-AI computer or who uses it infrequently. References are made to the available hard-copy manuals and to the extensive on-line document files where more complete information can be obtained. The principal advantages of this system are: 1) The availability of a large repertoire of useful system features; 2) The large memory; 3) The large file storage system; 4) The ease with which one can access other computers via the ARPA net; 5) The file transfer facilities via the EFTP program and the ETHERNET; 6) The XGP and the DOVER printers and the large collections of fonts available for them; and 7) The fast and convenient E editor with its macro facilities.
CS-TR-81-839

Report Number: CS-TR-81-846
Institution: Stanford University, Department of Computer Science
Title: The Byzantine Generals strike again
Author: Dolev, Danny
Date: March 1981
Abstract: Can unanimity be achieved in an unreliable distributed system? This problem was named "The Byzantine Generals Problem," by Lamport, Pease and Shostak [1980]. The results obtained in the present paper prove that unanimity is achievable in any distributed system if and only if the number of faulty processors in the system is: 1) less than one third of the total number of processors; and 2) less than one half of the connectivity of the system's network. In cases where unanimity is achievable, algorithms to obtain it are given. This result forms a complete characterization of networks in light of the Byzantine Problem.
CS-TR-81-846
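The abstract's two-part characterization is a simple arithmetic condition. A minimal sketch (the function name is illustrative, not from the paper) that restates it: with n processors, t possible faults, and network connectivity c, unanimity is achievable iff t < n/3 and t < c/2:

```python
def unanimity_achievable(n, t, c):
    """Dolev's characterization as stated in the abstract:
    n processors, t faulty, network connectivity c."""
    return t < n / 3 and t < c / 2

# 4 fully connected processors (connectivity 3) tolerate 1 fault...
assert unanimity_achievable(4, 1, 3)
# ...but 3 processors cannot tolerate even 1 fault (1 is not < 3/3),
assert not unanimity_achievable(3, 1, 2)
# and high n does not help if the network is too sparsely connected.
assert not unanimity_achievable(100, 2, 4)
```

Both bounds are tight: the n > 3t part matches the Lamport-Pease-Shostak result for fully connected networks, and the connectivity bound is the new network-topology half of the characterization.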

Report Number: CS-TR-81-847
Institution: Stanford University, Department of Computer Science
Title: The optimal locking problem in a directed acyclic graph
Author: Korth, Henry F.
Date: March 1981
Abstract: We assume a multiple granularity database locking scheme similar to that of Gray et al. [1975] in which a rooted directed acyclic graph is used to represent the levels of granularity. We prove that even if it is known in advance exactly what database references the transaction will make, it is NP-complete to find the optimal locking strategy for the transaction.
CS-TR-81-847

Report Number: CS-TR-81-848
Institution: Stanford University, Department of Computer Science
Title: On the problem of inputting Chinese characters
Author: Tang, Chih-sung
Date: April 1981
Abstract: If Chinese-speaking society is to make the best use of computers, it is important to develop an easy, quick, and convenient way to input Chinese characters together with other conventional characters. Many people have tried to approach this problem by designing special typewriters for Chinese character input, but such methods have serious deficiencies and they do not take advantage of the fact that the input process is just part of a larger system in which a powerful computer lies behind the keyboard. The purpose of this note is to clarify the problem and to illustrate a promising solution based entirely on a standard ASCII keyboard.
CS-TR-81-848

Report Number: CS-TR-81-849
Institution: Stanford University, Department of Computer Science
Title: Experiments on the Knee Criterion in a multiprogrammed computer system
Author: Nishigaki, Tohru
Date: March 1981
Abstract: Although the effectiveness of the Knee Criterion as a virtual memory management strategy is widely accepted, it has been impossible to take advantage of it in a practical system, because little information is available about the program behavior of executing jobs. A new memory management technique to achieve the Knee Criterion in a multiprogrammed virtual memory system is developed. The technique, termed the Optimum Working-set Estimator (OWE), abstracts the programs' behavior from their past histories by exponential smoothing, and modifies their working set window sizes in order to attain the Knee Criterion. The OWE method was implemented and investigated. Measurements demonstrate its ability to control a variety of jobs. Furthermore, the results also reveal that the throughput improvement is possible in a space-squeezing environment. This technique is expected to increase the efficiency of multiprogrammed virtual memory systems.
CS-TR-81-849
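The abstract names exponential smoothing as the mechanism for abstracting a program's past behavior. As a rough illustration only (the report gives no formulas here, so the function names, the fault-rate target, and the feedback rule below are all assumptions, not the OWE algorithm itself), the idea can be sketched as:

```python
# Illustrative sketch only: names, target, and update rule are assumed,
# not taken from the OWE report.

def smooth(history, alpha=0.3):
    """Exponentially smooth a sequence of per-interval page-fault counts."""
    estimate = history[0]
    for x in history[1:]:
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate

def adjust_window(window, fault_estimate, target=5.0, step=0.1):
    """Widen the working-set window when the smoothed fault rate exceeds
    a target (the 'knee'), narrow it when it falls below -- a simple
    feedback rule standing in for the report's window adjustment."""
    if fault_estimate > target:
        return window * (1 + step)
    elif fault_estimate < target:
        return max(1.0, window * (1 - step))
    return window
```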

Report Number: CS-TR-81-851
Institution: Stanford University, Department of Computer Science
Title: Binding in information processing
Author: Wiederhold, Gio
Date: May 1981
Abstract: The concept of binding, as used in programming systems, is analyzed and defined in a number of contexts. The attributes of variables to be bound and the phases of binding are enumerated. The definition is then broadened to cover general issues in information systems. Its applicability is demonstrated in a wide range of system design and implementation issues. A number of Database Management Systems are categorized according to the terms defined. A first-order quantitative model is developed and compared with current practice. The concepts and the model are considered helpful when used as a tool for the global design phase of large information systems.
CS-TR-81-851

Report Number: CS-TR-81-854
Institution: Stanford University, Department of Computer Science
Title: On the security of public key protocols
Author: Dolev, Danny
Author: Yao, Andrew C.
Date: May 1981
Abstract: Recently, the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. In this paper we formulate several models in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models will be given.
CS-TR-81-854

Report Number: CS-TR-81-863
Institution: Stanford University, Department of Computer Science
Title: A programming and problem-solving seminar
Author: Knuth, Donald E.
Author: Miller, Allan A.
Date: June 1981
Abstract: This report contains a record of the autumn 1980 session of CS 204, a problem-solving and programming seminar taught at Stanford that is primarily intended for first-year Ph.D. students. The seminar covers a large range of topics, research paradigms, and programming paradigms in computer science, so these notes will be of interest to graduate students, professors, and professional computer scientists.
CS-TR-81-863

Report Number: CS-TR-81-865
Institution: Stanford University, Department of Computer Science
Title: Toward a unified logical basis for programming languages
Author: Tang, Chih-sung
Date: June 1981
Abstract: In recent years, more and more computer scientists have been paying attention to temporal logic, since there are many properties of programs that can be described only by bringing the time parameter into consideration. But existing temporal logic languages, such as Lucid, in spite of their mathematical elegance, are still far from practical. I believe that a practical temporal-logic language, once it came into being, would have a wide spectrum of applications. XYZ/E is a temporal-logic language. Like other logic languages, it is a logic system as well as a programming language. But unlike them, it can express all conventional data structures and control structures, nondeterminate or concurrent programs, even programs with branching-time order. We find that the difficulties met in other logic languages often stem from the fact that they try to deal with these structures at a higher level. XYZ/E adopts another approach. We divide the language into two forms: the internal form and the external form. The former is lower level, while the latter is higher. Just as any logic system contains rules of abbreviation, so also in XYZ/E there are rules of abbreviation to transform the internal form into the external form, and vice versa. These two forms can be considered to be different representations of the same thing. We find that this approach can ameliorate many problems of formalization.
CS-TR-81-865

Report Number: CS-TR-81-867
Institution: Stanford University, Department of Computer Science
Author: Luckham, David C.
Author: Larsen, Howard J.
Author: Stevenson, David R.
Author: Henke, Friedrich W. von
Date: July 1981
CS-TR-81-867

Report Number: CS-TR-81-868
Institution: Stanford University, Department of Computer Science
Title: The Last Whole Errata Catalog
Author: Knuth, Donald E.
Date: July 1981
Abstract: This list supplements previous errata published in Stanford reports CS551 (1976) and CS712 (1979). It includes the first corrections and changes to the second edition of volume two (published January, 1981) as well as to the most recent printings of volumes one and three (first published in 1975). In addition to the errors listed here, about half of the occurrences of 'which' in volumes one and three should be changed to 'that'.
CS-TR-81-868

Report Number: CS-TR-81-869
Institution: Stanford University, Department of Computer Science
Title: Computer Science comprehensive examinations, 1978/79-1980/81
Author: Tajnai, Carolyn E.
Date: August 1981
Abstract: The Stanford Computer Science Comprehensive Examination was conceived Spring Quarter 1971/72 and since then has been given winter and spring quarters each year. The 'Comp' serves several purposes in the department. There are no course requirements in the Ph.D. and the Ph.D. Minor programs, and only one (CS293, Computer Laboratory) in the Master's program. Therefore, the 'Comp' fulfills the breadth and depth requirements. The Ph.D. Minor and Master's student must pass at the Master's level to be eligible for the degree. For the Ph.D. student it serves as a "Rite of Passage"; the exam must be passed at the Ph.D. level by the end of six quarters of full-time study (excluding summers) for the student to continue in the program. This report is a collection of comprehensive examinations from Winter Quarter 1978/79 through Spring Quarter 1980/81.
CS-TR-81-869

Report Number: CS-TR-81-871
Institution: Stanford University, Department of Computer Science
Title: Good layouts for pattern recognizers
Author: Trickey, Howard W.
Date: August 1981
Abstract: A system to lay out custom circuits that recognize regular languages can be a useful VLSI design automation tool. This paper describes the algorithms used in an implementation of a regular expression compiler. Layouts that use a network of programmable logic arrays (PLA's) have smaller areas than those of some other methods, but there are the problems of partitioning the circuit and then placing the individual PLA's. Regular expressions have a structure which allows a novel solution to these problems: dynamic programming can be used to find layouts which are in some sense optimal. Various search pruning heuristics have been used to increase the speed of the compiler, and the experience with these is reported in the conclusions.
CS-TR-81-871

Report Number: CS-TR-81-875
Institution: Stanford University, Department of Computer Science
Title: Computation of matrix chain products: Part I, Part II
Author: Hu, T. C.
Author: Shing, M. T.
Date: September 1981
Abstract: This paper considers the computation of matrix chain products of the form $M_1 \times M_2 \times \cdots \times M_{n-1}$. If the matrices are of different dimensions, the order in which the product is computed affects the number of operations. An optimum order is an order which minimizes the total number of operations. Some theorems about an optimum order of computing the matrices are presented in part I. Based on these theorems, an O(n log n) algorithm for finding an optimum order is presented in part II.
CS-TR-81-875
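For context on the cost model the abstract refers to, here is the classic O(n^3) dynamic program for the same optimum. This is the textbook baseline, not the Hu-Shing O(n log n) algorithm, which proceeds by a quite different (polygon-partitioning) route:

```python
def matrix_chain_order(dims):
    """Minimum number of scalar multiplications to compute a chain of
    matrices, where matrix i has dimensions dims[i-1] x dims[i].
    Classic O(n^3) dynamic program, shown only to make the cost model
    concrete; the report's algorithm achieves O(n log n)."""
    n = len(dims) - 1  # number of matrices in the chain
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # subchain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # best split point k: (M_i..M_k)(M_{k+1}..M_j)
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]
```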

Report Number: CS-TR-81-876
Institution: Stanford University, Department of Computer Science
Title: On linear area embedding of planar graphs
Author: Dolev, Danny
Author: Trickey, Howard W.
Date: September 1981
Abstract: Planar embedding with minimal area of graphs on an integer grid is one of the major issues in VLSI. Valiant [1981] gave an algorithm to construct a planar embedding for trees in linear area; he also proved that there are planar graphs that require quadratic area. We give an algorithm to embed outerplanar graphs in linear area. We extend this algorithm to work for every planar graph that has the following property: for every vertex there exists a path of length less than K to the exterior face, where K is a constant. Finally, finding a minimal embedding area is shown to be NP-complete for forests, and hence for more general types of graphs.
CS-TR-81-876

Report Number: CS-TR-81-879
Institution: Stanford University, Department of Computer Science
Title: Interlisp-VAX: a report
Author: Masinter, Larry M.
Date: August 1981
Abstract: This report documents the results of a study to evaluate the feasibility of implementing the Interlisp language to run on the DEC VAX computer. Specific goals of the study were to: 1) assess the technical status of the on-going implementation project at USC-ISI; 2) estimate the expected performance of Interlisp on the VAX family of machines as compared to Interlisp-10, other Lisp systems for the VAX, and other Interlisp implementations where performance data were available; and 3) identify serious obstacles and alternatives to the timely completion of an effective Interlisp-VAX system.
CS-TR-81-879

Report Number: CS-TR-81-880
Institution: Stanford University, Department of Computer Science
Title: Well structured parallel programs are not easier to schedule
Author: Mayr, Ernst W.
Date: September 1981
Abstract: The scheduling problem for unit time task systems with arbitrary precedence constraints is known to be NP-complete. We show that the same is true even if the precedence constraints are restricted to certain subclasses which make the corresponding parallel programs more structured. Among these classes are those derived from hierarchic cobegin-coend programming constructs, level graph forests, and the parallel or serial composition of an out-tree and an in-tree. In each case, the completeness proof depends heavily on the number of processors being part of the problem instances.
CS-TR-81-880

Report Number: CS-TR-81-883
Institution: Stanford University, Department of Computer Science
Title: On program transformations for abstract data types and concurrency
Author: Pepper, P.
Date: October 1981
Abstract: We study transformation rules for a particular class of abstract data types, namely types that are representable by recursive mode declarations. The transformations are tailored to the development of efficient tree traversal and they allow for concurrency. The techniques are exemplified by an implementation of concurrent insertion and deletion in 2-3-trees.
CS-TR-81-883

Report Number: CS-TR-81-887
Institution: Stanford University, Department of Computer Science
Title: Finding the convex hull of a simple polygon
Author: Graham, Ronald L.
Author: Yao, Frances
Date: November 1981
Abstract: It is well known that the convex hull of a set of n points in the (Euclidean) plane can be found by an algorithm having worst-case complexity O(n log n). In this note we give a short linear algorithm for finding the convex hull in the case that the (ordered) set of points form the vertices of a simple (i.e., non-self-intersecting) polygon.
CS-TR-81-887
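To make the O(n log n) baseline in the abstract concrete, here is Andrew's monotone-chain construction for arbitrary point sets. It is not the Graham-Yao algorithm of the report, which does better (linear time) by exploiting the given vertex ordering of the simple polygon:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull: O(n log n) from the sort,
    illustrating the general-position baseline cited in the abstract.
    Returns hull vertices in counter-clockwise order."""
    points = sorted(set(points))
    if len(points) <= 2:
        return points

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(pts):
        chain = []
        for p in pts:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain

    lower = half_hull(points)
    upper = half_hull(points[::-1])
    return lower[:-1] + upper[:-1]  # drop duplicated endpoints
```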

Report Number: CS-TR-81-889
Institution: Stanford University, Department of Computer Science
Title: AL users' manual
Author: Mujtaba, Shahid
Author: Goldman, Ron
Date: December 1981
Abstract: AL is a high-level programming language for manipulator control useful in industrial assembly research. This document describes the current state of the AL system now in operation at the Stanford Artificial Intelligence Laboratory, and teaches the reader how to use it. The system consists of the AL compiler and runtime system and the source code interpreter, POINTY, which facilitates specifying the representation of parts and the interactive execution of AL statements.
CS-TR-81-889

Report Number: CS-TR-81-894
Institution: Stanford University, Department of Computer Science
Title: Methodology for building an intelligent tutoring system
Author: Clancey, William J.
Date: October 1981
Abstract: Over the past 6 years we have been developing a computer program to teach medical diagnosis. Our research synthesizes and extends results in artificial intelligence (AI), medicine, and cognitive psychology. This paper describes the progression of the research, and explains how theories from these fields are combined in a computational model. The general problem has been to develop an "intelligent tutoring system" by adapting the MYCIN "expert system." This conversion requires a deeper understanding of the nature of expertise and explanation than originally required for developing MYCIN, and a concomitant shift in perspective from simple performance goals to attaining psychological validity in the program's reasoning process. Others have written extensively about the relation of artificial intelligence to cognitive science (e.g., [Pylyshyn, 1978] [Boden, 1977]). Our purpose here is not to repeat those arguments, but to present a case study which will provide a common point for further discussion. To this end, to help evaluate the state of cognitive science, we will outline our methodology and survey what resources and viewpoints have helped our research. We will also discuss pitfalls that other AI-oriented cognitive scientists may encounter. Finally, we will present some questions coming out of our work which might suggest possible collaboration with other fields of research.
CS-TR-81-894

Report Number: CS-TR-81-896
Institution: Stanford University, Department of Computer Science
Title: The epistemology of a rule-based expert system: a framework for explanation
Author: Clancey, William J.
Date: November 1981
Abstract: Production rules are a popular representation for encoding heuristic knowledge in programs for scientific and medical problem solving. However, experience with one of these programs, MYCIN, indicates that the representation has serious limitations: people other than the original rule authors find it difficult to modify the rule set, and the rules are unsuitable for use in other settings, such as for application to teaching. These problems are rooted in fundamental limitations in MYCIN's original rule representation: the view that expert knowledge can be encoded as a uniform, weakly-structured set of if/then associations is found to be wanting. To illustrate these problems, this paper examines MYCIN's rules from the perspective of a teacher trying to justify them and to convey a problem-solving approach. We discover that individual rules play different roles, have different kinds of justifications, and are constructed using different rationales for the ordering and choice of premise clauses. This design knowledge, consisting of structural and strategic concepts which lie outside the representation, is shown to be procedurally embedded in the rules. Moreover, because the data/hypothesis associations are themselves a proceduralized form of underlying disease models, they can only be supported by appealing to this deeper level of knowledge. Making explicit this structural, strategic and support knowledge enhances the ability to understand and modify the system.
CS-TR-81-896

Report Number: CS-TR-81-898
Institution: Stanford University, Department of Computer Science
Title: Separability as a physical database design methodology
Author: Whang, Kyu-Young
Author: Wiederhold, Gio
Author: Sagalowicz, Daniel
Date: October 1981
Abstract: A theoretical approach to the optimal design of large multifile physical databases is presented. The design algorithm is based on the theory that, given a set of join methods that satisfy a certain property called "separability," the problem of optimal assignment of access structures to the whole database can be reduced to the subproblem of optimizing individual relations independently of one another. Coupling factors are defined to represent all the interactions among the relations. This approach not only reduces the complexity of the problem significantly, but also provides a better understanding of underlying mechanisms. A closed noniterative formula is introduced for estimating the number of block accesses in a database organization, and its error is analyzed. This formula, an approximation of Yao's exact formula, has a maximum error of 3.7%, and significantly reduces the computation time by eliminating the iterative loop. It also achieves a much higher accuracy than an approximation proposed by Cardenas.
CS-TR-81-898

Report Number: CS-TR-82-892
Institution: Stanford University, Department of Computer Science
Title: An algorithm for reducing acyclic hypergraphs
Author: Kuper, Gabriel M.
Date: January 1982
Abstract: This report is a description of an algorithm to compute efficiently the Graham reduction of an acyclic hypergraph with sacred nodes. To apply the algorithm we must already have a tree representation of the hypergraph, and therefore it is useful when we have a fixed hypergraph and wish to compute Graham reductions many times, as we do in the System/U query interpretation algorithm.
CS-TR-82-892
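For readers unfamiliar with Graham reduction, here is the plain quadratic version (often called GYO reduction), without sacred nodes. It only shows what the report's algorithm computes; the report's contribution is computing this efficiently from a precomputed tree representation, which this sketch does not attempt:

```python
from collections import Counter

def gyo_reduce(edges):
    """Plain GYO (Graham) reduction: repeatedly delete vertices that
    occur in exactly one hyperedge, and hyperedges contained in another.
    The hypergraph is acyclic iff the result is empty.  Quadratic
    sketch, ignoring the report's sacred nodes and tree representation."""
    edges = [set(e) for e in edges]
    changed = True
    while changed:
        changed = False
        # delete vertices that occur in exactly one hyperedge
        count = Counter(v for e in edges for v in e)
        for e in edges:
            lonely = {v for v in e if count[v] == 1}
            if lonely:
                e -= lonely
                changed = True
        # delete empty hyperedges and hyperedges contained in another
        # (among equal edges, keep only the first)
        kept = []
        for i, e in enumerate(edges):
            subsumed = any(
                e < f or (e == f and j < i)
                for j, f in enumerate(edges) if j != i
            )
            if not e or subsumed:
                changed = True
            else:
                kept.append(e)
        edges = kept
    return edges
```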

Report Number: CS-TR-82-895
Institution: Stanford University, Department of Computer Science
Title: GLISP users' manual
Author: Novak, Gordon S., Jr.
Date: January 1982
Abstract: GLISP is a high-level, LISP-based language which is compiled into LISP. GLISP provides a powerful abstract datatype facility, allowing description and use of both LISP objects and objects in A.I. representation languages. GLISP language features include PASCAL-like control structures, infix expressions with operators which facilitate list manipulation, and reference to objects in PASCAL-like or English-like syntax. English-like definite reference to features of objects which are in the current computational context is allowed; definite references are understood and compiled relative to a knowledge base of object descriptions. Object-centered programming is supported; GLISP can substantially improve runtime performance of object-centered programs by optimized compilation of references to objects. This manual describes the GLISP language and use of GLISP within INTERLISP.
CS-TR-82-895

Report Number: CS-TR-82-903
Institution: Stanford University, Department of Computer Science
Title: Coloring maps and the Kowalski doctrine
Author: McCarthy, John
Date: April 1982
Abstract: It is attractive to regard an algorithm as composed of the logic determining what the results are and the control determining how the result is obtained. Logic programmers like to regard programming as controlled deduction, and there have been several proposals for controlling the deduction expressed by a Prolog program and not always using Prolog's normal backtracking algorithm. The present note discusses a map coloring program proposed by Pereira and Porto and two coloring algorithms that can be regarded as control applied to its logic. However, the control mechanisms required go far beyond those that have been contemplated in the Prolog literature.
CS-TR-82-903

Report Number: CS-TR-82-908
Institution: Stanford University, Department of Computer Science
Title: Neomycin: reconfiguring a rule-based expert system for application to teaching
Author: Clancey, William J.
Author: Letsinger, Reed
Date: May 1982
Abstract: NEOMYCIN is a medical consultation system in which MYCIN's knowledge base is reorganized and extended for use in GUIDON, a teaching program. The new system constitutes a psychological model for doing diagnosis designed to provide a basis for interpreting student behavior and teaching diagnostic strategy. The model separates out kinds of knowledge that are procedurally embedded in MYCIN's rules and so inaccessible to the teaching program. The key idea is to represent explicitly and separately a domain-independent diagnostic strategy in the form of meta-rules, knowledge about the structure of the problem space, causal and data/hypothesis rules and world facts. As a psychological model, NEOMYCIN captures the forward-directed, "compiled associations" mode of reasoning that characterizes expert behavior. Collection and interpretation of data are focused by the "differential" or working memory of hypotheses. Moreover, the knowledge base is broadened so that GUIDON can teach a student when to consider a specific infectious disease and what competing hypotheses to consider, essentially the knowledge a human would need in order to use the MYCIN consultation system properly.
CS-TR-82-908

Report Number: CS-TR-82-909
Institution: Stanford University, Department of Computer Science
Title: Plan recognition strategies in student modeling: prediction and description
Author: London, Bob
Author: Clancey, William J.
Date: May 1982
Abstract: This paper describes the student modeler of the GUIDON2 tutor, which understands plans by a dual search strategy. It first produces multiple predictions of student behavior by a model-driven simulation of the expert. Focused, data-driven searches then explain incongruities. By supplementing each other, these methods lead to an efficient and robust plan understander for a complex domain.
CS-TR-82-909

Report Number: CS-TR-82-910
Institution: Stanford University, Department of Computer Science
Title: Exploration of Teaching and Problem-Solving Strategies, 1979-1982
Author: Clancey, William J.
Author: Buchanan, Bruce
Date: May 1982
Abstract: This is the final report for Contract N-00014-79-C-0302, covering the period of 15 March 1979 through 14 March 1982. The goal of the project was to develop methods for representing teaching and problem-solving knowledge in computer-based tutorial systems. One focus of the work was formulation of principles for managing a case method tutorial dialogue; the other major focus was investigation of the use of a production rule representation for the subject material of a tutorial program. The main theme pursued by this research is that representing teaching and problem-solving knowledge separately and explicitly enhances the ability to build, modify and test complex tutorial programs. Two major computer programs were constructed. One was the tutorial program, GUIDON, which uses a set of explicit "discourse procedures" for carrying on a case method dialogue with a student. GUIDON uses the original MYCIN knowledge base as subject material, and as such, was an experiment in exploring the ways in which production rules can be used in tutoring. GUIDON's teaching knowledge is separate from and compatible with any knowledge base that is encoded in MYCIN's rule language. Demonstrations of GUIDON were given for two medical and one engineering application. Thus, the generality of this kind of system goes beyond being able to teach about any problem in a "case library"--it also allows teaching expertise to be transferred and tested in multiple problem domains. The second major program is the consultation program, NEOMYCIN. This is a second generation system in which MYCIN's knowledge has been reconfigured to make explicit distinctions that are important for teaching. Unlike MYCIN, the program uses the hypothesis-oriented approach and predominantly forward-directed reasoning. As such, NEOMYCIN is consistent with and extends psychological models of diagnostic problem-solving.
The program differs from other knowledge-based AI systems in that reasoning is completely controlled by a set of explicit meta-rules. These meta-rules are domain independent and constitute the diagnostic procedure to be taught to students: the tasks of diagnosis and heuristics for attending to and confirming relevant diagnostic hypotheses.
CS-TR-82-910

Report Number: CS-TR-82-911
Institution: Stanford University, Department of Computer Science
Title: Bibliography of Stanford Computer Science reports, 1963-1982
Author: Roberts, Barbara J.
Author: Marashian, Irris
Date: May 1982
Abstract: This report lists, in chronological order, all reports published by the Stanford Computer Science Department since 1963. Each report is identified by a Computer Science number, author's name, title, National Technical Information Service (NTIS) retrieval number (i.e., AD-XXXXXX), date, and number of pages. If the NTIS number is not given, it means that the report is probably not available from NTIS.
CS-TR-82-911

Report Number: CS-TR-82-912
Institution: Stanford University, Department of Computer Science
Title: The implication and finite implication problems for typed template dependencies
Author: Vardi, Moshe Y.
Date: May 1982
Abstract: The class of typed template dependencies is a class of data dependencies that includes embedded multivalued and join dependencies. We show that the implication and the finite implication problems for this class are unsolvable. An immediate corollary is that this class has no formal system for finite implication. We also show how to construct a finite set of typed template dependencies whose implication and finite implication problems are unsolvable. The class of projected join dependencies is a proper subclass of the above class, and it generalizes slightly embedded join dependencies. It is shown that the implication and the finite implication problems for this class are also unsolvable. An immediate corollary is that this class has no universe-bounded formal system for either implication or finite implication.
CS-TR-82-912

Report Number: CS-TR-82-914
Institution: Stanford University, Department of Computer Science
Title: Using string matching to compress Chinese characters
Author: Guoan, Gu
Author: Hobby, John
Date: May 1982
Abstract: A new method for font compression is introduced and compared to existing methods. A very compact representation is achieved by using a variant of McCreight's string matching algorithm to compress the bounding contour. Results from an actual implementation are given showing the improvement over other methods and how this varies with resolution and character complexity. Compression ratios of up to 150 are achieved for Chinese characters.
CS-TR-82-914

Report Number: CS-TR-82-915
Institution: Stanford University, Department of Computer Science
Title: Verification of concurrent programs: proving eventualities by well-founded ranking
Author: Manna, Zohar
Author: Pnueli, Amir
Date: May 1982
Abstract: In this paper, one of a series on verification of concurrent programs, we present proof methods for establishing eventuality and until properties. The methods are based on well-founded ranking and are applicable to both "just" and "fair" computations. These methods do not assume a decrease of the rank at each computation step. It is sufficient that there exists one process which decreases the rank when activated. Fairness then ensures that the program will eventually attain its goal.
CS-TR-82-915

Report Number: CS-TR-82-922
Institution: Stanford University, Department of Computer Science
Title: An approach to verifying completeness and consistency in a rule-based expert system
Author: Suwa, Motoi
Author: Scott, A. Carlisle
Author: Shortliffe, Edward H.
Date: June 1982
Abstract: We describe a program for verifying that a set of rules in an expert system comprehensively spans the knowledge of a specialized domain. The program has been devised and tested within the context of the ONCOCIN System, a rule-based consultant for clinical oncology. The stylized format of ONCOCIN's rules has allowed the automatic detection of a number of common errors as the knowledge base has been developed. This capability suggests a general mechanism for correcting many problems with knowledge base completeness and consistency before they can cause performance errors.
CS-TR-82-922

Report Number: CS-TR-82-923
Institution: Stanford University, Department of Computer Science
Title: Explanatory power for medical expert systems: studies in the representation of causal relationships for clinical consultations
Author: Wallis, Jerold W.
Author: Shortliffe, Edward H.
Date: July 1982
Abstract: This paper reports on experiments designed to identify and implement mechanisms for enhancing the explanation capabilities of reasoning programs for medical consultation. The goals of an explanation system are discussed, as is the additional knowledge needed to meet these goals in a medical domain. We have focussed on the generation of explanations that are appropriate for different types of system users. This task requires a knowledge of what is complex and what is important; it is further strengthened by a classification of the associations or causal mechanisms inherent in the inference rules. A causal representation can also be used to aid in refining a comprehensive knowledge base so that the reasoning and explanations are more adequate. We describe a prototype system which reasons from causal inference rules and generates explanations that are appropriate for the user.
CS-TR-82-923

Report Number: CS-TR-82-926
Institution: Stanford University, Department of Computer Science
Title: Principles of rule-based expert systems
Author: Buchanan, Bruce G.
Author: Duda, Richard O.
Date: August 1982
Abstract: Rule-based expert systems are surveyed. The most important considerations are representation and inference. Rule-based systems make strong assumptions about the representation of knowledge as conditional sentences and about the control of inference in one of three ways. The problem of reasoning with incomplete or inexact information is also discussed, as are several other issues regarding the design of expert systems.
CS-TR-82-926

Report Number: CS-TR-82-927
Institution: Stanford University, Department of Computer Science
Title: Combining state machines and regular expressions for automatic synthesis of VLSI circuits
Author: Ullman, Jeffrey D.
Date: September 1982
Abstract: We discuss a system for translating regular expressions into logic equations or PLA's, with particular attention to how we can obtain the benefits of both regular expressions and state machines as input languages. An extended example of the method is given, and the results of our approach are compared with hand design; in this example we use less than twice the area of a hand-designed, machine optimized PLA.
CS-TR-82-927

Report Number: CS-TR-82-928
Institution: Stanford University, Department of Computer Science
Title: Automated ambulatory medical record systems in the U.S.
Author: Kuhn, Ingeborg M.
Author: Wiederhold, Gio
Author: Rodnick, Jonathan E.
Author: Ramsey-Klee, Diane M.
Author: Benett, Sanford
Author: Beck, Donald D.
Date: August 1982
Abstract: This report presents an overview of the developments in Automated Ambulatory Medical Record Systems (AAMRS) from 1975 to the present. A summary of findings from a 1975 state-of-the-art review is presented along with the current findings of a follow-up study of a selected number of the AAMRS operating today. The studies revealed that effective automated medical record systems have been developed for ambulatory care settings and that they are now in the process of being transferred to other sites or users, either privately or as a commercial product. Since 1975 there have been no significant advances in system design. However, progress has been substantial in terms of achieving production goals. Even though a variety of systems are commercially available, there is a continuing need for research and development to improve the effectiveness of the systems in use today.
CS-TR-82-928

Report Number: CS-TR-82-931
Institution: Stanford University, Department of Computer Science
Title: PUFF: an expert system for interpretation of pulmonary function data
Author: Aikins, Janice S.
Author: Kunz, John C.
Author: Shortliffe, Edward H.
Author: Fallat, Robert J.
Date: September 1982
Abstract: The application of Artificial Intelligence techniques to real-world problems has produced promising research results, but seldom has a system become a useful tool in its domain of expertise. Notable exceptions are the DENDRAL and MOLGEN systems. This paper describes PUFF, a program that interprets lung function test data and has become a working tool in the pulmonary physiology lab of a large hospital. Elements of the problem that paved the way for its success are examined, as are significant limitations of the solution that warrant further study.
CS-TR-82-931

Report Number: CS-TR-82-932
Institution: Stanford University, Department of Computer Science
Title: Expert systems research: modeling the medical decision making process
Author: Shortliffe, Edward H.
Author: Fagan, Lawrence M.
Date: September 1982
Abstract: During the quarter century since the birth of the branch of computer science known as artificial intelligence (AI), much of the research has focused on developing symbolic models of human inference. In the last decade several related AI research themes have come together to form what is now known as "expert systems research." In this paper we review AI and expert systems to acquaint the reader with the field and to suggest ways in which this research will eventually be applied to advanced medical monitoring.
CS-TR-82-932

Report Number: CS-TR-82-933
Institution: Stanford University, Department of Computer Science
Title: An algorithmic method for studying percolation clusters
Author: Klein, Shmuel T.
Author: Shamir, Eli
Date: September 1982
Abstract: In percolation theory one studies configurations, based on some infinite lattice, where the sites of the lattice are randomly made F (full) with probability p or E (empty) with probability 1-p. For p > $p_c$, the set of configurations which contain an infinite cluster (a connectivity component) has probability 1. Using an algorithmic method and a rearrangement lemma for Bernoulli sequences, we compute the boundary-to-body quotient of infinite clusters and prove it has the definite value (1-p)/p with probability 1.
CS-TR-82-933

Report Number: CS-TR-82-947
Institution: Stanford University, Department of Computer Science
Title: Modelling degrees of item interest for a general database query system
Author: Rowe, Neil C.
Date: April 1982
Abstract: Many databases support decision-making. Often this means choices between alternatives according to partly subjective or conflicting criteria. Database query languages are generally designed for precise, logical specification of the data of interest, and tend to be awkward in the aforementioned circumstances. Information retrieval research suggests several solutions, but there are obstacles to generalizing these ideas to most databases. To address this problem we propose a methodology for automatically deriving and monitoring "degrees of interest" among alternatives for a user of a database system. This includes (a) a decision theory model of the value of information to the user, and (b) inference mechanisms, based in part on ideas from artificial intelligence, that can tune the model to observed user behavior. This theory has important applications to improving efficiency and cooperativeness of the interface between a decision-maker and a database system.
CS-TR-82-947

Report Number: CS-TR-82-949
Institution: Stanford University, Department of Computer Science
Title: The r-Stirling numbers
Author: Broder, Andrei Z.
Date: December 1982
Abstract: The r-Stirling numbers of the first and second kind count restricted permutations and respectively restricted partitions, the restriction being that the first r elements must be in distinct cycles and respectively distinct subsets. The combinatorial and algebraic properties of these numbers, which in most cases generalize similar properties of the regular Stirling numbers, are explored starting from the above definition.
CS-TR-82-949
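The definition in the abstract above translates directly into code. Below is a minimal, illustrative Python sketch of the r-Stirling numbers of the second kind, using the standard recurrence that follows from the definition (an element n > r either joins one of the k existing subsets or starts a new one); the function name and the memoization are our own, not from the report:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def r_stirling2(n, k, r):
    """Number of partitions of {1..n} into k non-empty subsets such that
    the first r elements lie in distinct subsets. Illustrative sketch;
    see Broder's report for the full theory."""
    if n < r or k < r:
        return 0                      # too few elements/subsets to separate 1..r
    if n == r:
        return 1 if k == r else 0     # base case: each of 1..r alone in its subset
    # element n joins one of k subsets, or forms a new subset on its own
    return k * r_stirling2(n - 1, k, r) + r_stirling2(n - 1, k - 1, r)
```

With r = 0 this reduces to the ordinary Stirling numbers of the second kind, e.g. `r_stirling2(4, 2, 0)` gives S(4,2) = 7.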

Report Number: CS-TR-82-950
Institution: Stanford University, Department of Computer Science
Title: Learning physical description from functional definitions, examples and precedents
Author: Winston, Patrick H.
Author: Binford, Thomas O.
Author: Katz, Boris
Author: Lowry, Michael
Date: January 1983
Abstract: It is too hard to tell vision systems what things look like. It is easier to talk about purpose and what things are for. Consequently, we want vision systems to use functional descriptions to identify things, when necessary, and we want them to learn physical descriptions for themselves, when possible. This paper describes a theory that explains how to make such systems work. The theory is a synthesis of two sets of ideas: ideas about learning from precedents and exercises developed at MIT and ideas about physical description developed at Stanford. The strength of the synthesis is illustrated by way of representative experiments. All of these experiments have been performed with an implemented system.
CS-TR-82-950

Report Number: CS-TR-82-951
Institution: Stanford University, Department of Computer Science
Title: Five paradigm shifts in programming language design and their realization in Viron, a dataflow programming environment
Author: Pratt, Vaughan
Date: December 1982
Abstract: We describe five paradigm shifts in programming language design, some old and some relatively new, namely Effect to Entity, Serial to Parallel, Partition Types to Predicate Types, Computable to Definable, and Syntactic Consistency to Semantic Consistency. We argue for the adoption of each. We exhibit a programming language, Viron, that capitalizes on these shifts.
CS-TR-82-951

Report Number: CS-TR-82-953
Institution: Stanford University, Department of Computer Science
Title: Partial bibliography of work on expert systems
Author: Buchanan, Bruce G.
Date: December 1982
Abstract: Since 1971 many publications on expert systems have appeared in conference proceedings and the technical literature. Over 200 titles are listed in the bibliography. Many relevant publications are omitted because they overlap publications on the list; others should be called to my attention.
CS-TR-82-953

Report Number: CS-TR-82-998
Institution: Stanford University, Department of Computer Science
Title: Knowledge engineering: a daily activity on a hospital ward
Author: Mulsant, Benoit
Author: Servan-Schreiber, David
Date: September 1983
Abstract: Two common barriers to the development and diffusion of Expert Systems in Medicine are the difficulty of design and the low level of acceptance. This paper reports on an original experience which suggests potential solutions to these issues: the task of Knowledge Engineering is performed by medical students and residents on a hospital ward using a sophisticated Knowledge Acquisition System, EMYCIN. The Knowledge Engineering sessions are analysed in detail and a structured method is proposed. A transcript of a sample run of the resulting program is presented along with an evaluation of its performance, acceptance, educational potential and the amount of effort required. The impact of the Knowledge Engineering process itself is then assessed from both the residents' and the medical students' standpoints. Finally, the possibility of generalizing the experiment is examined.
CS-TR-82-998

Report Number: CS-TR-83-945
Institution: Stanford University, Department of Computer Science
Title: Perseus: retrospective on a portable operating system
Author: Zwaenepoel, Willy
Author: Lantz, Keith A.
Date: February 1983
Abstract: We describe the operating system Perseus, developed as part of a study into the issues of computer communications and their impact on operating system and programming language design. Perseus was designed to be portable by virtue of its kernel-based structure and its implementation in Pascal. In particular, machine-dependent code is limited to the kernel and most operating system functions are provided by server processes, running in user mode. Perseus was designed to evolve into a distributed operating system by virtue of its interprocess communication facilities, based on message-passing. This paper presents an overview of the system and gives an assessment of how far it satisfied its original goals. Specifically, we evaluate its interprocess communication facilities and kernel-based structure, followed by a discussion of portability. We close with a brief history of the project, pointing out major milestones and stumbling blocks along the way.
CS-TR-83-945

Report Number: CS-TR-83-962
Institution: Stanford University, Department of Computer Science
Title: Bibliography of Stanford Computer Science reports, 1963-1983
Author: Berg, Kathryn A.
Date: March 1983
Abstract: This report lists, in chronological order, all reports published by the Stanford Computer Science Department since 1963. Each report is identified by a Computer Science number, author's name, title, National Technical Information Service (NTIS) retrieval number (i.e., AD-XXXXXX), date, and number of pages. If an NTIS number is not given, it means that the report is probably not available from NTIS.
CS-TR-83-962

Report Number: CS-TR-83-963
Institution: Stanford University, Department of Computer Science
Title: A hardware semantics based on temporal intervals
Author: Halpern, Joseph
Author: Manna, Zohar
Author: Moszkowski, Ben
Date: March 1983
Abstract: We present an interval-based temporal logic that permits the rigorous specification of a variety of hardware components and facilitates describing properties such as correctness of implementation. Conceptual levels of circuit operation ranging from detailed quantitative timing and signal propagation up to functional behavior are integrated in a unified way. After giving some motivation for reasoning about hardware, we present the propositional and first-order syntax and semantics of the temporal logic. In addition we illustrate techniques for describing signal transitions as well as for formally specifying and comparing a number of delay models. Throughout the discussion, the formalism provides a means for examining such concepts as device equivalence and internal states.
CS-TR-83-963

Report Number: CS-TR-83-964
Institution: Stanford University, Department of Computer Science
Title: Proving precedence properties: the temporal way
Author: Manna, Zohar
Author: Pnueli, Amir
Date: April 1983
Abstract: This paper explores the three important classes of temporal properties of concurrent programs: invariance, liveness and precedence. It presents the first methodological approach to the precedence properties, while providing a review of the invariance and liveness properties. The approach is based on the 'unless' operator, which is a weak version of the 'until' operator. For each class of properties, we present a single complete proof principle. Finally, we show that the properties of each class are decidable over finite state programs.
CS-TR-83-964

Report Number: CS-TR-83-965
Institution: Stanford University, Department of Computer Science
Title: An approach to type design and text composition in Indian scripts
Author: Ghosh, Pijush K.
Date: April 1983
Abstract: The knowledge of letters exerts a dual enchantment. When it uncovers the relationships between a series of arbitrary symbols and the sounds of speech, it fills us with joy. For others the visible expression of the letters, their graphical forms, their history and their development become fascinating. The advent of digital information technology has opened new vistas in the concept of letter forms. Unfortunately the graphics industry in India has remained almost unaffected by these technological advances, especially in the field of type design and text composition. This report strives to demonstrate how to use various tools and techniques, so that the new technology can cope with the plurality of Indian scripts. To start with, all you need to know is the basic shapes of the letters of the Roman alphabet and the sounds they represent. With this slender thread of knowledge an enjoyable study of letter design and text composition in Indian scripts can begin.
CS-TR-83-965

Report Number: CS-TR-83-966
Institution: Stanford University, Department of Computer Science
Title: A formal approach to lettershape description for type design
Author: Ghosh, Pijush K.
Author: Bigelow, Charles A.
Date: May 1983
Abstract: This report is designed to explore some analytic means of specifying lettershapes. Computer representation and analysis of lettershape have made use of two diametrically different approaches, one representing a shape by its boundary, the other by its skeleton or medial axis. Generally speaking, the boundary representation is conceptually simpler for the designer, but the skeletal representation provides more insight into the "piecedness" of the shape. Donald Knuth's METAFONT is one of the sophisticated lettering design systems which has basically adopted the medial axis approach. Moreover, the METAFONT system has introduced the idea of metafont-description of a letter, i.e., to give a rigorous definition of the shape of a letter in such a way that many styles are obtained from a single definition by changing only a few user-defined parameters. That is why we have considered the METAFONT system as our starting point and have shown how we can arrive at the definition of a formal language for specifying lettershapes. We have also introduced a simple mathematical model for decomposing a letter into its constituent elements.
CS-TR-83-966

Report Number: CS-TR-83-967
Institution: Stanford University, Department of Computer Science
Title: Verification of concurrent programs: a temporal proof system
Author: Manna, Zohar
Author: Pnueli, Amir
Date: June 1983
Abstract: A proof system based on temporal logic is presented for proving properties of concurrent programs based on the shared-variables computation model. The system consists of three parts: the general uninterpreted part, the domain dependent part and the program dependent part. In the general part we give a complete proof system for first-order temporal logic with detailed proofs of useful theorems. This logic enables reasoning about general time sequences. The domain dependent part characterizes the special properties of the domain over which the program operates. The program dependent part introduces program axioms which restrict the time sequences considered to be execution sequences of a given program. The utility of the full system is demonstrated by proving invariance, liveness and precedence properties of several concurrent programs. Derived proof principles for these classes of properties are obtained and lead to a compact representation of proofs.
CS-TR-83-967

Report Number: CS-TR-83-969
Institution: Stanford University, Department of Computer Science
Title: Reasoning in interval temporal logic
Author: Moszkowski, Ben
Author: Manna, Zohar
Date: July 1983
Abstract: Predicate logic is a powerful and general descriptive formalism with a long history of development. However, since the logic's underlying semantics have no notion of time, statements such as "I increases by 2" cannot be directly expressed. We discuss interval temporal logic (ITL), a formalism that augments standard predicate logic with operators for time-dependent concepts. Our earlier work used ITL to specify and reason about hardware. In this paper we show how ITL can also directly capture various control structures found in conventional programming languages. Constructs are given for treating assignment, iteration, sequential and parallel computations and scoping. The techniques used permit specification and reasoning about such algorithms as concurrent Quicksort. We compare ITL with the logic-based programming languages Lucid and Prolog.
CS-TR-83-969

Report Number: CS-TR-83-971
Institution: Stanford University, Department of Computer Science
Title: Letterform design systems
Author: Ruggles, Lynn
Date: April 1983
Abstract: The design of letterforms requires a skilled hand, an eye for fine detail and an understanding of the letterforms themselves. This work has traditionally been done by experienced artisans, but in the last fifteen years there have been attempts to integrate the design process with the use of computers in order to create digital type forms. The use of design systems for the creation of these digital forms has led to an analysis of the way type designs are created by type designers. Their methods have been integrated into a variety of systems for creating digital forms. This paper describes these design systems and discusses the relevant issues for the success of the systems that exist and are used today.
CS-TR-83-971

Report Number: CS-TR-83-972
Institution: Stanford University, Department of Computer Science
Title: Experience with a regular expression compiler
Author: Karlin, Anna R.
Author: Trickey, Howard W.
Author: Ullman, Jeffrey D.
Date: June 1983
Abstract: The language of regular expressions is a useful one for specifying certain sequential processes at a very high level. They allow easy modification of designs for circuits, like controllers, that are described by patterns of events they must recognize and the responses they must make to those patterns. This paper discusses the compilation of such expressions into reasonably compact layouts. The translation of regular expressions into nondeterministic automata by two different methods is discussed, along with the advantages of each method. A major part of the compilation problem is selection of good state codes for the nondeterministic automata; one successful strategy is explained in the paper.
CS-TR-83-972

Report Number: CS-TR-83-973
Institution: Stanford University, Department of Computer Science
Title: The distributed V kernel and its performance for diskless workstations
Author: Cheriton, David R.
Author: Zwaenepoel, Willy
Date: July 1983
Abstract: The distributed V kernel is a message-oriented kernel that provides uniform local and network interprocess communication. It is primarily being used in an environment of diskless workstations connected by a high-speed local network to a set of file servers. We describe a performance evaluation of the kernel, with particular emphasis on the cost of network file access. Our results show that over a local network: 1. Diskless workstations can access remote files with minimal performance penalty. 2. The V message facility can be used to access remote files at comparable cost to any well-tuned specialized file access protocol. We conclude that it is feasible to build a distributed system with all network communication using the V message facility even when most of the network nodes have no secondary storage.
CS-TR-83-973

Report Number: CS-TR-83-974
Institution: Stanford University, Department of Computer Science
Title: A Chinese Meta-Font
Author: Hobby, John
Author: Guoan, Gu
Date: July 1983
Abstract: METAFONT is Donald E. Knuth's system for alphabet design. The system allows an entire family of fonts or "meta-fonts" to be specified precisely and mathematically so that it can be produced in different sizes and styles for different raster devices. We present a new technique for defining Chinese characters hierarchically with METAFONT. We define METAFONT subroutines for commonly used portions of strokes and then combine some of these into routines for drawing complete strokes. Parameters describe the skeletons of the strokes and the stroke routines are carefully designed to transform themselves appropriately. This allows us to handle all of the basic strokes with only 14 different routines. The stroke routines in turn are used to build up groups of strokes and radicals. Special routines for positioning control points ensure that the strokes will join properly in a variety of different styles. The radical routines are parameterized to allow them to be placed at different locations in the typeface and to allow for adjusting their size and shape. Key points are positioned relative to the bounding box for the radical, and the special positioning routines find other points that must be passed to the stroke routines. We use this method to design high quality Song style characters. Global parameters control the style, and we show how these can be used to create Song and Long Song from the same designs. Other settings can produce other familiar styles or even new styles. We show how it is possible to create completely different styles, such as Bold style, merely by substituting different stroke routines. The global parameters can be used to augment simple scaling by altering stroke width and other details to account for changes in size. We can adjust stroke widths to help even out the overall darkness of the characters. We also show how it is possible to experiment with new ideas such as adjusting character widths individually. 
While many of our characters are based on existing designs, the stroke routines facilitate the design of new characters without the need to refer to detailed drawings. The skeletal parameters and special positioning routines make it easy to position the strokes properly. In our previous paper, in contrast to this, we parameterized the strokes according to their boundaries and copied an existing design. The previous approach made it very difficult to create different styles with the same METAFONT program.
CS-TR-83-974

Report Number: CS-TR-83-980
Institution: Stanford University, Department of Computer Science
Title: The WEB system of structured documentation
Author: Knuth, Donald E.
Date: September 1983
Abstract: This memo describes how to write programs in the WEB language (Version 2.3, September 1983); and it also includes the full WEB documentation for WEAVE and TANGLE, the programs that read WEB input and produce TEX and PASCAL output, respectively.
CS-TR-83-980

Report Number: CS-TR-83-985
Institution: Stanford University, Department of Computer Science
Title: First grade TEX: a beginner's TEX manual
Author: Samuel, Arthur L.
Date: November 1983
Abstract: This is an introductory ready-reference TEX82 manual for the beginner who would like to do First Grade TEX work. Only the most basic features of the TEX system are discussed in detail. Other features are summarized in an appendix and references are given to the more complete documentation available elsewhere.
CS-TR-83-985

Report Number: CS-TR-83-989
Institution: Stanford University, Department of Computer Science
Title: A programming and problem-solving seminar
Author: Knuth, Donald E.
Author: Weening, Joseph S.
Date: December 1983
Abstract: This report contains edited transcripts of the discussions held in Stanford's course CS 204, Problem Seminar, during autumn quarter 1981. Since the topics span a large range of ideas in computer science, and since most of the important research paradigms and programming paradigms were touched on during the discussions, these notes may be of interest to graduate students of computer science at other universities, as well as to their professors and to professional people in the "real world." The present report is the fourth in a series of such transcripts, continuing the tradition established in CS606 (Michael J. Clancy, 1977), CS707 (Chris Van Wyk, 1979), and CS863 (Allan A. Miller, 1981).
CS-TR-83-989

Report Number: CS-TR-83-990
Institution: Stanford University, Department of Computer Science
Title: A programming and problem-solving seminar
Author: Hobby, John D.
Author: Knuth, Donald E.
Date: December 1983
Abstract: This report contains edited transcripts of the discussions held in Stanford's course CS204, Problem Seminar, during autumn quarter 1982. Since the topics span a large range of ideas in computer science, and since most of the important research paradigms and programming paradigms were touched on during the discussions, these notes may be of interest to graduate students of computer science at other universities, as well as to their professors and to professional people in the "real world." The present report is the fifth in a series of such transcripts, continuing the tradition established in STAN-CS-77-606 (Michael J. Clancy, 1977), STAN-CS-79-707 (Chris Van Wyk, 1979), STAN-CS-81-863 (Allan A. Miller, 1981), STAN-CS-83-989 (Joseph S. Weening, 1983).
CS-TR-83-990

Report Number: CS-TR-83-991
Institution: Stanford University, Department of Computer Science
Title: Parallel algorithms for arithmetics, irreducibility and factoring of GF(q)-polynomials
Author: Morgensteren, Moshe
Author: Shamir, Eli
Date: December 1983
Abstract: A new algorithm for testing irreducibility of polynomials over finite fields without gcd computations makes it possible to devise efficient parallel algorithms for polynomial factorization. We also study the probability that a random polynomial over a finite field has no factors of small degree.
CS-TR-83-991

Report Number: CS-TR-83-992
Institution: Stanford University, Department of Computer Science
Title: The language of an interactive proof checker
Author: Ketonen, Jussi
Author: Weening, Joseph S.
Date: December 1983
Abstract: We describe the underlying language for EKL, an interactive theorem-proving system currently under development at the Stanford Artificial Intelligence Laboratory. Some of the reasons for its development as well as its mathematical properties are discussed.
CS-TR-83-992

Report Number: CS-TR-83-994
Institution: Stanford University, Department of Computer Science
Title: Sorting by recursive partitioning
Author: Chapiro, Daniel M.
Date: December 1983
Abstract: We present a new O(n lg lg n) time sort algorithm that is more robust than O(n) distribution sorting algorithms. The algorithm uses a recursive partition-concatenate approach, partitioning each set into a variable number of subsets using information gathered dynamically during execution. Sequences are partitioned using statistical information computed during the sort for each sequence. Space complexity is O(n) and is independent of the order and distribution of the data. If the data is originally in a list, only O($\sqrt{n}$) extra space is necessary. The algorithm is insensitive to the initial ordering of the data, and it is much less sensitive to the distribution of the values of the sorting keys than distribution sorting algorithms. Its worst-case time is O(n lg lg n) across all distributions that satisfy a new "fractalness" criterion. This condition, which is sufficient but not necessary, is satisfied by any set with bounded length keys and bounded repetition of each key. If this condition is not satisfied, its worst case performance degrades gracefully to O(n lg n). In practice, this occurs when the density of the distribution over $\Omega$(n) of the keys is a fractal curve (for sets of numbers whose values are bounded), or when the distribution has very heavy tails with arbitrarily long keys (for sets of numbers whose precision is bounded). In some preliminary tests, it was faster than Quicksort for sets of more than 150 elements. The algorithm is practical, works basically "in place", can be easily implemented and is particularly well suited both for parallel processing and for external sorting.
CS-TR-83-994
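The partition-concatenate approach described in the abstract can be illustrated with a generic sketch. The following Python toy is not Chapiro's algorithm: it simply gathers range statistics in a pass over the data, buckets the keys into a variable number of subsets, recurses on each bucket, and concatenates the results. All names, the bucket-count heuristic, and the cutoff are illustrative assumptions:

```python
def partition_sort(a, cutoff=16):
    """Toy recursive partition-concatenate sort (illustrative only, not
    the report's algorithm). Keys are bucketed using min/max statistics
    gathered during the pass; buckets are sorted recursively and the
    results concatenated."""
    if len(a) <= cutoff:
        return sorted(a)              # small sets: fall back to library sort
    lo, hi = min(a), max(a)           # statistics gathered during this pass
    if lo == hi:
        return list(a)                # all keys equal: already sorted
    nb = max(2, int(len(a) ** 0.5))   # variable number of subsets
    width = (hi - lo) / nb
    buckets = [[] for _ in range(nb + 1)]
    for x in a:
        buckets[int((x - lo) / width)].append(x)
    out = []
    for b in buckets:                 # concatenate recursively sorted buckets
        out.extend(partition_sort(b, cutoff))
    return out
```

The sketch runs in roughly linear time for well-spread keys and degrades toward comparison sorting (via the fallback) on skewed ones, which mirrors the graceful-degradation behaviour the abstract claims for the real algorithm.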

Report Number: CS-TR-83-995
Institution: Stanford University, Department of Computer Science
Title: The advantages of abstract control knowledge in expert system design
Author: Clancey, William J.
Date: November 1983
Abstract: A poorly designed knowledge base can be as cryptic as an arbitrary program and just as difficult to maintain. Representing control knowledge abstractly, separately from domain facts and relations, makes the design more transparent and explainable. A body of abstract control knowledge provides a generic framework for constructing knowledge bases for related problems in other domains and also provides a useful starting point for studying the nature of strategies.
CS-TR-83-995

Report Number: CS-TR-83-996
Institution: Stanford University, Department of Computer Science
Title: Strategic explanations for a diagnostic consultation system
Author: Hasling, Diane Warner
Author: Clancey, William J.
Author: Rennels, Glenn
Date: November 1983
Abstract: This paper examines the problem of automatic explanation of reasoning, especially as it relates to expert systems. By explanation we mean the ability of a program to discuss what it is doing in some understandable way. We first present a general framework in which to view explanation and review some of the research done in this area. We then focus on the explanation system for NEOMYCIN, a medical consultation program. A consultation program interactively helps a user to solve a problem. Our goal is to have NEOMYCIN explain its problem-solving strategies. An explanation of strategy describes the plan the program is using to reach a solution. Such an explanation is usually concrete, referring to aspects of the current problem situation. Abstract explanations articulate a general principle, which can be applied in different situations; such explanations are useful in teaching and in explaining by analogy. We describe the aspects of NEOMYCIN that make abstract strategic explanations possible--the representation of strategic knowledge explicitly and separately from domain knowledge--and demonstrate how this representation can be used to generate explanations.
CS-TR-83-996

Report Number: CS-TR-84-1003
Institution: Stanford University, Department of Computer Science
Title: Parallelism and greedy algorithms
Author: Anderson, Richard
Author: Mayr, Ernst
Date: April 1984
Abstract: A number of greedy algorithms are examined and are shown to be probably inherently sequential. Greedy algorithms are presented for finding a maximal path, for finding a maximal set of disjoint paths in a layered dag, and for finding the largest induced subgraph of a graph that has all vertices of degree at least k. It is shown that for all of these algorithms, the problem of determining if a given node is in the solution set of the algorithm is P-complete. This means that it is unlikely that these sequential algorithms can be sped up significantly using parallelism.
CS-TR-84-1003
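One of the greedy algorithms named in the abstract, finding the largest induced subgraph in which every vertex has degree at least k, admits a simple sequential sketch: repeatedly delete any vertex of degree below k until none remains. The Python below is a standard rendering of that peeling procedure (today usually called the k-core), not the authors' code; the adjacency-dict representation is our own assumption:

```python
def k_core(adj, k):
    """Greedy peeling: repeatedly delete vertices of degree < k.
    What survives is the largest induced subgraph with minimum degree
    at least k. `adj` maps each vertex to the set of its neighbours."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    alive = set(adj)
    queue = [v for v in alive if deg[v] < k]   # vertices pending deletion
    while queue:
        v = queue.pop()
        if v not in alive:
            continue
        alive.discard(v)
        for u in adj[v]:                       # deleting v lowers neighbours' degrees
            if u in alive:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    return alive
```

Note that the deletion order does not affect the result, which is why the P-completeness result in the abstract concerns the solution sets of the other greedy algorithms (maximal path, disjoint paths) whose output does depend on the order of greedy choices.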

Report Number: CS-TR-84-1004
Institution: Stanford University, Department of Computer Science
Title: A computational theory of higher brain function
Author: Goldschlager, Leslie M.
Date: April 1984
Abstract: The higher functions of the brain are believed to occur in the cortex. This region of the brain is modelled as a memory surface which performs both storage and computation. Concepts are modelled as patterns of activity on the memory surface, and the model explains how these patterns interact with one another to give the computations which the brain performs. The method of interaction can explain the formation of abstract concepts, association of ideas and train of thought. It is shown that creativity, self, consciousness and free will are explainable within the same framework. A theory of sleep is presented which is consistent with the model.
CS-TR-84-1004

Report Number: CS-TR-84-1005
Institution: Stanford University, Department of Computer Science
Title: Adequate proof principles for invariance and liveness properties of concurrent programs
Author: Manna, Zohar
Author: Pnueli, Amir
Date: May 1984
Abstract: This paper presents proof principles for establishing invariance and liveness properties of concurrent programs. Invariance properties are established by systematically checking that they are preserved by every atomic instruction in the program. The methods for establishing liveness properties are based on 'well-founded assertions' and are applicable to both "just" and "fair" computations. These methods do not assume a decrease of the rank at each computation step. It is sufficient that there exists one process which decreases the rank when activated. Fairness then ensures that the program will eventually attain its goal. In the finite state case such proofs can be represented by diagrams. Several examples are given.
CS-TR-84-1005

Report Number: CS-TR-84-1006
Institution: Stanford University, Department of Computer Science
Title: EKL - an interactive proof checker user's reference manual
Author: Ketonen, Jussi
Author: Weening, Joseph S.
Date: June 1984
Abstract: EKL is an interactive proof checker and constructor. Its main goal is to facilitate the checking of mathematical proofs. Some of the special features of EKL are: * The language of EKL can be extended all the way to finite-order predicate logic with typed lambda-calculus. * Several proofs can be handled at the same time. * Metatheoretic reasoning allows formal extensions of the capabilities of EKL. * EKL is a programmable system. The MACLISP language is available to the user, and LISP functions can be written to create input to EKL, thereby allowing expression of proofs in an arbitrary input language. This document is a reference manual for EKL. Each of the sections discusses a major part of the language, beginning with an overview of that area, and proceeding to a detailed discussion of available features. To gain an acquaintance with EKL, it is recommended that you read only the introductory part of each section. EKL may be used both on the Stanford Artificial Intelligence Laboratory (SAIL) computer system and on DEC TOPS-20 systems that support MACLISP.
CS-TR-84-1006

Report Number: CS-TR-84-1007
Institution: Stanford University, Department of Computer Science
Title: Queue-based multi-processing Lisp
Author: Gabriel, Richard P.
Author: McCarthy, John
Date: June 1984
Abstract: This report presents a dialect of Lisp, called QLAMBDA, which supports multi-processing. Along with the definition of the dialect, the report presents programming examples and performance studies of some programs written in QLAMBDA. Unlike other proposed multi-processing Lisps, QLAMBDA provides only a few very powerful and intuitive primitives rather than a number of parallel variants of familiar constructs.
CS-TR-84-1007

Report Number: CS-TR-84-1009
Institution: Stanford University, Department of Computer Science
Title: Complexity of a top-down capture rule
Author: Sagiv, Yehoshua
Author: Ullman, Jeffrey D.
Date: July 1984
Abstract: Capture rules were introduced in [U] as a method for planning the evaluation of a query expressed in first-order logic. We examine a capture rule that is substantiated by a simple top-down implementation of restricted Horn clause logic. A necessary and sufficient condition for the top-down algorithm to converge is shown. It is proved that, provided there is a bound on the number of arguments of predicates, the test can be performed in polynomial time; however, if the arity of predicates is made part of the input, then the problem of deciding whether the top-down algorithm converges is NP-hard. We then consider relaxation of some of our constraints on the form of the logic, showing that success of the top-down algorithm can still be tested in polynomial time if the number of arguments is limited and in exponential time if not.
CS-TR-84-1009

Report Number: CS-TR-84-1012
Institution: Stanford University, Department of Computer Science
Title: TABLOG: the deductive-tableau programming language
Author: Malachi, Yonathan
Author: Manna, Zohar
Author: Waldinger, Richard
Date: June 1984
Abstract: TABLOG (Tableau Logic Programming Language) is a language based on first-order predicate logic with equality that combines functional and logic programming. TABLOG incorporates advantages of LISP and PROLOG. A program in TABLOG is a list of formulas in a first-order logic (including equality, negation, and equivalence) that is more general and more expressive than PROLOG's Horn clauses. Whereas PROLOG programs must be relational, TABLOG programs may define either relations or functions. While LISP programs yield results of a computation by returning a single output value, TABLOG programs can be relations and can produce several results simultaneously through their arguments. TABLOG employs the Manna-Waldinger deductive-tableau proof system as an interpreter in the same way that PROLOG uses a resolution-based proof system. Unification is used by TABLOG to match a call with a line in the program and to bind arguments. The basic rules of deduction used for computing are nonclausal resolution and rewriting by means of equality and equivalence. A pilot interpreter for the language has been implemented.
CS-TR-84-1012
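The abstract notes that TABLOG, like PROLOG, matches a call against a program line by unification. A minimal first-order unification sketch in Python may make the mechanism concrete; the `?`-variable encoding and function names are illustrative conventions of mine, not TABLOG's, and a real implementation would add an occurs-check:

```python
def is_var(t):
    # convention: variables are strings starting with '?'
    return isinstance(t, str) and t.startswith("?")

def bind(v, t, subst):
    # if v is already bound, unify its binding with t instead
    if v in subst:
        return unify(subst[v], t, subst)
    return {**subst, v: t}

def unify(x, y, subst=None):
    """Return a substitution dict making x and y equal, or None.
    Compound terms are tuples ('f', arg1, ...)."""
    if subst is None:
        subst = {}
    if x == y:
        return subst
    if is_var(x):
        return bind(x, y, subst)
    if is_var(y):
        return bind(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, unifying the call `('plus', '?x', 0)` with the program head `('plus', 3, 0)` binds `?x` to 3, which is exactly the argument-binding step the abstract describes.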

Report Number: CS-TR-84-1014
Institution: Stanford University, Department of Computer Science
Title: A P-complete problem and approximations to it
Author: Anderson, Richard
Author: Mayr, Ernst W.
Date: September 1984
Abstract: The P-complete problem that we will consider is the High Degree Subgraph Problem. This problem is: given a graph G = (V,E) and an integer k, find the maximum induced subgraph of G that has all nodes of degree at least k. After showing that this problem is P-complete, we will discuss two approaches to finding approximate solutions to it in NC. We will give a variant of the problem that is also P-complete that can be approximated to within a factor of c in NC, for any c < 1/2, but cannot be approximated by a factor of better than 1/2 unless P = NC. We will also give an algorithm that finds a subgraph with moderately high minimum degree. This algorithm exhibits an interesting relationship between its performance and the time it takes.
CS-TR-84-1014
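Sequentially, the High Degree Subgraph Problem is easy: the standard peeling procedure (a sketch of the folk algorithm, not the report's construction) repeatedly deletes any node of degree below k, and what survives is the maximum induced subgraph with minimum degree at least k. The P-completeness result says this inherently sequential-looking loop is unlikely to parallelize into NC:

```python
from collections import defaultdict

def high_degree_subgraph(edges, k):
    """Greedy peeling: repeatedly delete nodes of degree < k.
    The surviving nodes form the maximum induced subgraph in
    which every node has degree at least k.  (Isolated nodes
    not appearing in any edge are ignored.)"""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # worklist of nodes currently below the threshold
    pending = [u for u in adj if len(adj[u]) < k]
    removed = set()
    while pending:
        u = pending.pop()
        if u in removed:
            continue
        removed.add(u)
        for v in adj[u]:
            adj[v].discard(u)
            if v not in removed and len(adj[v]) < k:
                pending.append(v)
    return {u for u in adj if u not in removed}
```

On a triangle with a pendant node, `high_degree_subgraph([(1,2),(2,3),(1,3),(3,4)], 2)` returns the triangle `{1, 2, 3}`.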

Report Number: CS-TR-84-1018
Institution: Stanford University, Department of Computer Science
Title: Classification problem solving
Author: Clancey, William J.
Date: July 1984
Abstract: A broad range of heuristic programs--embracing forms of diagnosis, catalog selection, and skeletal planning--accomplish a kind of well-structured problem solving called classification. These programs have a characteristic inference structure that systematically relates data to a pre-enumerated set of solutions by abstraction, heuristic association, and refinement. This level of description specifies the knowledge needed to solve a problem, independent of its representation in a particular computer language. The classification problem-solving model provides a useful framework for recognizing and representing similar problems, for designing representation tools, and for understanding the problem-solving methods used by non-classification programs.
CS-TR-84-1018

Report Number: CS-TR-84-1023
Institution: Stanford University, Department of Computer Science
Title: A method for managing evidential reasoning in a hierarchical hypothesis space
Author: Gordon, Jean
Author: Shortliffe, Edward H.
Date: September 1984
Abstract: No abstract.
CS-TR-84-1023

Report Number: CS-TR-84-1024
Institution: Stanford University, Department of Computer Science
Title: How to share memory in a distributed system
Author: Upfal, Eli
Author: Wigderson, Avi
Date: October 1984
Abstract: We study the power of shared memory in models of parallel computation. We describe a novel distributed data structure that eliminates the need for shared memory without significantly increasing the run time of the parallel computation. More specifically, we show how a complete network of processors can deterministically simulate one PRAM step in $O(\log n \, (\log\log n)^2)$ time, when both models use n processors and the size of the PRAM's shared memory is polynomial in n. (The best previously known upper bound was the trivial O(n).) We also establish that this upper bound is nearly optimal. We prove that an on-line simulation of T PRAM steps by a complete network of processors requires $\Omega(T \log n / \log\log n)$ time. A simple consequence of the upper bound is that an Ultracomputer (the only currently feasible general-purpose parallel machine) can simulate one step of a PRAM (the most convenient parallel model to program) in $O((\log n \log\log n)^2)$ steps.
CS-TR-84-1024
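To get a feel for the gap between the two bounds, a quick numeric comparison of the deterministic simulation cost against the trivial O(n) bound is instructive (illustrative only; constant factors and the exact machine model are ignored):

```python
import math

def simulation_cost(n):
    """Upfal-Wigderson deterministic bound, log n (loglog n)^2,
    for simulating one PRAM step on an n-processor complete
    network.  Constant factors are ignored."""
    lg = math.log2(n)
    return lg * math.log2(lg) ** 2

# the simulation cost stays polylogarithmic while the trivial
# bound grows linearly in n
for n in (2 ** 10, 2 ** 20, 2 ** 30):
    print(n, round(simulation_cost(n), 1))
```

Even at a billion processors the simulated step costs only a few hundred network steps under this bound, versus a billion for the naive simulation.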

Report Number: CS-TR-84-1025
Institution: Stanford University, Department of Computer Science
Title: Fast scheduling algorithms on parallel computers
Author: Helmbold, David
Author: Mayr, Ernst
Date: November 1984
Abstract: With the introduction of parallel processing, scheduling problems have generated great interest. Although there are good sequential algorithms for many scheduling problems, there are few fast parallel scheduling algorithms. In this paper we present several good scheduling algorithms that run on EREW PRAMs. For the unit execution time case, we have algorithms that will schedule n jobs with intree or outtree precedence constraints in $O(\log n)$ time. The intree algorithm requires $n^3$ processors, and the outtree algorithm requires $n^4$ processors. Another type of scheduling problem is list scheduling, where a list of n jobs with integer execution times is to be scheduled in list order. We show that the general list scheduling problem on two identical processors is polynomial-time complete, and therefore is not likely to have a fast parallel algorithm. However, when the length of the (binary representation of the) execution times is bounded by $O(\log^c n)$ there is an NC algorithm using $n^4$ processors.
CS-TR-84-1025
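List scheduling itself is easy to state sequentially; the report's point is that computing the resulting schedule fast in parallel is hard. A minimal sequential sketch, assuming (as is standard) that each job is assigned, in list order, to the machine that becomes free earliest:

```python
import heapq

def list_schedule(times, m=2):
    """Assign each job, in list order, to the machine that
    becomes free first.  Returns the makespan and the
    per-machine lists of job indices."""
    # heap of (finish_time, machine_id)
    machines = [(0, i) for i in range(m)]
    heapq.heapify(machines)
    assignment = [[] for _ in range(m)]
    for job, t in enumerate(times):
        finish, i = heapq.heappop(machines)
        assignment[i].append(job)
        heapq.heappush(machines, (finish + t, i))
    return max(f for f, _ in machines), assignment
```

Note that reordering the same jobs changes the makespan (e.g. `[3, 3, 2, 2, 2]` yields makespan 7 on two machines while `[3, 2, 2, 3, 2]` yields 6), which is why the list order is part of the problem instance.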

Report Number: CS-TR-84-1027
Institution: Stanford University, Department of Computer Science
Title: A torture test for TEX
Author: Knuth, Donald E.
Date: November 1984
Abstract: Programs that claim to be implementations of TEX82 are supposed to be able to process the test routine contained in this report, producing the outputs contained in this report.
CS-TR-84-1027

Report Number: CS-TR-84-1028
Institution: Stanford University, Department of Computer Science
Title: Parallel graph algorithms
Author: Hochschild, Peter H.
Author: Mayr, Ernst W.
Author: Siegel, Alan R.
Date: December 1984
Abstract: This paper presents new paradigms to solve efficiently a variety of graph problems on parallel machines. These paradigms make it possible to discover and exploit the "parallelism" inherent in many classical graph problems. We abandon attempts to force sequential algorithms into parallel environments, for such attempts usually result in transforming a good uniprocessor algorithm into a hopelessly greedy parallel algorithm. We show that by employing more local computation and mild redundancy, a variety of problems can be solved in a resource- and time-efficient manner on a variety of architectures.
CS-TR-84-1028

Report Number: CS-TR-84-1032
Institution: Stanford University, Department of Computer Science
Title: Solving the Prisoner's Dilemma
Author: Genesereth, Michael R.
Author: Ginsberg, Matthew L.
Author: Rosenschein, Jeffrey S.
Date: November 1984
Abstract: A framework is proposed for analyzing various types of rational interaction. We consider a variety of restrictions of participants' moves; each leads to a different characterization of rational behavior. Under an assumption of "common rationality," it is proven that participants will cooperate, rather than defect, in the Prisoner's Dilemma.
CS-TR-84-1032
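The dilemma itself is easy to state numerically. A minimal sketch using the textbook payoff matrix (the specific numbers are the standard convention, not taken from the report):

```python
# Row player's utility for (own move, opponent's move);
# "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3,   # mutual cooperation
    ("C", "D"): 0,   # sucker's payoff
    ("D", "C"): 5,   # temptation to defect
    ("D", "D"): 1,   # mutual defection
}

def best_response(opponent):
    """Defection dominates: it is the best response to either move."""
    return max("CD", key=lambda m: PAYOFF[(m, opponent)])

# individually, defection is always better ...
assert best_response("C") == "D" and best_response("D") == "D"
# ... yet mutual cooperation beats mutual defection
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
```

The tension between these two assertions is precisely what the report's common-rationality assumption resolves in favor of cooperation.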

Report Number: CS-TR-84-1034
Institution: Stanford University, Department of Computer Science
Title: BB1: an architecture for blackboard systems that control, explain, and learn about their own behavior
Author: Hayes-Roth, Barbara
Date: December 1984
Abstract: BB1 implements a domain-independent "blackboard control architecture" for AI systems that control, explain, and learn about their own problem-solving behavior. A BB1 system comprises: a user-defined domain blackboard, a pre-defined control blackboard, user-defined domain and control knowledge sources, a few generic control knowledge sources, and a pre-defined basic control loop. The architecture's run-time user interface provides capabilities for: displaying the blackboard, knowledge sources, and pending knowledge source actions, recommending an action for execution, explaining a recommendation, accepting a user's override, executing a designated action, and running without user intervention. BB1 supports a variety of control behavior ranging from execution of pre-defined control procedures to dynamic construction and modification of complex control plans during problem solving. It explains problem-solving actions by showing their roles in the underlying control plan. It learns new control heuristics from experience, applies them within the current problem-solving session, and uses them to construct new control plans in subsequent sessions.
CS-TR-84-1034

Report Number: CS-TR-85-1035
Institution: Stanford University, Department of Computer Science
Title: RESIDUE: a deductive approach to design synthesis
Author: Finger, J. J.
Author: Genesereth, Michael R.
Date: January 1985
Abstract: We present a new approach to deductive design synthesis, the Residue Approach, in which designs are represented as sets of constraints. Previous approaches, such as PROLOG [18] or the work of Manna and Waldinger [11], express designs as bindings on single terms. We give a complete and sound procedure for finding sets of propositions constituting a legal design. The size of the search space of the procedure and the advantages and disadvantages of the Residue Approach are analysed. In particular we show how Residue can avoid backtracking caused by making design decisions of overly coarse granularity. In contrast, it is awkward for the single term approaches to do the same. In addition we give a rule for constraint propagation in deductive synthesis, and show its use in pruning the design space. Finally, Residue is related to other work, in particular, to Default Logic [16] and to Assumption-Based Truth Maintenance [1].
CS-TR-85-1035

Report Number: CS-TR-85-1036
Institution: Stanford University, Department of Computer Science
Title: Learning control heuristics in BB1
Author: Hayes-Roth, Barbara
Author: Hewett, Micheal
Date: January 1985
Abstract: BB1, a blackboard system building architecture, ameliorates the knowledge acquisition bottleneck with generic knowledge sources that learn control heuristics. Some learning knowledge sources replace the knowledge engineer, interacting directly with domain experts. Others operate autonomously. The paper presents a trace from the illustrative knowledge source, Understand-Preference, running in PROTEAN, a blackboard system for elucidating protein structure. Understand-Preference is triggered when a domain expert overrides one of BB1's scheduling recommendations. It identifies and encodes the heuristic underlying the expert's scheduling decision. The trace illustrates how learning knowledge sources exploit BB1's rich representation of domain and control knowledge, actions, and results.
CS-TR-85-1036

Report Number: CS-TR-85-1037
Institution: Stanford University, Department of Computer Science
Title: Expressiveness and language choice
Author: MacKinlay, Jock
Author: Genesereth, Michael R.
Date: January 1985
Abstract: Specialized languages are often more appropriate than general languages for expressing certain information. However, specialized languages must be chosen carefully because they do not allow all sets of facts to be stated. This paper considers the problems associated with choosing among specialized languages. Methods are presented for determining that a set of facts is expressible in a language, for identifying when additional facts are stated accidentally, and for choosing among languages that can express a set of facts. This research is being used to build a system that automatically chooses an appropriate graphical language to present a given set of facts.
CS-TR-85-1037

Report Number: CS-TR-85-1038
Institution: Stanford University, Department of Computer Science
Title: Uniform hashing is optimal
Author: Yao, Andrew C.
Date: January 1985
Abstract: It was conjectured by J. Ullman that uniform hashing is optimal in its expected retrieval cost among all open-address hashing schemes (JACM 19 (1972), 569-575). In this paper we show that, for any open-address hashing scheme, the expected cost of retrieving a record from a large table which is an $\alpha$-fraction full is at least $\frac{1}{\alpha} \log \frac{1}{1-\alpha} + o(1)$. This proves Ullman's conjecture to be true in the asymptotic sense.
CS-TR-85-1038
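As a quick illustration of the bound being matched, the classical expected retrieval cost of uniform hashing (Knuth's standard analysis, with the natural logarithm; background material, not code from the report) can be tabulated directly:

```python
import math

def uniform_hashing_retrieval_cost(alpha):
    """Asymptotic expected number of probes to retrieve a record
    from a table that is an alpha-fraction full, under uniform
    hashing: (1/alpha) * ln(1/(1-alpha))."""
    return (1.0 / alpha) * math.log(1.0 / (1.0 - alpha))

# the cost grows slowly until the table is nearly full
for alpha in (0.5, 0.9, 0.99):
    print(alpha, round(uniform_hashing_retrieval_cost(alpha), 3))
```

At half full the expected cost is only about 1.39 probes; Yao's lower bound says no open-addressing scheme can do asymptotically better than this curve.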

Report Number: CS-TR-85-1043
Institution: Stanford University, Department of Computer Science
Title: Constructing a perfect matching is in Random NC
Author: Karp, Richard M.
Author: Upfal, Eli
Author: Wigderson, Avi
Date: March 1985
Abstract: We show that the problem of constructing a perfect matching in a graph is in the complexity class Random NC: i.e., the problem is solvable in polylog time by a randomized parallel algorithm using a polynomial-bounded number of processors. We also show that several related problems lie in Random NC. These include: (i) Constructing a perfect matching of maximum weight in a graph whose edge weights are given in unary notation; (ii) Constructing a maximum-cardinality matching; (iii) Constructing a matching covering a set of vertices of maximum weight in a graph whose vertex weights are given in binary; (iv) Constructing a maximum s-t flow in a directed graph whose edge weights are given in unary.
CS-TR-85-1043

Report Number: CS-TR-85-1047
Institution: Stanford University, Department of Computer Science
Title: Smooth, easy to compute interpolating splines
Author: Hobby, John D.
Date: January 1985
Abstract: We present a system of interpolating splines with first and approximate second order geometric continuity. The curves are easily computed in linear time by solving a system of linear equations without the need to resort to any kind of successive approximation scheme. Emphasis is placed on the need to find aesthetically pleasing curves in a wide range of circumstances; favorable results are obtained even when the knots are very unequally spaced or widely separated. The curves are invariant under scaling, rotation, and reflection, and the effects of a local change fall off exponentially as one moves away from the disturbed knot. Approximate second order continuity is achieved by using a linear "mock curvature" function in place of the actual endpoint curvature for each spline segment and choosing tangent directions at knots so as to equalize these. This avoids extraneous solutions and other forms of undesirable behavior without seriously compromising the quality of the results. The actual spline segments can come from any family of curves whose endpoint curvatures can be suitably approximated, but we propose a specific family of parametric cubics. There is freedom to allow tangent directions and "tension" parameters to be specified at knots, and special "curl" parameters may be given for additional control near the endpoints of open curves.
CS-TR-85-1047

Report Number: CS-TR-85-1048
Institution: Stanford University, Department of Computer Science
Title: Some constructions for order-theoretic models of concurrency
Author: Pratt, Vaughan
Date: March 1985
Abstract: We give "tight" and "loose" constructions suitable for specifying processes represented as sets of pomsets (partially ordered multisets). The tight construction is suitable for specifying "primitive" processes; it introduces the dual notions of concurrence and orthocurrence. The loose construction specifies a process in terms of a net of communicating subprocesses; it introduces the notion of a utilization embedding a process in a net.
CS-TR-85-1048

Report Number: CS-TR-85-1049
Institution: Stanford University, Department of Computer Science
Title: The pomset model of parallel processes: unifying the temporal and the spatial
Author: Pratt, Vaughan
Date: January 1985
Abstract: No abstract.
CS-TR-85-1049

Report Number: CS-TR-85-1050
Institution: Stanford University, Department of Computer Science
Title: Fast sequential algorithms to find shuffle-minimizing and shortest paths in a shuffle-exchange network
Author: Hershberger, John
Author: Mayr, Ernst
Date: May 1985
Abstract: This paper analyzes the problem of finding shortest paths and shuffle-minimizing paths in an n-node shuffle-exchange network, where $n = 2^m$. Such paths have the properties needed by the Valiant-Brebner permutation routing algorithm, unlike the trivial (m - 1)-shuffle paths usually used for shuffle-exchange routing. The Valiant-Brebner algorithm requires n simultaneous route computations, one for each packet to be routed, which can be done in parallel. We give fast sequential algorithms for both problems we consider. Restricting the shortest path problem to allow only paths that use fewer than m shuffles provides intuition applicable to the general problem. Linear-time pattern matching techniques solve part of the restricted problem; as a consequence, a path using fewest shuffles can be found in O(m) time, which is optimal up to a constant factor. The shortest path problem is equivalent to the problem of finding the Hamming distances between a bitstring and all shifted instances of another. An application of the fast Fourier transform solves this problem and the shortest path problem in O(m log m) time.
CS-TR-85-1050
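The all-shifts Hamming distance problem the abstract reduces to is simple to state directly. A naive $O(m^2)$ sketch (using cyclic shifts here for concreteness; the report's FFT approach brings this down to O(m log m)):

```python
def hamming_all_shifts(a, b):
    """Hamming distance between bitstring `a` and every cyclic
    shift of an equal-length bitstring `b`, computed directly."""
    m = len(a)
    assert len(b) == m
    return [sum(a[i] != b[(i + s) % m] for i in range(m))
            for s in range(m)]
```

For example, `hamming_all_shifts("0011", "0011")` gives `[0, 2, 4, 2]`: distance zero at shift 0, and the complementary pattern at shift 2.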

Report Number: CS-TR-85-1051
Institution: Stanford University, Department of Computer Science
Title: Special relations in automated deduction
Author: Manna, Zohar
Author: Waldinger, Richard
Date: May 1985
Abstract: Two deduction rules are introduced to give streamlined treatment to relations of special importance in an automated theorem-proving system. These rules, the relation replacement and relation matching rules, generalize to an arbitrary binary relation the paramodulation and E-resolution rules, respectively, for equality, and may operate within a nonclausal or clausal system. The new rules depend on an extension of the notion of polarity to apply to subterms as well as to subsentences, with respect to a given binary relation. The rules allow us to eliminate troublesome axioms, such as transitivity and monotonicity, from the system; proofs are shorter and more comprehensible, and the search space is correspondingly deflated.
CS-TR-85-1051

Report Number: CS-TR-85-1053
Institution: Stanford University, Department of Computer Science
Title: Transaction classification to survive a network partition
Author: Apers, Peter M. G.
Author: Wiederhold, Gio
Date: August 1984
Abstract: When comparing centralized and distributed databases, one of the advantages of distributed databases is said to be the greater availability of the data. Availability is defined as having access to the stored data for update and retrieval, even when some distributed sites are down due to hardware failures. We will investigate the functioning of a distributed database of which the underlying computer network may fail. A classification of transactions is given to allow an implementation of different levels of operability. Some transactions can be guaranteed to commit in spite of a network partition, while others have to wait until the state of potential transactions in the other partitions is also known. An algorithm is given to compute a classification. Based on histories of transactions kept in the different partitions a merge of histories is computed, generating the new values for some data items when communication is re-established. The algorithm to compute the merge of the histories makes use of a knowledge base containing knowledge about the transactions, to decide whether to merge, delete, or delay a transaction.
CS-TR-85-1053

Report Number: CS-TR-85-1055
Institution: Stanford University, Department of Computer Science
Title: A Programming and Problem-Solving Seminar
Author: Knuth, Donald E.
Date: June 1985
Abstract: This report contains edited transcripts of the discussions held in Stanford's course CS204, Problem Seminar, during winter quarter 1985. Since the topics span a large range of ideas in computer science, and since most of the important research paradigms and programming paradigms were touched on during the discussions, these notes may be of interest to graduate students of computer science at other universities, as well as to their professors and to professional people in the "real world." The present report is the sixth in a series of such transcripts, continuing the tradition established in STAN-CS-77-606 (Michael J. Clancy, 1977), STAN-CS-79-707 (Chris Van Wyk, 1979), STAN-CS-81-863 (Allan A. Miller, 1981), STAN-CS-83-989 (Joseph S. Weening, 1983), STAN-CS-83-990 (John D. Hobby, 1983).
CS-TR-85-1055

Report Number: CS-TR-85-1056
Institution: Stanford University, Department of Computer Science
Title: Nonclausal temporal deduction
Author: Manna, Zohar
Date: June 1985
Abstract: We present a proof system for propositional temporal logic. This system is based on nonclausal resolution; proofs are natural and generally short. Its extension to first-order temporal logic is considered. Two variants of the system are described. The first one is for a logic with $\Box$ ("always"), $\Diamond$ ("sometime"), and $\bigcirc$ ("next"). The second variant is an extension of the first one to a logic with the additional operators U ("until") and P ("precedes"). Each of these variants is proved complete.
CS-TR-85-1056

Report Number: CS-TR-85-1058
Institution: Stanford University, Department of Computer Science
Title: Host groups: a multicast extension for datagram internetworks
Author: Cheriton, David R.
Author: Deering, Stephen E.
Date: July 1985
Abstract: The extensive use of local networks is beginning to drive requirements for internetwork facilities that connect these local networks. In particular, the availability of multicast addressing in many local networks and its use by sophisticated distributed applications motivates providing multicast across internetworks. In this paper, we propose a model of service for multicast in an internetwork, describe how this service can be used, and describe aspects of its implementation, including how it would fit into one existing internetwork architecture, namely the US DoD Internet Architecture.
CS-TR-85-1058

Report Number: CS-TR-85-1062
Institution: Stanford University, Department of Computer Science
Title: Computer Science comprehensive examinations, 1981/82-1984/85
Author: Keller, Arthur M.
Date: August 1985
Abstract: This report is a collection of the eight comprehensive examinations from Winter 1982 through Spring 1985 prepared by the faculty and students of Stanford's Computer Science Department together with solutions to the problems posed.
CS-TR-85-1062

Report Number: CS-TR-85-1065
Institution: Stanford University, Department of Computer Science
Title: Review of Sowa's "Conceptual Structures"
Author: Clancey, William J.
Date: March 1985
Abstract: "Conceptual Structures" is a bold, provocative synthesis of logic, linguistics, and Artificial Intelligence research. At the very least, Sowa has provided a clean, well-grounded notation for knowledge representation that many researchers will want to emulate and build upon. At its best, Sowa's notation and proofs hint at what a future Principia Mathematica of knowledge and reasoning may look like. No other AI text achieves so much in breadth, style, and mathematical precision. This is a book that everyone in AI and cognitive science should know about, and that experienced researchers will profit from studying in some detail.
CS-TR-85-1065

Report Number: CS-TR-85-1066
Institution: Stanford University, Department of Computer Science
Title: Heuristic classification
Author: Clancey, William J.
Date: June 1985
Abstract: A broad range of well-structured problems--embracing forms of diagnosis, catalog selection, and skeletal planning--are solved in "expert systems" by the method of heuristic classification. These programs have a characteristic inference structure that systematically relates data to a pre-enumerated set of solutions by abstraction, heuristic association, and refinement. In contrast with previous descriptions of classification reasoning, particularly in psychology, this analysis emphasizes the role of a heuristic in routine problem solving as a non-hierarchical, direct association between concepts. In contrast with other descriptions of expert systems, this analysis specifies the knowledge needed to solve a problem, independent of its representation in a particular computer language. The heuristic classification problem-solving model provides a useful framework for characterizing kinds of problems, for designing representation tools, and for understanding non-classification (constructive) problem-solving methods.
CS-TR-85-1066

Report Number: CS-TR-85-1067
Institution: Stanford University, Department of Computer Science
Title: Acquiring, representing, and evaluating a competence model of diagnostic strategy
Author: Clancey, William J.
Date: August 1985
Abstract: NEOMYCIN is a computer program that models one physician's diagnostic reasoning within a limited area of medicine. NEOMYCIN's diagnostic procedure is represented in a well-structured way, separately from the domain knowledge it operates upon. We are testing the hypothesis that such a procedure can be used to simulate both expert problem-solving behavior and a good teacher's explanations of reasoning. The model is acquired by protocol analysis, using a framework that separates an expert's causal explanations of evidence from his descriptions of knowledge relations and strategies. The model is represented by a procedural network of goals and rules that are stated in terms of the effect the problem solver is trying to have on his evolving model of the world. The model is evaluated for sufficiency by testing it in different settings requiring expertise, such as providing advice and teaching. The model is evaluated for plausibility by arguing that the constraints implicit in the diagnostic procedure are imposed by the task domain and human computational capability. This paper discusses NEOMYCIN's diagnostic procedure in detail, viewing it as a memory aid, as a set of operators, as proceduralized constraints, and as a grammar. This study provides new perspectives on the nature of "knowledge compilation" and how an expert-teacher's explanations relate to a working program.
CS-TR-85-1067

Report Number: CS-TR-85-1068
Institution: Stanford University, Department of Computer Science
Title: GUIDON-WATCH: a graphic interface for viewing a knowledge-based system
Author: Richer, Mark H.
Author: Clancey, William J.
Date: August 1985
Abstract: This paper describes GUIDON-WATCH, a graphic interface that uses multiple windows and a mouse to allow a student to browse a knowledge base and view reasoning processes during diagnostic problem solving. Methods are presented for providing multiple views of hierarchical structures, overlaying results of a search process on top of static structures to make the strategy visible, and graphically expressing evidence relations between findings and hypotheses. This work demonstrates the advantages of stating a diagnostic search procedure in a well-structured, rule-based language, separate from domain knowledge. A number of issues in software design are also considered, including the automatic management of a multiple-window display.
CS-TR-85-1068

Report Number: CS-TR-85-1072
Institution: Stanford University, Department of Computer Science
Title: A Programming and Problem-Solving Seminar
Author: Mayr, Ernst W.
Author: Anderson, Richard J.
Author: Hochschild, Peter H.
Date: October 1985
Abstract: This report contains edited transcripts of the discussions held in Stanford's course CS204, Problem Seminar, during winter quarter 1984. The course topics consisted of five problems coming from different areas of computer science. The problems were discussed in class and solved and programmed by the students working in teams.
CS-TR-85-1072

Report Number: CS-TR-85-1074
Institution: Stanford University, Department of Computer Science
Title: Designing new typefaces with Metafont
Author: Southall, Richard
Date: September 1985
Abstract: The report discusses issues associated with the symbolic design of new typefaces using programming languages such as Metafont. A consistent terminology for the subject area is presented. A schema for type production systems is described that lays stress on the importance of communication between the designer of a new typeface and the producer of the fonts that embody it. The methods used for the design of printers' type from the sixteenth century to the present day are surveyed in the context of this schema. The differences in the designer's task in symbolic and graphic design modes are discussed. A new typeface design made with Metafont is presented, and the usefulness of Metafont as a tool for making new designs is considered.
CS-TR-85-1074

Report Number: CS-TR-85-1075
Institution: Stanford University, Department of Computer Science
Title: Expert systems: working systems and the research literature
Author: Buchanan, Bruce G.
Date: October 1985
Abstract: Expert systems are the subject of considerable interest among persons in AI research or applications. There is no single definition of an expert system, and thus no precisely defined set of programs or set of literature references that represent work on expert systems. This report provides (a) a characterization of what an expert system is, (b) a list of expert systems in routine use or field testing, and (c) a list of relevant references in the AI research literature.
CS-TR-85-1075

Report Number: CS-TR-85-1076
Institution: Stanford University, Department of Computer Science
Title: Some approaches to knowledge acquisition
Author: Buchanan, Bruce G.
Date: July 1985
Abstract: Knowledge acquisition is not a single, monolithic problem for AI. There are many ways to approach the topic in order to understand issues and design useful tools for constructing knowledge-based systems. Several of those approaches are being explored in the Knowledge Systems Laboratory (KSL) at Stanford.
CS-TR-85-1076

Report Number: CS-TR-85-1079
Institution: Stanford University, Department of Computer Science
Title: Two processor scheduling is in NC
Author: Helmbold, David
Author: Mayr, Ernst
Date: October 1985
Abstract: We present a parallel algorithm for the two processor scheduling problem. This algorithm constructs an optimal schedule for unit execution time task systems with arbitrary precedence constraints, using a polynomial number of processors and running in time polylogarithmic in the size of the input. Whereas previous parallel solutions for the problem made extensive use of randomization, our algorithm is completely deterministic and based on an interesting decomposition technique. The algorithm is also of independent interest for two reasons: it provides another example of the apparent difference in complexity between decision and search problems in the context of fast parallel computation, and it gives an NC algorithm for the matching problem in certain restricted cases.
CS-TR-85-1079

Report Number: CS-TR-85-1084
Institution: Stanford University, Department of Computer Science
Title: Taliesin: a distributed bulletin board system
Author: Edighoffer, Judy L.
Author: Lantz, Keith A.
Date: September 1985
Abstract: This paper describes a computer bulletin board facility intended to support replicated bulletin boards on a network that may frequently be in a state of partition. The two major design issues covered are the choice of a name space and the choice of replication algorithms. The impact of the name space on communication costs is explained. A special purpose replication algorithm that provides high availability and response despite network partition is introduced.
CS-TR-85-1084

Report Number: CS-TR-85-1086
Institution: Stanford University, Department of Computer Science
Title: Towards a universal directory service
Author: Lantz, Keith A.
Author: Edighoffer, Judy L.
Author: Hitson, Bruce L.
Date: August 1985
Abstract: Directory services and name servers have been discussed and implemented for a number of distributed systems. Most have been tightly interwoven with the particular distributed systems of which they are a part; a few are more general in nature. In this paper we survey recent work in this area and discuss the advantages and disadvantages of a number of approaches. From this, we are able to extract some fundamental requirements of a naming system capable of handling a wide variety of object types in a heterogeneous environment. We outline how these requirements can be met in a universal directory service.
CS-TR-85-1086

Report Number: CS-TR-85-1087
Institution: Stanford University, Department of Computer Science
Title: Preemptable remote execution facilities for the V-system
Author: Theimer, Marvin M.
Author: Lantz, Keith A.
Author: Cheriton, David R.
Date: September 1985
Abstract: A remote execution facility allows a user of a workstation-based distributed system to offload programs onto idle workstations, thereby providing the user with access to computational resources beyond those provided by his personal workstation. In this paper, we describe the design and performance of the remote execution facility in the V distributed system, as well as several implementation issues of interest. In particular, we focus on network transparency of the execution environment, preemption and migration of remotely executed programs, and avoidance of residual dependencies on the original host. We argue that preemptable remote execution allows idle workstations to be used as a "pool of processors" without interfering with use by their owners and without significant overhead for the normal execution of programs. In general, we conclude that the cost of providing preemption is modest compared to providing a similar amount of computation service by dedicated "computation engines".
CS-TR-85-1087

Report Number: CS-TR-85-1080
Institution: Stanford University, Department of Computer Science
Title: The compleat guide to MRS
Author: Russell, Stuart
Date: June 1985
Abstract: MRS is a logic programming system with extensive meta-level facilities. As such it can be used to implement virtually all kinds of artificial intelligence applications in a wide variety of architectures. This guide is intended to be a comprehensive text and reference for MRS. It also attempts to explain the foundations of the logic programming approach from the ground up, and it is hoped that it will thus provide access, even for the uninitiated, to all the benefits of AI methods. The only prerequisites for understanding MRS are a passing acquaintance with LISP and an open mind.
CS-TR-85-1080

Report Number: CS-TR-86-1085
Institution: Stanford University, Department of Computer Science
Title: Bibliography of Computer Science reports, 1963-1986
Author: Berg, Kathryn A.
Author: Marashian, Taleen
Date: June 1986
Abstract: This report lists, in chronological order, all reports published by the Stanford Computer Science Department since 1963. Each report is identified by a Computer Science number, author's name, title, National Technical Information Service (NTIS) retrieval number (i.e., AD-XXXXXX), date, and number of pages.
CS-TR-86-1085

Report Number: CS-TR-86-1093
Institution: Stanford University, Department of Computer Science
Title: A general reading list for artificial intelligence
Author: Subramanian, Devika
Author: Buchanan, Bruce G.
Date: December 1985
Abstract: This reading list is based on the syllabus for the course CS229b offered in Winter 1985. This course was an intensive 10 week survey intended as preparation for the 1984-85 qualifying examination in Artificial Intelligence at Stanford University.
CS-TR-86-1093

Report Number: CS-TR-86-1094
Institution: Stanford University, Department of Computer Science
Title: Expert systems: working systems and the research literature
Author: Buchanan, Bruce G.
Date: December 1985
Abstract: Many expert systems have moved out of development laboratories into field test and routine use. About sixty such systems are listed. Academic research laboratories are contributing manpower to fuel the commercial development of AI. But the quantity of AI research may decline as a result unless the applied systems are experimented with and analyzed.
CS-TR-86-1094

Report Number: CS-TR-86-1095
Institution: Stanford University, Department of Computer Science
Title: A torture test for METAFONT
Author: Knuth, Donald E.
Date: January 1986
Abstract: Programs that claim to be implementations of METAFONT84 are supposed to be able to process the test routine contained in this report, producing the outputs contained in this report.
CS-TR-86-1095

Report Number: CS-TR-86-1096
Institution: Stanford University, Department of Computer Science
Title: A model-theoretic approach to updating logical databases
Author: Wilkins, Marianne Winslett
Date: January 1986
Abstract: We show that it is natural to extend the concept of database updates to encompass databases with incomplete information. Our approach embeds the incomplete database and the updates in the language of first-order logic, which we believe has strong advantages over relational tables and traditional data manipulation languages in the incomplete information situation. We present semantics for our update operators, and also provide an efficient algorithm to perform the operations.
CS-TR-86-1096

Report Number: CS-TR-86-1097
Institution: Stanford University, Department of Computer Science
Title: TEXware.
Author: Knuth, Donald E.
Date: April 1986
Abstract: This report documents four TEX utility programs: The POOLtype processor (Version 2, July 1983), The TFtoPL processor (Version 2.5, September 1985), The PLtoTF processor (Version 2.3, August 1985), and The DVItype processor (Version 2.8, August 1984).
CS-TR-86-1097

Report Number: CS-TR-86-1100
Institution: Stanford University, Department of Computer Science
Title: Modal theorem proving
Author: Manna, Zohar
Date: May 1986
Abstract: We describe resolution proof systems for several modal logics. First we present the propositional versions of the systems and prove their completeness. The first-order resolution rule for classical logic is then modified to handle quantifiers directly. This new resolution rule enables us to extend our propositional systems to complete first-order systems. The systems for the different modal logics are closely related.
CS-TR-86-1100

Report Number: CS-TR-86-1102
Institution: Stanford University, Department of Computer Science
Title: Data independent recursion in deductive databases
Author: Naughton, Jeffrey F.
Date: February 1986
Abstract: Some recursive definitions in deductive database systems can be replaced by equivalent nonrecursive definitions. In this paper we give a linear-time algorithm that detects many such definitions, and specify a useful subset of recursive definitions for which the algorithm is complete. It is unlikely that our algorithm can be extended significantly, as recent results by Gaifman [5] and Vardi [19] show that the general problem is undecidable. We consider two types of initialization of the recursively defined relation: arbitrary initialization, and initialization by a given nonrecursive rule. This extends earlier work by Minker and Nicolas [10], and by Ioannidis [7], and is related to bounded tableau results by Sagiv [14]. Even if there is no equivalent nonrecursive definition, a modification of our algorithm can be used to optimize a recursive definition and improve the efficiency of the compiled evaluation algorithms proposed in Henschen and Naqvi [6] and in Bancilhon et al. [3].
CS-TR-86-1102
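The boundedness phenomenon the abstract describes can be made concrete with a toy example (hypothetical, not taken from the report): the recursive rule set {t(X) :- p(X); t(X) :- t(Y), q(X)} is data independent, since on every database its fixpoint equals that of the nonrecursive definition {t(X) :- p(X); t(X) :- p(Y), q(X)}. A minimal Python sketch of naive bottom-up evaluation checks the equivalence:

```python
import random

def recursive_t(p, q):
    """Naive bottom-up evaluation of the recursive definition:
         t(X) :- p(X).
         t(X) :- t(Y), q(X).
    """
    t = set(p)
    while True:
        new = set(t)
        if t:            # t(Y) succeeds for some Y, so every q(X) fires
            new |= set(q)
        if new == t:
            return t
        t = new

def nonrecursive_t(p, q):
    """Equivalent nonrecursive definition:
         t(X) :- p(X).
         t(X) :- p(Y), q(X).
    """
    t = set(p)
    if p:
        t |= set(q)
    return t

# the two definitions agree on every database (sampled here)
for _ in range(100):
    p = {random.randrange(5) for _ in range(random.randrange(4))}
    q = {random.randrange(5) for _ in range(random.randrange(4))}
    assert recursive_t(p, q) == nonrecursive_t(p, q)
```

The recursion reaches its fixpoint after a bounded number of iterations regardless of the data, which is what makes the nonrecursive replacement possible.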

Report Number: CS-TR-86-1104
Institution: Stanford University, Department of Computer Science
Title: CS229b: a survey of AI classnotes for Winter 84-85
Author: Subramanian, Devika
Date: April 1986
Abstract: These are the compiled classnotes for the course CS229b offered in Winter 1985. This course was an intensive 10 week survey intended as preparation for the 1984-85 qualifying examination in Artificial Intelligence at Stanford University.
CS-TR-86-1104

Report Number: CS-TR-86-1105
Institution: Stanford University, Department of Computer Science
Title: Software-controlled caches in the VMP multiprocessor
Author: Cheriton, David R.
Author: Slavenburg, Gert A.
Author: Boyle, Patrick D.
Date: March 1986
Abstract: VMP is an experimental multiprocessor that follows the familiar basic design of multiple processors, each with a cache, connected by a shared bus to global memory. Each processor has a synchronous, virtually addressed, single master connection to its cache, providing very high memory bandwidth. An unusually large cache page size and fast sequential memory copy hardware make it feasible for cache misses to be handled in software, analogously to the handling of virtual memory page faults. Hardware support for cache consistency is limited to a simple state machine that monitors the bus and interrupts the processor when a cache consistency action is required. In this paper, we show how the VMP design provides the high memory bandwidth required by modern high-performance processors with a minimum of hardware complexity and cost. We also describe simple solutions to the consistency problems associated with virtually addressed caches. Simulation results indicate that the design achieves good performance provided that data contention is not excessive.
CS-TR-86-1105

Report Number: CS-TR-86-1106
Institution: Stanford University, Department of Computer Science
Title: A timely resolution
Author: Manna, Zohar
Date: April 1986
Abstract: We present a novel proof system R for First-order (Linear) Temporal Logic. This system extends our Propositional Temporal Logic proof system ([AM]). The system R is based on nonclausal resolution; proofs are natural and generally short. Special quantifier rules, unification techniques, and a resolution rule are introduced. We relate R to other proof systems for First-order Temporal Logic and discuss completeness issues. The system R should be useful as a tool for such tasks as verification of concurrent programs and reasoning about hardware devices.
CS-TR-86-1106

Report Number: CS-TR-86-1109
Institution: Stanford University, Department of Computer Science
Title: A proof editor for propositional temporal logic
Author: Casley, Ross
Date: May 1986
Abstract: This report describes PTL, a program to assist in constructing proofs in propositional logic extended by the operators $\Box$ ("always"), $\Diamond$ ("eventually") and $\bigcirc$ ("at the next step"). This is called propositional temporal logic and is one of two systems of logic presented by Abadi and Manna in [1].
CS-TR-86-1109

Report Number: CS-TR-86-1114
Institution: Stanford University, Department of Computer Science
Title: Optimizing function-free recursive inference rules
Author: Naughton, Jeffrey F.
Date: May 1986
Abstract: Recursive inference rules arise in recursive definitions in logic programming systems and in database systems with recursive query languages. Let D be a recursive definition of a relation t. We say that D is minimal if for any predicate p in a recursive rule in D, p must appear in a recursive rule in any definition of t. We show that testing for minimality is in general undecidable. However, we do present an efficient algorithm for a useful class of recursive rules, and show how to use it to transform a recursive definition to a minimal recursive definition. Evaluating the optimized definition will avoid redundant computation without the overhead of caching intermediate results and run-time checking for duplicate goals.
CS-TR-86-1114

Report Number: CS-TR-86-1115
Institution: Stanford University, Department of Computer Science
Title: The heuristic refinement method for deriving solution structures of proteins
Author: Buchanan, Bruce G.
Author: Hayes-Roth, Barbara
Author: Lichtarge, Olivier
Author: Altman, Russ
Author: Brinkley, James
Author: Hewett, Michael
Author: Cornelius, Craig
Author: Duncan, Bruce
Author: Jardetzky, Oleg
Date: March 1986
Abstract: A new method is presented for determining structures of proteins in solution. The method uses constraints inferred from analytic data to successively refine both the locations for parts of the structure and the levels of detail for describing those parts. A computer program, called PROTEAN, which encodes this method, has been partially implemented and was used to derive structures for the lac-repressor headpiece from experimental data.
CS-TR-86-1115

Report Number: CS-TR-86-1116
Institution: Stanford University, Department of Computer Science
Title: Inductive knowledge acquisition for rule-based expert systems
Author: Fu, Li-Min
Author: Buchanan, Bruce G.
Date: October 1985
Abstract: The RL program was developed to construct knowledge bases automatically in rule-based expert systems, primarily in MYCIN-like evidence-gathering systems where there is uncertainty about data as well as the strength of inference, and where rules are chained together or combined to infer complex hypotheses. This program comprises three subprograms: (1) a program that learns confirming rules, which employs a heuristic search commencing with the most general hypothesis; (2) a subprogram that learns rules containing intermediate concepts, which exploits the old partial knowledge or defines new intermediate concepts, based on heuristics; (3) a program that learns disconfirming rules, which is based on the expert's heuristics to formulate disconfirming rules. RL's validity has been demonstrated with a performance program that diagnoses the causes of jaundice.
CS-TR-86-1116

Report Number: CS-TR-86-1117
Institution: Stanford University, Department of Computer Science
Title: An Empirical Study of Distributed Application Performance
Author: Lantz, Keith
Author: Nowicki, William
Author: Theimer, Marvin
Date: October 1985
Abstract: A major reason for the rarity of distributed applications, despite the proliferation of networks, is the sensitivity of their performance to various aspects of the network environment. We demonstrate that distributed applications can run faster than local ones, using common hardware. We also show that the primary factors affecting performance are, in approximate order of importance: speed of the user's workstation, speed of the remote host (if any), and the high-level (above the transport level) protocols used. In particular, the use of batching, pipelining, and structure in high-level protocols reduces the degradation often experienced across networks of differing bandwidth. Less significant, but still noticeable, improvements result from proper design and implementation of underlying transport protocols. Ultimately, with proper application of these techniques, network bandwidth is rendered virtually insignificant.
CS-TR-86-1117

Report Number: CS-TR-86-1118
Institution: Stanford University, Department of Computer Science
Title: Applications of Parallel Scheduling to Perfect Graphs
Author: Helmbold, David
Author: Mayr, Ernst
Date: June 1986
Abstract: We combine a parallel algorithm for the two processor scheduling problem, which runs in polylog time on a polynomial number of processors, with an algorithm to find transitive orientations of graphs where they exist. Both algorithms together solve the maximum clique problem and the minimum coloring problem for comparability graphs, and the maximum matching problem for co-comparability graphs. These parallel algorithms can also be used to identify permutation graphs and interval graphs, important subclasses of perfect graphs.
CS-TR-86-1118

Report Number: CS-TR-86-1119
Institution: Stanford University, Department of Computer Science
Title: Simulation of an Ultracomputer with Several 'Hot Spots'
Author: Rosenblum, David S.
Author: Mayr, Ernst W.
Date: June 1986
Abstract: This report describes the design and results of a time-driven simulation of an Ultracomputer-like multiprocessor in the presence of several "hot spots," or memory modules which are frequent targets of requests. Such hot spots exist during execution of parallel programs in which the several threads of control synchronize through manipulation of a small number of shared variables. The simulated system is comprised of N processing elements (PEs) and N shared memory modules connected by an N x N buffered, packet-switched Omega network. The simulator was designed to accept a wide variety of system configurations to enable observation of many different characteristics of the system behavior. We present the results of four experiments: (1) General simulation of several 16-PE configurations, (2) General simulation of several 512-PE configurations, (3) Determination of critical queue lengths as a function of request rate (512 PEs) and (4) Determination of the effect of hot spot spacing on system performance (512 PEs).
CS-TR-86-1119

Report Number: CS-TR-86-1123
Institution: Stanford University, Department of Computer Science
Title: Blackboard Systems
Author: Nii, H. Penny
Date: June 1986
Abstract: The first blackboard system was the HEARSAY-II speech understanding system that evolved between 1971 and 1976. Subsequently, many systems have been built that have similar system organizations and run-time behavior. The objectives of this document are: (1) to define what is meant by "blackboard systems," and (2) to show the richness and diversity of blackboard system designs. The article begins with a discussion of the underlying concept behind all blackboard systems, the blackboard model of problem solving. In order to bridge the gap between a model and working systems, the blackboard framework, an extension of the basic blackboard model, is introduced, including a detailed description of the model's components and their behavior. A model does not come into existence on its own and is usually an abstraction of many examples. In Section 2, the history of ideas is traced and the designs of some application systems that helped shape the blackboard model are detailed. We then describe and contrast existing blackboard systems. Blackboard systems can generally be divided into two categories: application systems and skeletal systems. In application systems the blackboard system components are integrated with the domain knowledge required to solve the problem at hand. Skeletal systems are devoid of domain knowledge and, as the name implies, consist of the essential system components from which application systems can be built by the addition of knowledge and the specification of control (i.e. meta-knowledge). Application systems will be discussed in Section 3, and skeletal systems will be discussed elsewhere. In Section 3.6 we summarize the features of the application systems, and in Section 4 we present the author's perspective on the utility of the blackboard approach to problem solving and knowledge engineering.
CS-TR-86-1123

Report Number: CS-TR-86-1124
Institution: Stanford University, Department of Computer Science
Title: Efficient Matching Algorithms for the SOAR/OPS5 Production System
Author: Scales, Daniel J.
Date: June 1986
Abstract: SOAR is a problem-solving and learning program intended to exhibit intelligent behavior. SOAR uses a modified form of the OPS5 production system for storage of and access to long-term knowledge. As with most programs which use production systems, the match phase of SOAR's production system dominates all other SOAR processing. This paper describes the results of an investigation of various ways of speeding up the matching process in SOAR through additions and changes to the OPS5 matching algorithm.
CS-TR-86-1124

Report Number: CS-TR-86-1125
Institution: Stanford University, Department of Computer Science
Title: The CAOS System
Author: Schoen, Eric
Date: March 1986
Abstract: The CAOS system is a framework designed to facilitate the development of highly concurrent real-time signal interpretation applications. It explores the potential of multiprocessor architectures to improve the performance of expert systems in the domain of signal interpretation. CAOS is implemented in Lisp on a (simulated) collection of processor-memory sites, linked by a high-speed communications subsystem. The "virtual machine" on which it depends provides remote evaluation and packet-based message exchange between processes, using virtual circuits known as streams. To this presentation layer, CAOS adds (1) a flexible process scheduler, and (2) an object-centered notion of agents, dynamically-instantiable entities which model interpreted signal features. This report documents the principal ideas, programming model, and implementation of CAOS. A model of real-time signal interpretation, based on replicated "abstraction" pipelines, is presented. For some applications, this model offers a means by which large numbers of processors may be utilized without introducing synchronization-necessitated software bottlenecks. The report concludes with a description of the performance of a large CAOS application over various sizes of multiprocessor configurations. Lessons about problem decomposition grain size, global problem solving control strategy, and appropriate service provided to CAOS by the underlying architecture are discussed.
CS-TR-86-1125

Report Number: CS-TR-86-1126
Institution: Stanford University, Department of Computer Science
Title: CAREL: A Visible Distributed Lisp
Author: Davies, Byron
Date: March 1986
Abstract: CAREL is a Lisp implementation designed to be a high-level interactive systems programming language for a distributed-memory multiprocessor. CAREL insulates the user from the machine language of the multiprocessor architecture, but still makes it possible for the user to specify explicitly the assignment of tasks to processors in the multiprocessor network. CAREL has been implemented to run on a TI Explorer Lisp machine using Stanford's CARE multiprocessor simulator. CAREL is more than a language: real-time graphical displays provided by the CARE simulator make CAREL a novel graphical programming environment for distributed computing. CAREL enables the user to create programs interactively and then watch them run on a network of simulated processors. As a CAREL program executes, the CARE simulator graphically displays the activity of the processors and the transmission of data through the network. Using this capability, CAREL has demonstrated its utility as an educational tool for multiprocessor computing.
CS-TR-86-1126

Report Number: CS-TR-86-1129
Institution: Stanford University, Department of Computer Science
Title: Beta Operations: Efficient Implementation of a Primitive Parallel Operation
Author: Cohn, Evan R.
Date: August 1986
Abstract: We will consider the primitive parallel operation of the Connection Machine, the Beta Operation. Let the input size of the problem be N and the output size be M. We will show how to perform the Beta Operation on an N-node hypercube in $O(\log N + \log^2 M)$ time. For a $\sqrt{N} \times \sqrt{M}$ mesh-of-trees, we require $O(\log N + \sqrt{M})$ time.
CS-TR-86-1129
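The $O(\log N)$ term in the hypercube bound reflects standard logarithmic-depth combining across the cube's dimensions. The Python sketch below (an illustration of that general technique, not the report's algorithm) simulates an all-to-all combine on an N-node hypercube: in round d each node exchanges with its neighbor across dimension d, so after $\log_2 N$ rounds every node holds the combined value.

```python
def hypercube_reduce(values, combine):
    """Simulate a combine over an N-node hypercube (N a power of two).

    In each round, node i combines its value with that of node i ^ d,
    its neighbor across dimension d.  After log2(N) rounds every node
    holds combine(...) over all N initial values.
    """
    n = len(values)
    assert n & (n - 1) == 0, "node count must be a power of two"
    vals = list(values)
    d = 1
    while d < n:
        vals = [combine(vals[i], vals[i ^ d]) for i in range(n)]
        d <<= 1
    return vals

result = hypercube_reduce([1, 2, 3, 4, 5, 6, 7, 8], lambda a, b: a + b)
assert all(v == 36 for v in result)   # every node ends with the sum
```

The combining function must be associative and commutative for every node to converge on the same result; this is the same pattern underlying log-depth scans and reductions on hypercube networks generally.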

Report Number: CS-TR-86-1131
Institution: Stanford University, Department of Computer Science
Title: Processor Renaming in Asynchronous Environments
Author: Bar-Noy, Amotz
Author: Peleg, David
Date: September 1986
Abstract: Fischer, Lynch and Paterson proved that in a completely asynchronous system "weak agreement" cannot be achieved even in the presence of a single "benign" fault. Following the direction proposed in Attiya, Bar-Noy, Dolev and Koller (Aug 1986), we demonstrate the interesting fact that some weaker forms of processor cooperation are still achievable in such a situation, and in fact, even in the presence of up to t < n/2 such faulty processors. In particular, we show that n processors, each having a distinct name taken from an unbounded ordered domain, can individually choose new distinct names from a space of size n + t (where n is an obvious lower bound). In case the new names are required also to preserve the original order, we give an algorithm in which the space of new names is of size ${2^t}(n - t + 1) - 1$, which is tight.
CS-TR-86-1131

Report Number: CS-TR-86-1132
Institution: Stanford University, Department of Computer Science
Title: Optimizing Datalog Programs
Author: Sagiv, Yehoshua
Date: March 1986
Abstract: Datalog programs, i.e., Prolog programs without function symbols, are considered. It is assumed that a variable appearing in the head of a rule must also appear in the body of the rule. The input of a program is a set of ground atoms (which are given in addition to the program's rules) and, therefore, can be viewed as an assignment of relations to some of the program's predicates. Two programs are equivalent if they produce the same result for all possible assignments of relations to the extensional predicates (i.e., the predicates that do not appear as heads of rules). Two programs are uniformly equivalent if they produce the same result for all possible assignments of initial relations to all the predicates (i.e. both extensional and intensional). The equivalence problem for Datalog programs is known to be undecidable. It is shown that uniform equivalence is decidable, and an algorithm is given for minimizing a Datalog program under equivalence. A technique for removing parts of a program that are redundant under equivalence (but not under uniform equivalence) is developed. A procedure for testing uniform equivalence is also developed for the case in which the database satisfies some constraints.
CS-TR-86-1132
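The gap between equivalence and uniform equivalence can be illustrated with the two standard definitions of transitive closure (a hypothetical example in the spirit of the abstract, not taken from the report): the linear and nonlinear versions produce the same result from any extensional database, yet differ once an initial relation is also assigned to the intensional predicate t. A naive bottom-up evaluator in Python makes this concrete:

```python
def fixpoint(rules, facts):
    # naive bottom-up evaluation: apply every rule until nothing new appears
    db = set(facts)
    while True:
        new = set(db)
        for derive in rules:
            new |= derive(db)
        if new == db:
            return db
        db = new

def base(db):       # t(X,Y) :- e(X,Y).
    return {('t', x, y) for (p, x, y) in db if p == 'e'}

def linear(db):     # t(X,Y) :- e(X,Z), t(Z,Y).
    return {('t', x, y) for (p, x, z) in db if p == 'e'
                        for (q, z2, y) in db if q == 't' and z2 == z}

def nonlinear(db):  # t(X,Y) :- t(X,Z), t(Z,Y).
    return {('t', x, y) for (p, x, z) in db if p == 't'
                        for (q, z2, y) in db if q == 't' and z2 == z}

e = {('e', 1, 2), ('e', 2, 3)}
# equivalent: same answer from this (or any) extensional database
assert fixpoint([base, linear], e) == fixpoint([base, nonlinear], e)

# not uniformly equivalent: an initial t-relation separates the programs
t0 = {('t', 'a', 'b'), ('t', 'b', 'c')}
assert fixpoint([base, linear], t0) != fixpoint([base, nonlinear], t0)
```

From the initial t-facts alone, the linear program derives nothing new (its recursive rule needs an e-fact), while the nonlinear program derives t(a,c); this is exactly the kind of redundancy-under-equivalence-only that the abstract's minimization technique must distinguish.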

Report Number: CS-TR-86-1134
Institution: Stanford University, Department of Computer Science
Title: UIO: A Uniform I/O System Interface for Distributed Systems
Author: Cheriton, David R.
Date: November 1986
Abstract: A uniform I/O interface allows programs to be written relatively independent of specific I/O services and yet work with a wide variety of the I/O services available in a distributed environment. Ideally, the interface provides this uniform access without excessive complexity in the interface or loss of performance. However, a uniform interface does not arise from careful design of individual system interfaces alone; it requires explicit definition. In this paper, we describe the UIO (uniform I/O) system interface that has been used for the past five years in the V distributed operating system, focusing on the key design issues. This interface provides several extensions beyond the I/O interface of UNIX, including support for record I/O, locking, atomic transactions and replication, as well as attributes that indicate whether optional semantics and operations are available. We also describe our experience in using and implementing this interface with a variety of different I/O services plus the performance of both local and network I/O. We conclude that the UIO interface provides a uniform I/O system interface with significant functionality, wide applicability and no significant performance penalty.
CS-TR-86-1134

Report Number: CS-TR-86-1136
Institution: Stanford University, Department of Computer Science
Title: An Experiment in Knowledge-based Signal Understanding Using Parallel Architectures
Author: Brown, Harold
Author: Schoen, Eric
Author: Delagi, Bruce
Date: October 1986
Abstract: This report documents an experiment investigating the potential of a parallel computing architecture to enhance the performance of a knowledge-based signal understanding system. The experiment consisted of implementing and evaluating an application encoded in a parallel programming extension of Lisp and executing on a simulated multiprocessor system. The chosen application for the experiment was a knowledge-based system for interpreting pre-processed, passively acquired radar emissions from aircraft. The application was implemented in an experimental concurrent, asynchronous object-oriented framework. This framework, in turn, relied on the services provided by the underlying hardware system. The hardware system for the experiment was a simulation of various sized grids of processors with inter-processor communication via message-passing. The experiment investigated the effects of various high-level control strategies on the quality of the problem solution, the speedup of the overall system performance as a function of the number of processors in the grid, and some of the issues in implementing and debugging a knowledge-based system on a message-passing multiprocessor system. In this report we describe the software and (simulated) hardware components of the experiment and present the qualitative and quantitative experimental results.
CS-TR-86-1136

Report Number: CS-TR-86-1137
Institution: Stanford University, Department of Computer Science
Title: The Leaf File Access Protocol
Author: Mogul, Jeffrey
Date: December 1986
Abstract: Personal computers are superior to timesharing systems in many ways, but they are inferior in this respect: they make it harder for users to share files. A local area network provides a substrate upon which file sharing can be built; one must also have a protocol for sharing files. This report describes Leaf, one of the first protocols to allow remote access to files. Leaf is a remote file access protocol rather than a file transfer protocol. Unlike a file transfer protocol, which must create a complete copy of a file, a file access protocol provides random access directly to the file itself. This promotes sharing because it allows simultaneous access to a file by several remote users, and because it avoids the creation of new copies and the associated consistency-maintenance problem. The protocol described in this report is nearly obsolete. It is interesting for historical reasons, primarily because it was perhaps the first non-proprietary remote file access protocol actually implemented, and also because it serves as a case study in practical protocol design. The specification of Leaf is included as an appendix; it has not been widely available outside of Stanford.
CS-TR-86-1137

Report Number: CS-TR-86-1139
Institution: Stanford University, Department of Computer Science
Title: Local Shape from Specularity
Author: Healey, Glenn
Author: Binford, Thomas O.
Date: June 1986
Abstract: We show that highlights in images of objects with specularly reflecting surfaces provide significant information about the surfaces which generate them. A brief survey is given of specular reflectance models which have been used in computer vision and graphics. For our work, we adopt the Torrance-Sparrow specular model which, unlike most previous models, considers the underlying physics of specular reflection from rough surfaces. From this model we derive powerful relationships between the properties of a specular feature in an image and local properties of the corresponding surface. We show how this analysis can be used for both prediction and interpretation in a vision system. A shape from specularity system has been implemented to test our approach. The performance of the system is demonstrated by careful experiments with specularly reflecting objects.
CS-TR-86-1139

Report Number: CS-TR-86-1130
Institution: Stanford University, Department of Computer Science
Title: On Detecting Edges
Author: Nalwa, Vishvjit S.
Author: Binford, Thomas O.
Date: March 1986
Abstract: An edge in an image corresponds to a discontinuity in the intensity surface of the underlying scene. It can be approximated by a piecewise straight curve composed of edgels, i.e., short, linear edge-elements, each characterized by a direction and a position. The approach to edgel-detection here is to fit a series of one-dimensional surfaces to each window (kernel of the operator) and accept the surface-description which is adequate in the least squares sense and has the fewest parameters. (A one-dimensional surface is one which is constant along some direction.) The tanh is an adequate basis for the step-edge and its combinations are adequate for the roof-edge and the line-edge. The proposed method of step-edgel detection is robust with respect to noise; for (step-size/${\sigma}_{noise}$) >= 2.5, it has subpixel position localization (${\sigma}_{position}$ < 1/3) and an angular localization better than $10^{\circ}$; further, it is designed to be insensitive to smooth shading. These results are demonstrated by some simple analysis, statistical data and edgel-images. Also included is a comparison of performance on a real image with a typical operator (Difference-of-Gaussians). The results indicate that the proposed operator is superior with respect to detection, localization and resolution.
CS-TR-86-1130
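The step-edgel fit described in the abstract can be illustrated with a toy one-dimensional version: a tanh step model a*tanh((x - c)/w) + d is fit to an intensity profile by least squares, grid-searching candidate positions and widths and solving amplitude and offset in closed form. This is a minimal sketch of the general idea only, under assumed names; it is not the operator of the report.

```python
import math

def fit_step_edgel(profile, widths=(0.5, 1.0, 2.0)):
    """Fit I(x) ~ a*tanh((x - c)/w) + d to a 1-D intensity profile by
    least squares: grid-search center c and width w, then solve the
    amplitude a and offset d in closed form for each candidate basis."""
    n = len(profile)
    best = None
    for w in widths:
        for ci in range(n):
            c = ci + 0.5  # candidate (sub-pixel) edge position
            b = [math.tanh((x - c) / w) for x in range(n)]
            sb = sum(b)
            sbb = sum(v * v for v in b)
            sy = sum(profile)
            sby = sum(v * y for v, y in zip(b, profile))
            det = n * sbb - sb * sb  # normal-equation determinant
            if abs(det) < 1e-12:
                continue
            a = (n * sby - sb * sy) / det
            d = (sy - a * sb) / n
            err = sum((a * v + d - y) ** 2 for v, y in zip(b, profile))
            if best is None or err < best[0]:
                best = (err, a, c, w, d)
    return best  # (residual, amplitude, position, width, offset)

# noiseless synthetic step edge centered at x = 4.5 with unit width
edge = [math.tanh((x - 4.5) / 1.0) for x in range(10)]
err, a, c, w, d = fit_step_edgel(edge)
```

On this noiseless profile the search recovers the edge position exactly; the report's point is that the same model class stays well-localized even at modest signal-to-noise ratios.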

Report Number: CS-TR-87-1142
Institution: Stanford University, Department of Computer Science
Title: A Heuristic Refinement for Spatial Constraint Satisfaction Problems
Author: Brinkley, J.
Author: Buchanan, B.
Author: Altman, R.
Author: Duncan, B.
Author: Cornelius, C.
Date: January 1987
Abstract: The problem of arranging a set of physical objects according to a set of constraints is formulated as a geometric constraint satisfaction problem (GCSP), in which the variables are the objects, the possible locations of the objects are the possible values for the variables, and the constraints are geometric constraints between objects. A GCSP is a type of multidimensional constraint satisfaction problem in which the number of objects and/or the number of possible locations per object is too large to permit direct solution by backtrack search. A method is described for reducing these numbers by refinement along two dimensions. The number of objects is reduced by refinement of the structure, representing a group of objects as a single abstract object before considering each object individually. The abstraction used depends on domain-specific knowledge. The number of locations per object is reduced by applying node and arc consistency algorithms to refine the accessible volume of each object. Heuristics are employed to control the order of operations (and hence to affect the efficiency of search) but not to change the correctness in the sense that no solutions that would be found by backtrack search are eliminated. Application of the method to the problem of protein structure determination is described.
CS-TR-87-1142
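The accessible-volume refinement by arc consistency that the abstract describes can be sketched with a standard AC-3 loop over discretized candidate locations. This is a generic illustration under assumed names (`ac3`, `domains`, `constraints`), not the authors' implementation.

```python
from collections import deque

def ac3(domains, constraints):
    """Refine candidate locations by arc consistency (AC-3).
    domains: {object: set of candidate locations}
    constraints: {(x, y): predicate}; a location a survives for x only
    if some location b for y satisfies the (x, y) constraint."""
    arcs = deque(constraints)
    while arcs:
        x, y = arcs.popleft()
        pred = constraints[(x, y)]
        pruned = {a for a in domains[x]
                  if not any(pred(a, b) for b in domains[y])}
        if pruned:
            domains[x] -= pruned
            # x's domain shrank: recheck every arc pointing at x
            arcs.extend(arc for arc in constraints if arc[1] == x)
    return domains

# toy 1-D placement: object B must lie within distance 1 of object A
near = lambda a, b: abs(a - b) <= 1
domains = {"A": {0, 5}, "B": {1, 2, 9}}
constraints = {("A", "B"): near, ("B", "A"): near}
ac3(domains, constraints)
```

As in the report, the refinement only discards locations that could never appear in any solution, so backtrack search over the reduced domains loses no solutions.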

Report Number: CS-TR-87-1144
Institution: Stanford University, Department of Computer Science
Title: Considerations for Multiprocessor Topologies
Author: Byrd, Gregory
Author: Delagi, Bruce
Date: January 1987
Abstract: Choosing a multiprocessor interconnection topology may depend on high-level considerations, such as the intended application domain and the expected number of processors. It certainly depends on low-level implementation details, such as packaging and communications protocols. We first use rough measures of cost and performance to characterize several topologies. We then examine how implementation details can affect the realizable performance of a topology.
CS-TR-87-1144

Report Number: CS-TR-87-1146
Institution: Stanford University, Department of Computer Science
Title: A Point-to-Point Multicast Communications Protocol
Author: Byrd, Gregory
Author: Nakano, Russell
Author: Delagi, Bruce
Date: February 1987
Abstract: Many network topologies have been proposed for connecting a large number of processor-memory pairs in a high-performance multiprocessor system. In terms of performance, however, the communications protocol decisions may be as crucial as topology. This paper describes a protocol to support point-to-point interprocessor communications with multicast. Dynamic, cut-through routing with local flow control is used to provide a high-throughput, low-latency communications path between processors. In addition, multicast transmissions are available, in which copies of a packet are sent to multiple destinations using common resources as much as possible. Special packet terminators and selective buffering are introduced to avoid deadlock during multicasts. A simulated implementation of the protocol is also described.
CS-TR-87-1146

Report Number: CS-TR-87-1147
Institution: Stanford University, Department of Computer Science
Title: A Layered Environment for Reasoning about Action
Author: Hayes-Roth, B.
Author: Garvey, A.
Author: Johnson, M. V.
Author: Hewett, M.
Date: November 1986
Abstract: An intelligent system reasons about -- controls, explains, learns about -- its actions, thereby improving its efforts to achieve goals and function in its environment. In order to perform effectively, a system must have knowledge of the actions it can perform, the events and states that occur, and the relationships among instances of those actions, events and states. We represent such knowledge in a hierarchy of knowledge abstractions and impose uniform standards of knowledge content and representation on modules within each hierarchical level. We refer to the evolving set of such modules as the BB* environment. To illustrate, we describe selected elements of BB*: * the foundational BB1 architecture * the ACCORD framework for solving arrangement problems by means of an assembly method * two applications of BB1-ACCORD, the PROTEAN system for modeling protein structures and the SIGHTPLAN system for designing construction-site layouts * two hypothetical multifaceted systems that integrate ACCORD, PROTEAN and SIGHTPLAN with other possible BB* frameworks and applications.
CS-TR-87-1147

Report Number: CS-TR-87-1148
Institution: Stanford University, Department of Computer Science
Title: An Instrumented Architectural Simulation System
Author: Delagi, B.
Author: Saraiya, N.
Author: Nishimura, S.
Author: Byrd, G.
Date: January 1987
Abstract: Simulation of systems at an architectural level can offer an effective way to study critical design choices if (1) the performance of the simulator is adequate to examine designs executing significant code bodies -- not just toy problems or small application fragments; (2) the details of the simulation include the critical details of the design; (3) the view of the design presented by the simulator instrumentation leads to useful insights on the problems with the design; and (4) there is enough flexibility in the simulation system so that the asking of unplanned questions is not suppressed by the weight of the mechanics involved in making changes either in the design or its measurement. A simulation system with these goals is described together with the approach to its implementation. Its application to the study of a particular class of multiprocessor hardware system architectures is illustrated.
CS-TR-87-1148

Report Number: CS-TR-87-1149
Institution: Stanford University, Department of Computer Science
Title: Proceedings from the Nineteenth Annual Meeting of the Stanford Computer Forum
Author: MacMillen, K.
Author: Diaz-Barriga, A.
Author: Tajnai, C.
Date: February 1987
Abstract: Operating for almost two decades, the Stanford Computer Forum is a cooperative venture of the Computer Science Department and the Computer Systems Laboratory (a laboratory operated jointly by the Computer Science and Electrical Engineering Departments). CSD and CSL are internationally recognized for their excellence; their faculty members, research staff, and students are widely known for leadership in developing new ideas and trends in the organization, design and use of computers. They are in the forefront of applying research results to a wide range of applications. The Forum holds an annual meeting in February to which three representatives of each member company are invited. The meeting lasts two days and features technical sessions at which timely computer research at Stanford is described by advanced graduate students and faculty members. There are opportunities for informal discussions to complement the presentations. This report includes information on the Forum, the program, abstracts of the talks and viewgraphs used in the presentations.
CS-TR-87-1149

Report Number: CS-TR-87-1153
Institution: Stanford University, Department of Computer Science
Title: Optimum Grip of a Polygon
Author: Markenscoff, Xanthippi
Date: April 1987
Abstract: It has been shown by Baker, Fortune and Grosse that any two-dimensional polygonal object can be prehended stably with three fingers, so that its weight (along the third dimension) is balanced. In this paper we also show that form closure of a polygonal object can be achieved with four fingers (previous proofs were not complete). We formulate and solve the problem of finding the optimum stable grip or form closure of any given polygon. For stable grip it is most natural to minimize the forces needed to balance through friction the object's weight along the third dimension. For form closure, we minimize the worst-case forces needed to balance any unit force acting on the center of gravity of the object. The mathematical techniques used in the two instances are an interesting mix of optimization and Euclidean geometry. Our results lead to algorithms for the efficient computation of the optimum grip in each case.
CS-TR-87-1153

Report Number: CS-TR-87-1154
Institution: Stanford University, Department of Computer Science
Title: A Programming and Problem-Solving Seminar
Author: Rokicki, T. G.
Author: Knuth, D. E.
Date: April 1987
Abstract: This report contains edited transcripts of the discussions held in Stanford's course CS304, Problem Seminar, during winter quarter 1987. Since the topics span a large range of ideas in computer science, and since most of the important research paradigms and programming paradigms were touched on during the discussions, these notes may be of interest to graduate students of computer science at other universities, as well as to their professors and to professional people in the "real world." The present report is the seventh in a series of such transcripts, continuing the tradition established in STAN-CS-77-606 (Michael J. Clancy, 1977), STAN-CS-79-707 (Chris Van Wyk, 1979), STAN-CS-81-863 (Allan A. Miller, 1981), STAN-CS-83-989 (Joseph S. Weening, 1983), STAN-CS-83-990 (John D. Hobby, 1983), and STAN-CS-85-1055 (Ramsey W. Haddad, 1985).
CS-TR-87-1154

Report Number: CS-TR-87-1155
Institution: Stanford University, Department of Computer Science
Title: Experiments in Automatic Theorem Proving
Author: Bellin, G.
Author: Ketonen, J.
Date: December 1986
Abstract: The experiments described in this report are proofs in EKL of properties of different LISP programs operating on different representations of the same mathematical structures -- finite permutations. EKL is an interactive proof checker based upon the language of higher order logic, higher order unification and a decision procedure for a fragment of first order logic. The following questions are asked: What representations of mathematical structure and facts are better suited for formalization and also applicable to several interesting situations? What methods and strategies will make it possible to prove automatically an extensive body of mathematical knowledge? Can higher order logic be conveniently applied in the proof of elementary facts? The fact (*) that finite permutations form a group is proved from the axioms of arithmetic and elementary set theory, via the "Pigeon Hole Principle" (PHP). Permutations are represented (1) as association lists and (2) as lists of numbers. In representation (2) operations on permutations are represented (2.1) using predicates (2.2) using functions. Proofs of (*) using the different representations are compared. The results and conclusions include the following. Methods to control the rewriting process and to replace logic inference by high order rewriting are presented. PHP is formulated as a second order statement which is then easily applied to (1) and (2). This demonstrates the value of abstract, higher order formulation of facts for application in different contexts. A case is given in which representation of properties of programs by predicates may be more convenient than by functions. Evidence is given that convenient organization of proofs into lemmata is essential for large scale computer aided theorem proving.
CS-TR-87-1155

Report Number: CS-TR-87-1156
Institution: Stanford University, Department of Computer Science
Title: The Dynamic Tree Expression Problem
Author: Mayr, Ernst W.
Date: May 1987
Abstract: Presented is a uniform method for obtaining efficient parallel algorithms for a rather large class of problems. The method is based on a logic programming model, and it derives its efficiency from fast parallel routines for the evaluation of expression trees.
CS-TR-87-1156

Report Number: CS-TR-87-1157
Institution: Stanford University, Department of Computer Science
Title: Network Implementation of the DTEP Algorithm
Author: Mayr, E. W.
Author: Plaxton, C. G.
Date: May 1987
Abstract: The dynamic tree expression problem (DTEP) was defined in [Ma87]. In this paper, efficient implementations of the DTEP algorithm are developed for the hypercube, butterfly, perfect shuffle and multidimensional mesh of trees families of networks.
CS-TR-87-1157

Report Number: CS-TR-87-1159
Institution: Stanford University, Department of Computer Science
Title: Muir: A Tool for Language Design
Date: May 1987
Abstract: Muir is a language design environment, intended for use in creating and experimenting with languages such as programming languages, specification languages, grammar formalisms, and logical notations. It provides facilities for a language designer to create a language specification, which controls the behavior of generic language manipulating tools typically found in a language-specific environment, such as structure editors, interactive interfaces, storage management and attribute analysis. It is oriented towards use with evolving languages, providing for mixed structures (combining different versions), semi-automated updating of structures from one language version to another, and incremental language specification. A new hierarchical grammar formalism serves as the framework for language specification, with multiple presentation formalisms and a unified interactive environment based on an extended notion of edit operations. A prototype version is operating and has been tested on a small number of languages.
CS-TR-87-1159

Report Number: CS-TR-87-1160
Institution: Stanford University, Department of Computer Science
Title: Strategic Computing Research and the Universities
Date: March 1987
Abstract: The Strategic Computing Initiative offers the potential of new research funds for university computer science departments. As with all funds, they bring benefits and can have unwanted strings attached. In the case of military funding, the web of attached strings can be subtle and confusing. The goal of this paper is to delineate some of these entanglements and perhaps provide some guidance for loosening and eliminating them.
CS-TR-87-1160

Report Number: CS-TR-87-1166
Institution: Stanford University, Department of Computer Science
Title: Parallel Execution of OPS5 in QLISP
Author: Okuno, H. G.
Author: Gupta, A.
Date: June 1987
Abstract: Production systems (or rule-based systems) are widely used for the development of expert systems. To speed up the execution of production systems, a number of different approaches are being taken, a majority of them being based on the use of parallelism. In this paper, we explore the issues involved in the parallel implementation of OPS5 (a widely used production-system language) in QLISP (a parallel dialect of Lisp proposed by John McCarthy and Richard Gabriel). This paper shows that QLISP can easily encode most sources of parallelism in OPS5 that have been previously discussed in literature. This is significant because the OPS5 interpreter is the first large program to be encoded in QLISP, and as a result, this is the first practical demonstration of the expressive power of QLISP. The paper also lists the most commonly used QLISP constructs in the parallel implementation (and the contexts in which they are used), which serve as a hint to the QLISP implementor about what to optimize. Also discussed is the exploitation of speculative parallelism in RHS-evaluation for OPS5. This has not been previously discussed in the literature.
CS-TR-87-1166

Report Number: CS-TR-87-1168
Institution: Stanford University, Department of Computer Science
Title: Representing Control Knowledge as Abstract Task and Metarules
Author: Clancey, W. J.
Author: Bock, C.
Date: April 1985
Abstract: A poorly designed knowledge base can be as cryptic as an arbitrary program and just as difficult to maintain. Representing inference procedures abstractly, separately from domain facts and relations, makes the design more transparent and explainable. The combination of abstract procedures and a relational language for organizing domain knowledge provides a generic framework for constructing knowledge bases for related problems in other domains and also provides a useful starting point for studying the nature of strategies. In HERACLES, inference procedures are represented as abstract metarules, expressed in a form of the predicate calculus, organized and controlled as rule sets. A compiler converts the rules into Lisp code and allows domain relations to be encoded as arbitrary data structures for efficiency. Examples are given of the explanation and teaching capabilities afforded by this representation. Different perspectives for understanding HERACLES' inference procedure and how it defines knowledge bases are discussed in some detail.
CS-TR-87-1168

Report Number: CS-TR-87-1170
Institution: Stanford University, Department of Computer Science
Title: Viewing Knowledge Bases as Qualitative Models
Author: Clancey, William J.
Date: May 1986
Abstract: The concept of a qualitative model provides a unifying perspective for understanding how expert systems differ from conventional programs. Knowledge bases contain qualitative models of systems in the world, that is, primarily non-numeric descriptions that provide a basis for explaining and predicting behavior and formulating action plans. The prevalent view that a qualitative model must be a simulation, to the exclusion of prototypic and behavioral descriptions, has fragmented our field, so that we have failed to usefully synthesize what we have learned about modeling processes. For example, our ideas about "scoring functions" and "causal network traversal," developed apart from a modeling perspective, have obscured the inherent explanatory nature of diagnosis. While knowledge engineering has greatly benefited from the study of human experts as a means of informing model construction, overemphasis on modeling the expert's knowledge has detracted from the primary objective of modeling a system in the world. Placing AI squarely in the evolutionary line of teleologic and topologic modeling, this talk argues that the study of network representations has established a foundation for a science and engineering of qualitative models.
CS-TR-87-1170

Report Number: CS-TR-87-1173
Institution: Stanford University, Department of Computer Science
Title: Review of Winograd and Flores' Understanding Computers and Cognition
Author: Clancey, William J.
Date: July 1986
Abstract: AI researchers and cognitive scientists commonly believe that thinking involves manipulating representations. Thinking involves search, inference, and making choices. This is how we model reasoning, and what goes on in the brain is similar. Winograd and Flores present a radically different view. They claim that our knowledge is not represented in the brain at all, but rather consists of an unformalized shared background, from which we articulate representations in order to cope with new situations. In contrast, computer programs contain only pre-selected objects and properties, and there is no basis for moving beyond this initial formalization when breakdown occurs. Winograd and Flores provide convincing arguments with examples familiar to most AI researchers. However, they significantly understate the role of representation in mediating intelligent behavior, specifically in the process of reflection, when representations are generated prior to physical action. Furthermore, they do not consider the practical benefits of expert systems and the extent of what can be accomplished. Nevertheless, the book is crisp and stimulating. It should make AI researchers more cautious about what they are doing, more aware of the nature of formalization, and more open to alternative views.
CS-TR-87-1173

Report Number: CS-TR-87-1174
Institution: Stanford University, Department of Computer Science
Title: Intelligent Tutoring Systems: A Tutorial Survey
Author: Clancey, William J.
Date: September 1986
Abstract: This survey of Intelligent Tutoring Systems is based on a tutorial originally presented by John Seely Brown, Richard R. Burton (Xerox - PARC, USA) and William J. Clancey at the National Conference on AI (AAAI) in Austin, TX in August, 1984. The survey describes the components of tutoring systems, different teaching scenarios, and their relation to a theory of instruction. The underlying pedagogical approach is to make latent knowledge manifest, which the research accomplishes by different forms of qualitative modeling: simulating physical processes; simulating expert problem-solving, including strategies for monitoring and controlling problem solving (metacognition); modeling the plans behind procedural behavior; and forcing articulation of model inconsistencies through the Socratic method of instruction.
CS-TR-87-1174

Report Number: CS-TR-87-1177
Institution: Stanford University, Department of Computer Science
Title: Log Files: An Extended File Service Exploiting Write-Once Storage
Author: Finlayson, R. S.
Author: Cheriton, D. R.
Date: August 1987
Abstract: A log service provides efficient storage and retrieval of data that is written sequentially (append-only) and not subsequently modified. Application programs and subsystems use log services for recovery, to record security audit trails, and for performance monitoring. Ideally, a log service should accommodate very large, long-lived logs, and provide efficient retrieval and low space overhead. In this paper, we describe the design and implementation of the Clio log service. Clio provides the abstraction of log files: readable, append-only files that are accessed in the same way as conventional files. The underlying storage medium is required only to be append-only; more general types of write access are not necessary. We show how log files can be implemented efficiently and robustly on top of such storage media - in particular, write-once optical disk. In addition, we describe a general application software storage architecture that makes use of log files.
CS-TR-87-1177
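The log-file abstraction, readable and append-only with random access by offset, can be sketched in a few lines. This in-memory stand-in is illustrative only; `LogFile`, `append`, and `read` are assumed names, not Clio's actual interface.

```python
class LogFile:
    """Sketch of a log file: readable, append-only, with random access
    by the offset handed back at append time. An in-memory bytearray
    stands in for the write-once storage medium."""
    def __init__(self):
        self._store = bytearray()

    def append(self, record: bytes) -> int:
        offset = len(self._store)  # writes only ever go at the end
        # length-prefix each record so reads know where it stops
        self._store += len(record).to_bytes(4, "big") + record
        return offset              # stable handle: records never move

    def read(self, offset: int) -> bytes:
        n = int.from_bytes(self._store[offset:offset + 4], "big")
        return bytes(self._store[offset + 4:offset + 4 + n])

log = LogFile()
h1 = log.append(b"audit: alice logged in")
h2 = log.append(b"audit: alice logged out")
```

Because records are immutable once written, the offsets behave like the stable handles the abstract describes: readers can retrieve any record at any time without coordinating with appenders.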

Report Number: CS-TR-87-1175
Institution: Stanford University, Department of Computer Science
Title: Using and Evaluating Differential Modeling in Intelligent Tutoring and Apprentice Learning Systems
Author: Wilkins, D. C.
Date: January 1987
Abstract: A powerful approach to debugging and refining the knowledge structures of a problem solving agent is to differentially model the actions of the agent against a gold standard. This paper proposes a framework for exploring the inherent limitations of such an approach when a problem solver is differentially modeled against an expert system. A procedure is described for determining a performance upper bound for debugging via differential modeling, called the synthetic agent method. The synthetic agent method systematically explores the space of near miss training instances and expresses the limits of debugging in terms of the knowledge representation and control language constructs of the expert system.
CS-TR-87-1175

Report Number: CS-TR-87-1178
Institution: Stanford University, Department of Computer Science
Title: A Dynamic, Cut-Through Communications Protocol with Multicast
Author: Byrd, G. T.
Author: Nakano, R.
Author: Delagi, B. A.
Date: September 1987
Abstract: This paper describes a protocol to support point-to-point interprocessor communications with multicast. Dynamic, cut-through routing with local flow control is used to provide a high-throughput, low-latency communications path between processors. In addition, multicast transmissions are available, in which copies of a packet are sent to multiple destinations using common resources as much as possible. Special packet terminators and selective buffering are introduced to avoid deadlock during multicasts. A simulated implementation of the protocol is also described.
CS-TR-87-1178

Report Number: CS-TR-87-1180
Institution: Stanford University, Department of Computer Science
Title: Bibliography; Department of Computer Science Technical Reports, 1963-1988
Author: Na, Taleen M.
Date: January 1988
Abstract: This report lists, in chronological order, all reports published by the Stanford Computer Science Department (CSD) since 1963. Each report is identified by CSD number, author's name, title, number of pages, and date. If a given report is available from the department at the time of the Bibliography's printing, price is listed. For convenience, an author index, ordering information, codes, and alternative sources are also included.
CS-TR-87-1180

Report Number: CS-TR-87-1181
Institution: Stanford University, Department of Computer Science
Title: On Debugging Rule Sets When Reasoning Under Uncertainty
Author: Wilkins, D. C.
Author: Buchanan, B. G.
Date: May 1987
Abstract: Heuristic inference rules with a measure of strength less than certainty have an unusual property: better individual rules do not necessarily lead to a better overall rule set. All less-than-certain rules contribute evidence towards erroneous conclusions for some problem instances, and the distribution of these erroneous conclusions over the instances is not necessarily related to individual rule quality. This has important consequences for automatic machine learning of rules, since rule selection is usually based on measures of quality of individual rules. In this paper, we explain why the most obvious and intuitively reasonable solution to this problem, incremental modification and deletion of rules responsible for wrong conclusions a la Teiresias, is not always appropriate. In our experience, it usually fails to converge to an optimal set of rules. Given a set of heuristic rules, we explain why the best rule set should be considered to be the element of the power set of rules that yields a global minimum error with respect to generating erroneous positive and negative conclusions. This selection process is modeled as a bipartite graph minimization problem and shown to be NP-complete. A solution method is described, the Antidote Algorithm, that performs a model-directed search of the rule space. On an example from medical diagnosis, the Antidote Algorithm significantly reduced the number of misdiagnoses when applied to a rule set generated from 104 training instances.
CS-TR-87-1181
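The selection objective, finding the element of the power set of rules that minimizes erroneous positive and negative conclusions over the training instances, can be illustrated by brute force on a tiny rule set. The Antidote Algorithm itself performs a model-directed search rather than this exhaustive enumeration, and all names here are hypothetical.

```python
from itertools import combinations

def best_rule_subset(rules, cases, threshold=0.0):
    """Exhaustively search the power set of rules for the subset whose
    combined evidence makes the fewest wrong positive/negative calls.
    rules: list of (name, predicate, weight); cases: (facts, label)."""
    best_err, best_set = len(cases) + 1, ()
    for r in range(len(rules) + 1):
        for subset in combinations(rules, r):
            err = sum(
                (sum(w for _, p, w in subset if p(facts)) > threshold)
                != label
                for facts, label in cases)
            if err < best_err:
                best_err, best_set = err, subset
    return best_err, best_set

# each rule looks individually plausible, but together they over-diagnose
rules = [
    ("fever->flu", lambda f: "fever" in f, 0.6),
    ("cough->flu", lambda f: "cough" in f, 0.6),
]
cases = [
    ({"fever", "cough"}, True),
    ({"cough"}, False),   # cough alone is not flu in this toy data
    ({"fever"}, True),
]
err, chosen = best_rule_subset(rules, cases)
```

Here the individually sensible cough rule is dropped: combined with the fever rule it produces a false positive on the second case, which is exactly the phenomenon the abstract describes when it says better individual rules need not yield a better set.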

Report Number: CS-TR-87-1182
Institution: Stanford University, Department of Computer Science
Title: Knowledge Base Refinement by Monitoring Abstract Control Knowledge
Author: Wilkins, D. C.
Author: Buchanan, B. G.
Date: August 1987
Abstract: An explicit representation of the problem solving method of an expert system shell as abstract control knowledge provides a powerful foundation for learning. This paper describes the abstract control knowledge of the Heracles expert system shell for heuristic classification problems, and describes how the Odysseus apprenticeship learning program uses this representation to automate "end-game" knowledge acquisition. Particular emphasis is given to showing how abstract control knowledge facilitates the use of underlying domain theories by a learning program.
CS-TR-87-1182

Report Number: CS-TR-87-1183
Institution: Stanford University, Department of Computer Science
Title: The Knowledge Engineer as Student: Metacognitive bases for asking good questions
Author: Clancey, W. J.
Date: January 1987
Abstract: Knowledge engineers are efficient, active learners. They systematically approach domains and acquire knowledge to solve routine, practical problems. By modeling their methods, we may develop a basis for teaching other students how to direct their own learning. In particular, a knowledge engineer is good at detecting gaps in a knowledge base and asking focused questions to improve an expert system's performance. This ability stems from domain-general knowledge about: problem-solving procedures, the categorization of routine problem-solving knowledge, and domain and task differences. This paper studies these different forms of metaknowledge, and illustrates their incorporation in an intelligent tutoring system. A model of learning is presented that describes how the knowledge engineer detects problem-solving failures and tracks them back to gaps in domain knowledge, which are then reformulated as questions to ask a teacher. We describe how this model of active learning is being developed and tested in a knowledge acquisition program for an expert system.
CS-TR-87-1183

Report Number: CS-TR-87-1184
Institution: Stanford University, Department of Computer Science
Title: Firmware Approach to Fast Lisp Interpreter
Author: Okuno, H.
Author: Osato, N.
Author: Takeuchi, I.
Date: September 1987
Abstract: The approach to speed up a Lisp interpreter by implementing it in firmware seems promising. A microcoded Lisp interpreter shows good performance for very simple benchmarks, while it often fails to provide good performance for larger benchmarks and applications unless speedup techniques are devised for it. This was the case for the TAO/ELIS system. This paper describes various techniques devised for the TAO/ELIS system in order to speed up the interpreter of the TAO language implemented on the ELIS Lisp machine. The techniques include data type dispatch, variable access, function call and so on. TAO is not only upward compatible with Common Lisp, but also incorporates logic programming, object-oriented programming and Fortran/C-like programming into Lisp programming. TAO also provides concurrent programming and supports multiple users (up to eight users). The TAO interpreter for those programming paradigms is coded fully in microcodes. In spite of rich functionalities, the speed of interpreted codes of TAO is comparable to that of compiled codes of commercial Lisp machines. Furthermore, the speeds of the interpreted codes of the same program written in various programming paradigms in TAO do not differ much. This speed balance is very important for the user. Another outstanding feature of the TAO/ELIS system is its firmware development environments. Micro Assembler and Linker are written in TAO, which enables the user to use the capability of TAO in microcodes. Since debugging tools are also written in mini-Lisp, many new tools were developed in parallel to debugging of microcodes. This high level approach to firmware development environments is very important to provide high productivity of development.
CS-TR-87-1184

Report Number: CS-TR-87-1185
Institution: Stanford University, Department of Computer Science
Title: Blazenet: A Photonic Implementable Wide-Area Network
Author: Haas, Z.
Author: Cheriton, D. R.
Date: December 1987
Abstract: High-performance wide-area networks are required to interconnect clusters of computers connected by local area and metropolitan area networks. Optical fiber technology provides long distance channels in the multi-gigabit per second range. The challenge is to provide switching nodes that handle these data rates with minimum delay, and at a reasonable cost. In this paper, we describe a packet switching network, christened Blazenet, that provides low delay and has minimal memory requirements. It can be extended to support multicast and priority delivery. Such a network can revolutionize the opportunities for distributed command and control, information and resources sharing, real-time conferencing, and wide-area parallel computation, to mention but a few applications.
CS-TR-87-1185

Report Number: CS-TR-87-1186
Institution: Stanford University, Department of Computer Science
Title: A Hierarchy of Temporal Properties
Author: Manna, Zohar
Author: Pnueli, Amir
Date: October 1987
Abstract: We propose a classification of temporal properties into a hierarchy which refines the known safety-liveness classification of properties. The new classification recognizes the classes of safety, guarantee, persistence, fairness, and hyper-fairness. The classification suggested here is based on the different ways a property of finite computations can be extended into a property of infinite computations. For properties that are expressible by temporal logic and predicate automata, we provide a syntactic characterization of the formulae and automata that specify properties in the different classes. We consider the verification of properties over a given program, and provide a unique proof principle for each class.
CS-TR-87-1186

Report Number: CS-TR-87-1188
Institution: Stanford University, Department of Computer Science
Title: Experiments with a Knowledge-Based System on a Multiprocessor
Author: Nakano, Russell
Author: Minami, Masafumi
Date: October 1987
Abstract: This paper documents the results we obtained and the lessons we learned in the design, implementation, and execution of a simulated real-time application on a simulated parallel processor. Specifically, our parallel program ran 100 times faster on a 100-processor multiprocessor than on a single processor. The machine architecture is a distributed-memory multiprocessor. The target machine consists of 10 to 1000 processors, but because of simulator limitations, we ran simulations of machines consisting of 1 to 100 processors. Each processor is a computer with its own local memory, executing an independent instruction stream. There is no global shared memory; all processes communicate by message passing. The target programming environment, called Lamina, encourages a programming style that stresses performance gains through problem decomposition, allowing many processors to be brought to bear on a problem. The key is to distribute the processing load over replicated objects, and to increase throughput by building pipelined sequences of objects that handle stages of problem solving. We focused on a knowledge-based application that simulates real-time understanding of radar tracks, called Airtrac. This paper describes a portion of the Airtrac application implemented in Lamina and a set of experiments that we performed. We confirmed the following hypotheses: (1) the performance of our concurrent program improves with additional processors, and thereby attains a significant level of speedup; (2) the correctness of our concurrent program can be maintained despite a high degree of problem decomposition and highly overloaded input data conditions.
CS-TR-87-1188

Report Number: CS-TR-87-1189
Institution: Stanford University, Department of Computer Science
Title: Instrumented Architectural Simulation
Author: Delagi, Bruce A.
Author: Saraiya, Nakul
Author: Nishimura, Sayuri
Author: Byrd, Greg
Date: November 1987
Abstract: Simulation of systems at an architectural level can offer an effective way to study critical design choices if (1) the performance of the simulator is adequate to examine designs executing significant code bodies -- not just toy problems or small application fragments, (2) the details of the simulation include the critical details of the design, (3) the view of the design presented by the simulator instrumentation leads to useful insights on the problems with the design, and (4) there is enough flexibility in the simulation system so that the asking of unplanned questions is not suppressed by the weight of the mechanics involved in making changes either in the design or in its measurement. A simulation system with these goals is described, together with the approach to its implementation. Its application to the study of a particular class of multiprocessor hardware system architectures is illustrated.
CS-TR-87-1189

Report Number: CS-TR-88-1193
Institution: Stanford University, Department of Computer Science
Title: Mathematical Writing
Author: Knuth, Donald E.
Author: Larrabee, Tracy
Author: Roberts, Paul M.
Date: January 1988
Abstract: This report is based on a course of the same name given at Stanford University during autumn quarter, 1987. Here's the catalog description: CS 209. Mathematical Writing--Issues of technical writing and the effective presentation of mathematics and computer science. Preparation of theses, papers, books, and "literate" computer programs. A term paper on a topic of your choice; this paper may be used for credit in another course.
CS-TR-88-1193

Report Number: CS-TR-88-1195
Institution: Stanford University, Department of Computer Science
Author: Bar-Noy, A.
Author: Linial, N.
Author: Peleg, D.
Date: February 1988
Abstract: A radio network is a synchronous network of processors that communicate by transmitting messages to their neighbors, where a processor receives a message in a given step if and only if it is silent in this step and precisely one of its neighbors transmits. In this paper we prove the existence of a family of radius-2 networks on n vertices for which any broadcast schedule requires at least Omega((log n / log log n)^2) rounds of transmissions. This almost matches an upper bound of O(log^2 n) rounds for networks of radius 2 proved earlier by Bar-Yehuda, Goldreich, and Itai.
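The reception rule is concrete enough to simulate. The sketch below is a hypothetical illustration, not code from the report; `radio_round` and its arguments are our own names.

```python
def radio_round(adj, transmitters):
    """One synchronous round: a node receives a message iff it is silent
    this round and exactly one of its neighbors transmits.
    adj maps each node to its set of neighbors; transmitters is the set
    of nodes transmitting this round."""
    received = set()
    for v in adj:
        if v in transmitters:
            continue  # a transmitting node is not silent, so it cannot receive
        sending_neighbors = adj[v] & transmitters
        if len(sending_neighbors) == 1:
            received.add(v)  # exactly one neighbor transmits: success
        # zero transmitting neighbors: silence; two or more: collision
    return received

# Star network: center 0 adjacent to leaves 1, 2, 3.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
# If leaves 1 and 2 both transmit, the center hears only a collision.
assert radio_round(adj, {1, 2}) == set()
# If only leaf 1 transmits, the center receives.
assert radio_round(adj, {1}) == {0}
```

The collision behavior in the star example is exactly what makes broadcast scheduling in radius-2 networks nontrivial.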
CS-TR-88-1195

Report Number: CS-TR-88-1196
Institution: Stanford University, Department of Computer Science
Title: Motion Planning with Uncertainty: The Preimage Backchaining Approach
Author: Latombe, Jean-Claude
Date: March 1988
Abstract: This paper addresses the problem of planning robot motions in the presence of uncertainty. It explores an approach to this problem, known as the preimage backchaining approach. Basically, a preimage is a region in space such that if the robot executes a certain motion command from within this region, it is guaranteed to attain a target and to terminate inside it. Preimage backchaining consists of reasoning backward from a given goal region, by computing preimages of the goal, and then recursively preimages of the preimages, until some preimages include the initial region where the robot is known at planning time to be before executing the motion plan.
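The backward-chaining loop can be illustrated on a toy discrete model of our own devising (the report's formulation is geometric; `preimage`, `backchain`, and `step` are hypothetical names):

```python
def preimage(target, states, step):
    """Toy discrete model: step(s) returns the set of states the robot
    might reach when executing a motion command from s (uncertainty makes
    it a set).  The preimage is every state from which the command is
    guaranteed to terminate inside the target."""
    return {s for s in states if step(s) <= target}

def backchain(goal, initial, states, step, max_depth=10):
    """Compute preimages of the goal, then preimages of preimages,
    until the accumulated region covers the initial region."""
    region = set(goal)
    for depth in range(max_depth):
        if initial <= region:
            return depth  # a plan of this many commands suffices
        region |= preimage(region, states, step)
    return None  # no guaranteed plan found within max_depth commands

# 1-D example: states 0..5; the command moves right one or two cells.
states = set(range(6))
step = lambda s: {min(s + 1, 5), min(s + 2, 5)}
# Goal region {4, 5}; the robot starts somewhere in {2, 3}.
assert backchain({4, 5}, {2, 3}, states, step) == 2
```

Two commands suffice: from {2, 3} one move lands in {3, 4, 5}, and from there another move is guaranteed to end in {4, 5}.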
CS-TR-88-1196

Report Number: CS-TR-88-1197
Institution: Stanford University, Department of Computer Science
Title: The VMP Multiprocessor: Initial Experience, Refinements and Performance Evaluation
Author: Cheriton, D. R.
Author: Gupta, A.
Author: Boyle, P. D.
Author: Goosen, H. A.
Date: March 1988
Abstract: VMP is an experimental multiprocessor being developed at Stanford University, suitable for high-performance workstations and server machines. Its primary novelty lies in the use of software management of the processor caches and the design decisions in the cache and bus that make this approach feasible. The design and some uniprocessor trace-driven simulations indicating its performance have been reported previously. In this paper, we present our initial experience with the VMP design based on a running prototype, as well as various refinements to the design. Performance evaluation is based both on measurement of actual execution and on trace-driven simulation of multiprocessor executions from the Mach operating system.
CS-TR-88-1197

Report Number: CS-TR-88-1199
Institution: Stanford University, Department of Computer Science
Title: Projections of Vector Addition System Reachability Sets are Semilinear
Author: Buning, H. K.
Author: Lettman, T.
Author: Mayr, E. W.
Date: March 1988
Abstract: The reachability sets of Vector Addition Systems of dimension six or more can be non-semilinear. This may be one reason why the inclusion problem (as well as the equality problem) for reachability sets of vector addition systems in general is undecidable, even though the reachability problem itself is known to be decidable. We show that any one-dimensional projection of the reachability set of an arbitrary vector addition system is semilinear, and hence, "simple".
CS-TR-88-1199

Report Number: CS-TR-88-1200
Institution: Stanford University, Department of Computer Science
Title: Parallel Approximation Algorithms for Bin Packing
Author: Anderson, R. J.
Author: Mayr, E. W.
Author: Warmuth, M. K.
Date: March 1988
Abstract: We study the parallel complexity of polynomial heuristics for the bin packing problem. We show that some well-known (and simple) methods like first-fit-decreasing are P-complete, and hence it is very unlikely that they can be efficiently parallelized. On the other hand, we exhibit an optimal NC algorithm that achieves the same performance bound as does FFD. Finally, we discuss parallelization of polynomial approximation algorithms for bin packing based on discretization.
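For reference, the first-fit-decreasing heuristic discussed above can be sketched in a few lines (a standard textbook rendering, not code from the report):

```python
def first_fit_decreasing(items, capacity):
    """First-fit-decreasing: sort items by non-increasing size, then place
    each item into the first bin it fits in, opening a new bin if needed.
    Each placement depends on all earlier ones -- the sequential chain of
    decisions that underlies the P-completeness result."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no existing bin fits: open a new one
    return bins

bins = first_fit_decreasing([6, 5, 4, 3, 2], 10)
assert len(bins) == 2  # FFD packs this instance optimally: [6,4] and [5,3,2]
```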
CS-TR-88-1200

Report Number: CS-TR-88-1203
Institution: Stanford University, Department of Computer Science
Title: On the Semantics of Temporal Logic Programming
Author: Baudinet, Marianne
Date: June 1988
Abstract: Recently, several researchers have suggested directly exploiting in a programming language temporal logic's ability to describe changing worlds. The resulting languages are quite diverse. They are based on different subsets of temporal logic and use a variety of execution mechanisms. So far, little attention has been paid to the formal semantics of these languages. In this paper, we study the semantics of an instance of temporal logic programming, namely, the TEMPLOG language defined by Abadi and Manna. We first give declarative semantics for TEMPLOG, in model-theoretic and in fixpoint terms. Then, we study its operational semantics and prove soundness and completeness theorems for the temporal-resolution proof method underlying its execution mechanism.
CS-TR-88-1203

Report Number: CS-TR-88-1206
Institution: Stanford University, Department of Computer Science
Title: A Parallel Lisp Simulator
Author: Weening, Joseph S.
Date: May 1988
Abstract: CSIM is a simulator for parallel Lisp, based on a continuation-passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.
CS-TR-88-1206

Report Number: CS-TR-88-1208
Institution: Stanford University, Department of Computer Science
Title: Toetjes
Author: Feder, Tomas
Date: June 1988
Abstract: A number is secretly chosen from the interval [0, 1], and n players try to guess this number. When the secret number is revealed, the player with the closest guess wins. We describe an optimal strategy for a version of this game.
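The game's mechanics can be sketched as follows; the tie-breaking rule is our assumption, since the abstract does not specify one:

```python
def winner(secret, guesses):
    """Return the index of the player whose guess is closest to the secret
    number in [0, 1].  Ties are broken in favor of the lower index -- an
    assumption for illustration; the report's exact rule is not stated
    in the abstract."""
    return min(range(len(guesses)), key=lambda i: abs(guesses[i] - secret))

# Three players have committed their guesses; two possible secrets:
assert winner(0.30, [0.1, 0.5, 0.28]) == 2
assert winner(0.90, [0.1, 0.5, 0.28]) == 1
```

A player's strategy is a choice of guess given the others' behavior; the report derives an optimal strategy for a version of this game.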
CS-TR-88-1208

Report Number: CS-TR-88-1209
Institution: Stanford University, Department of Computer Science
Title: Combinatorial Algorithms for the Generalized Circulation Problem
Author: Goldberg, A. V.
Author: Plotkin, S. A.
Author: Tardos, E.
Date: June 1988
Abstract: We consider a generalization of the maximum flow problem in which the amounts of flow entering and leaving an arc are linearly related. More precisely, if x(e) units of flow enter an arc e, x(e)·gamma(e) units arrive at the other end. For instance, nodes of the graph can correspond to different currencies, with the multipliers being the exchange rates. We require conservation of flow at every node except a given source node. The goal is to maximize the amount of flow excess at the source. This problem is a special case of linear programming, and therefore can be solved in polynomial time. In this paper we present the first polynomial time combinatorial algorithms for this problem. The algorithms are simple and intuitive.
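The flow model can be made concrete with a small sketch (our own illustration; the helper `excess` and the currency values are invented, not from the report):

```python
def excess(nodes, arcs, flow, source):
    """Check generalized-flow conservation and return the source's excess.
    arcs: {(u, v): gamma} gives the gain factor of each arc; flow:
    {(u, v): x} gives the units entering each arc.  x(e) units entering
    arc e = (u, v) leave u, and x(e) * gamma(e) units arrive at v."""
    balance = {v: 0.0 for v in nodes}
    for (u, v), gamma in arcs.items():
        x = flow.get((u, v), 0.0)
        balance[u] -= x           # flow leaving u
        balance[v] += x * gamma   # scaled flow arriving at v
    for v in nodes:
        if v != source:
            assert abs(balance[v]) < 1e-9, "conservation violated at " + str(v)
    return balance[source]

# Currency example: 1 unit of s-money buys 2 units of a-money;
# each unit of a-money buys back only 0.3 units of s-money.
arcs = {("s", "a"): 2.0, ("a", "s"): 0.3}
flow = {("s", "a"): 1.0, ("a", "s"): 2.0}
# 1 unit leaves s, 2 arrive at a, all 2 are sent back, 0.6 arrives at s:
# the round trip loses 0.4, so the excess at s is negative.
assert abs(excess(["s", "a"], arcs, flow, "s") - (-0.4)) < 1e-9
```

A profitable flow would exploit a cycle whose gain factors multiply to more than 1, which is exactly the arbitrage intuition behind the currency interpretation.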
CS-TR-88-1209

Report Number: CS-TR-88-1211
Institution: Stanford University, Department of Computer Science
Title: Sublinear-Time Parallel Algorithms
Author: Goldberg, A. V.
Author: Plotkin, S. A.
Author: Vaidya, P. M.
Date: June 1988
Abstract: This paper presents the first sublinear-time deterministic parallel algorithms for bipartite matching and several related problems, including maximal node-disjoint paths, depth-first search, and flows in zero-one networks. Our results are based on a better understanding of the combinatorial structure of the above problems, which leads to new algorithmic techniques. In particular, we show how to use maximal matching to extend, in parallel, a current set of node-disjoint paths and how to take advantage of the parallelism that arises when a large number of nodes are "active" during an execution of a push/relabel network flow algorithm. We also show how to apply our techniques to design parallel algorithms for the weighted versions of the above problems. In particular, we present sublinear-time deterministic parallel algorithms for finding a minimum-weight bipartite matching and for finding a minimum-cost flow in a network with zero-one capacities, if the weights are polynomially bounded integers.
CS-TR-88-1211

Report Number: CS-TR-88-1210
Institution: Stanford University, Department of Computer Science
Title: String-Functional Semantics for Formal Verification of Synchronous Circuits
Author: Bronstein, Alexandre
Author: Talcott, Carolyn L.
Date: June 1988
Abstract: A new functional semantics is proposed for synchronous circuits, as a basis for reasoning formally about that class of hardware systems. Technically, we define an extensional semantics with monotonic length-preserving functions on finite strings, and an intensional semantics based on functionals on those functions. As support for the semantics we prove the equivalence of the extensional semantics with a simple operational semantics, as well as a characterization of circuits which obey the "every loop is clocked" design rule. Also, we develop the foundations in complete detail both to increase confidence in the theory, and as a prerequisite to its future mechanization.
CS-TR-88-1210

Report Number: CS-TR-88-1214
Institution: Stanford University, Department of Computer Science
Title: Multicast Routing in Internetworks and Extended LANs
Author: Deering, Stephen E.
Date: June 1988
Abstract: Multicasting is used within local-area networks to make distributed applications more robust and more efficient. The growing need to distribute applications across multiple, interconnected networks, and the increasing availability of high-performance, high-capacity switching nodes and networks, lead us to consider providing LAN-style multicasting across an internetwork. In this paper, we propose extensions to two common internetwork routing algorithms -- distance-vector routing and link-state routing -- to support low-delay datagram multicasting. We also suggest modifications to the single-spanning-tree routing algorithm, commonly used by link-layer bridges, to reduce the costs of multicasting in large extended LANs. Finally, we show how different link-layer and network-layer multicast routing algorithms can be combined hierarchically to support multicasting across large, heterogeneous internetworks.
CS-TR-88-1214

Report Number: CS-TR-88-1218
Institution: Stanford University, Department of Computer Science
Title: Square Meshes are not always Optimal
Author: Bar-Noy, Amotz
Author: Peleg, David
Date: August 1988
Abstract: In this paper we consider mesh connected computers with multiple buses, providing broadcast facilities along rows and columns.
CS-TR-88-1218

Report Number: CS-TR-88-1225
Institution: Stanford University, Department of Computer Science
Title: Parallel Approximation Algorithms
Author: Mayr, Ernst W.
Date: September 1988
Abstract: Many problems of great practical importance are hard to solve computationally, at least if exact solutions are required. We survey a number of (NP- or P-complete) problems for which fast parallel approximation algorithms are known: the 0-1 knapsack problem, bin packing, the minimal makespan problem, the list scheduling problem, greedy scheduling, and the high-density subgraph problem. Algorithms for these problems are presented, highlighting the underlying techniques and principles, and several types of parallel approximation schemes are exhibited.
CS-TR-88-1225

Report Number: CS-TR-88-1226
Institution: Stanford University, Department of Computer Science
Author: Hayes-Roth, Barbara
Date: October 1988
Abstract: Contemporary intelligent systems are isolated problem-solvers. They accept particular classes of problems, reason about them, perhaps request additional information, and eventually produce solutions. By contrast, human beings and other intelligent animals continuously adapt to the demands and opportunities presented by a dynamic environment. Adaptation plays a critical role in everyday behaviors, such as conducting a conversation, as well as in sophisticated professional behaviors, such as monitoring critically ill medical patients. To make intelligent systems similarly adaptive, we must augment their reasoning capabilities with capabilities for perception and action. Equally important, we must endow them with an attentional mechanism to allocate their limited computational resources among competing perceptions, actions, and cognitions, in real time. In this paper, we discuss functional objectives for "adaptive intelligent systems," an architecture designed to achieve those objectives, and our continuing study of both objectives and architecture in the context of particular tasks.
CS-TR-88-1226

Report Number: CS-TR-88-1227
Institution: Stanford University, Department of Computer Science
Title: Finding Minimum-Cost Flows by Double-Scaling
Author: Ahuja, R. K.
Author: Goldberg, A. V.
Author: Orlin, J. B.
Author: Tarjan, R. E.
Date: October 1988
Abstract: Several researchers have recently developed new techniques that give fast algorithms for the minimum-cost flow problem. In this paper we combine several of these techniques to yield an algorithm running in O(nm log log U log(nC)) time on networks with n vertices, m edges, maximum arc capacity U, and maximum arc cost magnitude C. The major techniques used are the capacity-scaling approach of Edmonds and Karp, the excess-scaling approach of Ahuja and Orlin, the cost-scaling approach of Goldberg and Tarjan, and the dynamic tree data structure of Sleator and Tarjan. For nonsparse graphs with large maximum arc capacity, we obtain a similar but slightly better bound. We also obtain a slightly better bound for the (noncapacitated) transportation problem. In addition, we discuss a capacity-bounding approach to the minimum-cost flow problem.
CS-TR-88-1227

Report Number: CS-TR-88-1228
Institution: Stanford University, Department of Computer Science
Title: A Parallel Algorithm for Finding a Blocking Flow in an Acyclic Network
Author: Goldberg, A. V.
Author: Tarjan, R. E.
Date: November 1988
Abstract: We propose a simple parallel algorithm for finding a blocking flow in an acyclic network. On an n-vertex, m-arc network, our algorithm runs in O(n log n) time and O(nm) space using an m-processor EREW PRAM. A consequence of our algorithm is an O(n^2 (log n) log(nC))-time, O(nm)-space, m-processor algorithm for the minimum-cost circulation problem, on a network with integer arc capacities of magnitude at most C.
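For context, a blocking flow saturates at least one arc on every source-sink path. The sequential sketch below only illustrates the definition; it is our own code and not the report's parallel algorithm (names such as `blocking_flow` are invented):

```python
def blocking_flow(n, cap, s, t):
    """Sequential sketch: repeatedly push flow along s-t paths with
    residual capacity in an acyclic network, deleting saturated arcs and
    dead ends, until no augmenting path remains."""
    cap = {e: c for e, c in cap.items() if c > 0}
    adj = {v: [] for v in range(n)}
    for (u, v) in cap:
        adj[u].append(v)
    flow = {}

    def dfs(u, pushed):
        if u == t:
            return pushed
        while adj[u]:
            v = adj[u][-1]
            got = dfs(v, min(pushed, cap[(u, v)]))
            if got > 0:
                cap[(u, v)] -= got
                flow[(u, v)] = flow.get((u, v), 0) + got
                if cap[(u, v)] == 0:
                    adj[u].pop()  # arc saturated: delete it
                return got
            adj[u].pop()  # dead end below v: delete the arc
        return 0

    total = 0
    while True:
        got = dfs(s, float("inf"))
        if got == 0:
            return total, flow
        total += got

# Diamond network: afterwards every 0-3 path contains a saturated arc.
cap = {(0, 1): 2, (0, 2): 3, (1, 3): 3, (2, 3): 1}
total, flow = blocking_flow(4, cap, 0, 3)
assert total == 3
```

Because saturated arcs and dead ends are never revisited, the total work is O(nm) in the worst case; the report's contribution is doing this with m processors in O(n log n) time.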
CS-TR-88-1228

Report Number: CS-TR-88-1229
Institution: Stanford University, Department of Computer Science
Title: Distributing Intelligence within an Individual
Author: Hayes-Roth, B.
Author: Hewett, M.
Author: Washington, R.
Author: Hewett, R.
Author: Seiver, A.
Date: November 1988
Abstract: Distributed artificial intelligence (DAI) refers to systems in which decentralized, cooperative agents work synergistically to perform a task. Alternative specifications of DAI resemble particular biological or social systems, such as teams, contract nets, or societies. Our DAI model resembles a single individual comprising multiple loosely coupled agents for perception, action, and cognition functions. We demonstrate the DAI individual in the Guardian system for intensive-care monitoring and argue that it is more appropriate than the prevalent team model for a large class of similar applications.
CS-TR-88-1229

Report Number: CS-TR-88-1230
Institution: Stanford University, Department of Computer Science
Title: Specification and Verification of Concurrent Programs by For-All Automata
Author: Manna, Zohar
Author: Pnueli, Amir
Date: November 1988
Abstract: For-all automata are non-deterministic finite-state automata over infinite sequences. They differ from conventional automata in that a sequence is accepted if all runs of the automaton over the sequence are accepting. These automata are suggested as a formalism for the specification and verification of temporal properties of concurrent programs. It is shown that they are as expressive as extended temporal logic (ETL), and, in some cases, provide a more compact representation of properties than temporal logic. A structured diagram notation is suggested for the graphical representation of these automata. A single sound and complete proof rule is presented for proving that all computations of a program have the property specified by a for-all automaton.
CS-TR-88-1230

Report Number: CS-TR-88-1233
Institution: Stanford University, Department of Computer Science
Title: A Procedural Semantics for Well Founded Negation in Logic Programs
Author: Ross, Kenneth A.
Date: November 1988
Abstract: We introduce global SLS-resolution, a procedural semantics for well-founded negation as defined by Van Gelder, Ross and Schlipf. Global SLS-resolution extends Przymusinski's SLS-resolution, and may be applied to all programs, whether locally stratified or not. Global SLS-resolution is defined in terms of global trees, a new data structure representing the dependence of goals on derived negative subgoals. We prove that global SLS-resolution is sound with respect to the well-founded semantics, and complete for non-floundering queries.
CS-TR-88-1233

Report Number: CS-TR-88-1234
Institution: Stanford University, Department of Computer Science
Title: The Average Number of Stable Matchings
Author: Pittel, Boris
Date: December 1988
Abstract: The probable behavior of an instance of size n of the stable marriage problem, chosen uniformly at random, is studied.
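For small instances, the number of stable matchings can be enumerated directly. The brute-force sketch below is our own illustration of the quantity the report studies, not its probabilistic analysis:

```python
from itertools import permutations

def count_stable(men_pref, women_pref):
    """Count the stable matchings of a small stable-marriage instance.
    men_pref[m] and women_pref[w] list partners in decreasing preference.
    A matching is stable if no man and woman both strictly prefer each
    other to their assigned partners (no blocking pair)."""
    n = len(men_pref)
    rank_m = [{w: r for r, w in enumerate(p)} for p in men_pref]
    rank_w = [{m: r for r, m in enumerate(p)} for p in women_pref]
    count = 0
    for wife_of in permutations(range(n)):  # wife_of[m] = woman matched to m
        husband_of = {w: m for m, w in enumerate(wife_of)}
        stable = all(
            not (rank_m[m][w] < rank_m[m][wife_of[m]] and
                 rank_w[w][m] < rank_w[w][husband_of[w]])
            for m in range(n) for w in range(n) if w != wife_of[m]
        )
        count += stable
    return count

# Classic 2x2 instance: the men-optimal and women-optimal matchings differ,
# so there are exactly two stable matchings.
men = [[0, 1], [1, 0]]
women = [[1, 0], [0, 1]]
assert count_stable(men, women) == 2
```

The report asks how this count behaves on average when an instance of size n is drawn uniformly at random.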
CS-TR-88-1234

Report Number: CS-TR-88-1236
Institution: Stanford University, Department of Computer Science
Title: Time for Action: On the Relation between Time, Knowledge, and Action
Author: Shoham, Yoav
Date: December 1988
Abstract: We consider the role played by the concept of action in AI. We first briefly summarize the advantages and limitations of past approaches to taking the concept as primitive, as embodied in the situation calculus and dynamic logic. We also briefly summarize the alternative, namely adopting a temporal framework, and point out its complementary advantages and limitations. We then propose a framework that retains the advantages of both viewpoints, and that ties the notion of action closely to that of knowledge. Specifically, we propose starting with the notion of time lines, and defining the notion of action as the ability to make certain choices among sets of time lines. Our definitions shed new light on the connection between time, action, knowledge and ignorance, choice-making, feasibility, and simultaneous reasoning about the same events at different levels of detail.
CS-TR-88-1236

Report Number: CS-TR-88-1237
Institution: Stanford University, Department of Computer Science
Title: Belief as Defeasible Knowledge
Author: Shoham, Yoav
Author: Moses, Yoram
Date: December 1988
Abstract: We investigate the relation between the notions of knowledge and belief. Contrary to the well-known slogan about knowledge being "justified, true belief," we propose that belief be viewed as defeasible knowledge. Specifically, we offer a definition of belief as knowledge-relative-to-assumptions, and tie the definition to the notion of nonmonotonicity. Our definition has several advantages. First, it is short. Second, we do not need to add anything to the logic of knowledge: the right properties of belief fall out of the definition and the properties of knowledge. Third, the connection between knowledge and belief is derived from one fundamental principle, which is more enlightening than a collection of arbitrary-seeming axioms relating the two notions.
CS-TR-88-1237

Report Number: CS-TR-88-1239
Institution: Stanford University, Department of Computer Science
Title: Sorting, Minimal Feedback Sets and Hamilton Paths in Tournaments
Author: Bar-Noy, Amotz
Author: Naor, Joseph
Date: December 1988
Abstract: We present a general method for translating sorting-by-comparisons algorithms to algorithms that compute a Hamilton path in a tournament. The translation is based on the relation between minimal feedback sets and Hamilton paths in tournaments. We prove that there is a one-to-one correspondence between the set of minimal feedback sets and the set of Hamilton paths. In the comparison model, all the tradeoffs for sorting between the number of processors and the number of rounds hold when a Hamilton path is computed. For the CRCW model, with O(n) processors, we show the following: (i) two paths in a tournament can be merged in O(log log n) time (Valiant's algorithm); (ii) a Hamilton path can be computed in O(log n) time (Cole's algorithm). This improves a previous algorithm for computing a Hamilton path.
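The sorting-to-Hamilton-path translation can be illustrated by the classical insertion construction, which mirrors insertion sort with arc directions playing the role of comparisons (a standard textbook construction, not the report's parallel algorithm):

```python
def hamilton_path(n, beats):
    """Build a Hamilton path in a tournament by insertion: place each
    vertex at the first position where the arcs point the right way.
    beats(u, v) is True iff the arc between u and v goes from u to v;
    in a tournament exactly one of beats(u, v), beats(v, u) holds."""
    path = []
    for v in range(n):
        for i in range(len(path)):
            if beats(v, path[i]):
                # path[i-1] beats v (v failed to insert earlier) and
                # v beats path[i], so the path property is preserved.
                path.insert(i, v)
                break
        else:
            path.append(v)  # every vertex on the path beats v
    return path

# Non-transitive (cyclic) tournament on 3 vertices: 0 -> 1 -> 2 -> 0.
beats = lambda u, v: (u, v) in {(0, 1), (1, 2), (2, 0)}
path = hamilton_path(3, beats)
assert all(beats(path[i], path[i + 1]) for i in range(2))
```

The same invariant argument shows every tournament has a Hamilton path, and the number of comparisons matches that of insertion sort; the report's point is that faster sorting algorithms translate as well.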
CS-TR-88-1239

Report Number: CS-TR-88-1240
Institution: Stanford University, Department of Computer Science
Title: On Separating the EREW and CREW PRAM Models
Author: Gafni, E.
Author: Naor, J.
Author: Ragde, P.
Date: December 1988
Abstract: In [6], Snir proposed the Selection Problem (searching in a sorted table) to show that the CREW PRAM is strictly more powerful than the EREW PRAM. This problem defines a partial function, that is, one that is defined only on a restricted set of inputs. Recognizing whether an arbitrary input belongs to this restricted set is hard for both CREW and EREW PRAMs. The existence of a total function that exhibits the power of the CREW model over the EREW model was an open problem. Here we solve this problem by generalizing the Selection problem to a Decision Tree problem which is defined on a full domain and to which Snir's lower bound applies.
CS-TR-88-1240

Report Number: CS-TR-89-1244
Institution: Stanford University, Department of Computer Science
Title: Software Performance on Nonlinear Least-Squares Problems
Author: Fraley, Christina
Date: January 1989
Abstract: This paper presents numerical results for a large and varied set of problems using software that is widely available and has undergone extensive testing. The algorithms implemented in this software include Newton-based linesearch and trust-region methods for unconstrained optimization, as well as Gauss-Newton, Levenberg-Marquardt, and special quasi-Newton methods for nonlinear least squares. Rather than give a critical assessment of the software itself, our original purpose was to use the best available software to compare the underlying algorithms, to identify, for each method, classes of problems on which its performance is either very good or very poor, and to provide benchmarks for future work in nonlinear least squares and unconstrained optimization. The variability in the results made it impossible to meet either of the first two goals; however, the results are significant as a step toward explaining why these aims are so difficult to accomplish.
CS-TR-89-1244

Report Number: CS-TR-89-1248
Institution: Stanford University, Department of Computer Science
Title: Efficiency of the network simplex algorithm for the maximum flow problem
Author: Goldberg, Andrew V.
Author: Tarjan, Robert E.
Date: February 1989
Abstract: Goldfarb and Hao have proposed a network simplex algorithm that will solve a maximum flow problem on an n-vertex, m-arc network in at most nm pivots and O(n2m) time. In this paper we describe how to implement their algorithm to run in O(nm log n) time by using an extension of the dynamic tree data structure of Sleator and Tarjan. This bound is less than a logarithmic factor larger than that of any other known algorithm for the problem.
CS-TR-89-1248

Report Number: CS-TR-89-1250
Institution: Stanford University, Department of Computer Science
Title: A sound and complete axiomatization of operational equivalence between programs with memory
Author: Mason, Ian
Author: Talcott, Carolyn
Date: March 1989
Abstract: In this paper we present a formal system for deriving assertions about programs with memory. The assertions we consider are of the following three forms: (i) $e$ diverges (i.e., fails to reduce to a value), written $\uparrow e$; (ii) $e_0$ and $e_1$ reduce to the same value and have exactly the same effect on memory, written $e_0 \sim e_1$; and (iii) $e_0$ and $e_1$ reduce to the same value and have the same effect on memory up to production of garbage (are strongly isomorphic), written $e_0 \simeq e_1$. The $e$, $e_j$ are expressions of a first-order Scheme- or Lisp-like language with the data operations atom, eq, car, cdr, cons, setcar, setcdr, the control primitives let and if, and recursive definition of function symbols.
CS-TR-89-1250

Report Number: CS-TR-89-1255
Institution: Stanford University, Department of Computer Science
Title: METAFONTware
Author: Knuth, Donald E.
Author: Rokicki, Tomas G.
Author: Samuel, Arthur L.
Date: May 1989
Abstract: This report contains the complete WEB documentation for four utility programs that are often used in conjunction with METAFONT: GFtype, GFtoPK, GFtoDVI, and MFT. This report is analogous to TeXware, published in 1986 (STAN-CS-86-1097). METAFONTware completes the set.
CS-TR-89-1255

Report Number: CS-TR-89-1259
Institution: Stanford University, Department of Computer Science
Title: Interior-point methods in parallel computation
Author: Goldberg, Andrew V.
Author: Plotkin, Serge A.
Author: Shmoys, David B.
Author: Tardos, Eva
Date: May 1989
Abstract: In this paper we use interior-point methods for linear programming, developed in the context of sequential computation, to obtain a parallel algorithm for the bipartite matching problem. Our algorithm runs in $\tilde{O}(\sqrt{m})$ time. Our results extend to the weighted bipartite matching problem and to the zero-one minimum-cost flow problem, yielding $\tilde{O}(\sqrt{m} \log C)$ algorithms. This improves previous bounds on these problems and illustrates the importance of interior-point methods in the context of parallel algorithm design.
CS-TR-89-1259

Report Number: CS-TR-89-1261
Institution: Stanford University, Department of Computer Science
Title: Pipelined parallel computations, and sorting on a pipelined hypercube.
Author: Mayr, Ernst W.
Author: Plaxton, C. Greg
Date: May 1989
Abstract: This paper brings together a number of previously known techniques in order to obtain practical and efficient implementations of the prefix operation for the complete binary tree, hypercube and shuffle exchange families of networks. For each of these networks, we also provide a "pipelined" scheme for performing k prefix operations in O(k + log p) time on p processors. This implies a similar pipelining result for the "data distribution" operation of Ullman [16]. The data distribution primitive leads to a simplified implementation of the optimal merging algorithm of Varman and Doshi, which runs on a pipelined model of the hypercube [17]. Finally, a pipelined version of the multi-way merge sort of Nassimi and Sahni [10], running on the pipelined hypercube model, is described. Given p processors and n < p log p values to be sorted, the running time of the pipelined algorithm is O(log^2 p / log((p log p)/n)). Note that for the interesting case n = p this yields a running time of O(log^2 p / log log p), which is asymptotically faster than Batcher's bitonic sort [3].
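A single prefix (scan) operation takes O(log p) synchronous rounds; the sketch below sequentially simulates those data-parallel rounds (a standard scan scheme written for illustration, not the report's network implementation):

```python
def prefix_sums(x, op=lambda a, b: a + b):
    """Data-parallel prefix (scan) sketch: ceil(log2 p) rounds, where in
    round d every position i >= d combines its value with the value held
    d positions earlier, and d doubles each round.  Each round corresponds
    to one parallel communication step on the networks discussed above."""
    x = list(x)
    d = 1
    while d < len(x):
        # one synchronous round: all reads happen before all writes
        x = [x[i] if i < d else op(x[i - d], x[i]) for i in range(len(x))]
        d *= 2
    return x

assert prefix_sums([1, 2, 3, 4, 5]) == [1, 3, 6, 10, 15]
```

Pipelining, as in the report, overlaps the rounds of k such scans so that the total time is O(k + log p) rather than O(k log p).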
CS-TR-89-1261
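
To make the prefix operation concrete, here is a sequential simulation of the classic hypercube prefix-sum (scan) scheme, in which each node keeps a running prefix and a running subcube total and exchanges totals across one dimension per round. This is a sketch of the standard technique, not the pipelined implementation of the report.

```python
def hypercube_prefix(values):
    """Simulate the classic prefix-sum (scan) algorithm on a p-node
    hypercube, p a power of two; O(log p) exchange rounds."""
    p = len(values)
    assert p & (p - 1) == 0, "p must be a power of two"
    prefix = list(values)   # running inclusive prefix at each node
    total = list(values)    # running subcube total at each node
    d = 1
    while d < p:
        new_prefix, new_total = prefix[:], total[:]
        for i in range(p):
            partner = i ^ d            # neighbor across dimension d
            new_total[i] = total[i] + total[partner]
            if i & d:                  # partner lies in the lower half
                new_prefix[i] = prefix[i] + total[partner]
        prefix, total = new_prefix, new_total
        d <<= 1
    return prefix
```

Each of the log p rounds does one nearest-neighbor exchange per node, which is what makes the operation cheap to pipeline.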

Report Number: CS-TR-89-1264
Institution: Stanford University, Department of Computer Science
Title: Chebyshev polynomials are not always optimal
Author: Fischer, Bernd
Author: Freund, Roland
Date: June 1989
Abstract: We are concerned with the problem of finding, among all polynomials of degree at most n and normalized to be 1 at c, the one with minimal uniform norm on Epsilon. Here, Epsilon is a given ellipse with both foci on the real axis and c is a given real point not contained in Epsilon. Problems of this type arise in certain iterative matrix computations, and, in this context, it is generally believed and widely referenced that suitably normalized Chebyshev polynomials are optimal for such constrained approximation problems. In this note, we show that this is not true in general. Moreover, we derive sufficient conditions which guarantee that Chebyshev polynomials are optimal. Also, some numerical examples are presented.
CS-TR-89-1264
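
For concreteness, the constrained approximation problem of the abstract can be stated as follows (a sketch, with the ellipse normalized so its foci are at +1 and -1, and with T_n denoting the degree-n Chebyshev polynomial of the first kind):

```latex
\min_{\substack{p \in \Pi_n \\ p(c) = 1}} \; \max_{z \in \mathcal{E}} \, |p(z)|,
\qquad \text{with the widely used candidate} \qquad
p(z) = \frac{T_n(z)}{T_n(c)} .
```

The report's point is that this normalized Chebyshev candidate, though commonly assumed optimal, need not attain the minimum.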

Report Number: CS-TR-89-1266
Institution: Stanford University, Department of Computer Science
Title: Multi-level shared caching techniques for scalability in VMP-MC
Author: Cheriton, David R.
Author: Goosen, Hendrik A.
Author: Boyle, Patrick D.
Date: May 1989
Abstract: The problem of building a scalable shared memory multiprocessor can be reduced to that of building a scalable memory hierarchy, assuming interprocessor communication is handled by the memory system. In this paper, we describe the VMP-MC design, a distributed parallel multi-computer based on the VMP multiprocessor design, that is intended to provide a set of building blocks for configuring machines from one to several thousand processors. VMP-MC uses a memory hierarchy based on shared caches, ranging from on-chip caches to board-level caches connected by busses to, at the bottom, a high-speed fiber optic ring. In addition to describing the building block components of this architecture, we identify the key performance issues associated with the design and provide performance evaluation of these issues using trace-driven simulation and measurements from the VMP.
CS-TR-89-1266

Report Number: CS-TR-89-1267
Institution: Stanford University, Department of Computer Science
Title: A really temporal logic.
Author: Alur, Rajeev
Author: Henzinger, Thomas A.
Date: July 1989
Abstract: We introduce a real-time temporal logic for the specification of reactive systems. The novel feature of our logic, TPTL, is the adoption of temporal operators as quantifiers over time variables; every modality binds a variable to the time(s) it refers to. TPTL is demonstrated to be both a natural specification language as well as a suitable formalism for verification and synthesis. We present a tableau-based decision procedure and model-checking algorithm for TPTL. Several generalizations of TPTL are shown to be highly undecidable.
CS-TR-89-1267
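
As an illustration of the freeze-quantification idea, in which every modality binds a variable to the time it refers to, a bounded-response requirement might be written as follows (an example in the spirit of the logic; the exact notation here is assumed, not quoted from the report):

```latex
\Box x.\,\bigl(\, p \;\rightarrow\; \Diamond y.\,( q \,\wedge\, y \leq x + 5 )\,\bigr)
```

Read: whenever p holds at some time x, q must hold at a later time y that is within 5 time units of x.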

Report Number: CS-TR-89-1268
Institution: Stanford University, Department of Computer Science
Title: Addition machines
Author: Floyd, Robert W.
Author: Knuth, Donald E.
Date: July 1989
Abstract: An addition machine is a computing device with a finite number of registers, limited to the following six types of operations:
  read x        {input to register x}
  x <-- y       {copy register y to register x}
  x <-- x + y   {add register y to register x}
  x <-- x - y   {subtract register y from register x}
  if x >= y     {compare register x to register y}
  write x       {output from register x}
The register contents are assumed to belong to a given set A, which is an additive subgroup of the real numbers. If A is the set of all integers, we say the device is an integer addition machine; if A is the set of all real numbers, we say the device is a real addition machine. We will consider how efficiently an integer addition machine can do operations such as multiplication, division, greatest common divisor, exponentiation, and sorting. We will also show that any addition machine with at least six registers can compute the ternary operation x[y/z] with reasonable efficiency, given x, y, z in A with z not equal to 0.
CS-TR-89-1268
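
To illustrate the model, here is a gcd computation restricted to the addition-machine repertoire above: copies, additions, subtractions, and >= comparisons only. This naive subtraction-based sketch is for illustration; the report itself develops far more efficient methods.

```python
def addition_machine_gcd(x, y):
    """Compute gcd(x, y) for nonnegative integers using only
    operations an addition machine permits: copy, subtract,
    and >= comparison (no division, mod, or multiplication)."""
    # order the registers so a >= b, using the comparison primitive
    if x >= y:
        a, b = x, y
    else:
        a, b = y, x
    while b > 0:          # b > 0 is expressible as: not (0 >= b)
        while a >= b:
            a = a - b     # repeated x <-- x - y
        a, b = b, a       # register copies: x <-- y
    return a
```

The running time of this naive version is poor when x/y is large; the report shows how an addition machine can do much better (and handle division and exponentiation) with a constant number of registers.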

Report Number: CS-TR-89-1269
Institution: Stanford University, Department of Computer Science
Title: A programming and problem solving seminar
Author: Ross, Kenneth A.
Author: Knuth, Donald E.
Date: July 1989
Abstract: This report contains edited transcripts of the discussions held in Stanford's Computer Science problem solving course, CS304, during winter quarter 1989. Since the topics span a large range of ideas in computer science, and since most of the important research paradigms and programming paradigms were touched on during the discussions, these notes may be of interest to graduate students of computer science at other universities, as well as to their professors and to professional people in the "real world." The present report is the eighth in a series of such transcripts, continuing the tradition established in STAN-CS-77-606 (Michael J. Clancy, 1977), STAN-CS-79-707 (Chris Van Wyk, 1979), STAN-CS-81-863 (Allan A. Miller, 1981), STAN-CS-83-989 (Joseph S. Weening, 1983), STAN-CS-83-990 (John D. Hobby, 1983), STAN-CS-85-1055 (Ramsey W. Haddad, 1985) and STAN-CS-87-1154 (Tomas G. Rokicki, 1987).
CS-TR-89-1269

Report Number: CS-TR-89-1273
Institution: Stanford University, Department of Computer Science
Title: Sirpent[TM]: a high-performance internetworking approach
Author: Cheriton, David R.
Date: July 1989
Abstract: A clear target for computer communication technology is to support a high-performance global internetwork. Current internetworking approaches use either concatenated virtual circuits, as in X.75, or a "universal" internetwork datagram, as in the DoD Internet IP protocol and the ISO connectionless network protocol (CLNP). Both approaches have significant disadvantages. This paper describes Sirpent[TM] (Source Internetwork Routing Protocol with Extended Network Transfer), a new approach to an internetwork architecture that makes source routing the basis for interconnection, rather than an option as in IP. Its benefits include simple switching with low per-packet processing and delay, support for accounting and congestion control, and scalability to a global internetwork. It also supports flexible, user-controlled routing such as is required for security, policy-based routing and real-time applications. We also propose a specific internetwork protocol, called VIPER[TM], as a realization of the Sirpent approach.
CS-TR-89-1273

Report Number: CS-TR-89-1275
Institution: Stanford University, Department of Computer Science
Title: A new approach to stable matching problems
Author: Subramanian, Ashok
Date: August 1989
Abstract: We show that Stable Matching problems are the same as problems about stable configurations of X-networks. Consequences include easy proofs of old theorems, a new simple algorithm for finding a stable matching, an understanding of the difference between Stable Marriage and Stable Roommates, NP-completeness of Three-party Stable Marriage, CC-completeness of several Stable Matching problems, and a fast parallel reduction from the Stable Marriage problem to the Assignment problem.
CS-TR-89-1275
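
For background on the problem the abstract addresses, here is the classical Gale-Shapley proposal algorithm for Stable Marriage. This is the textbook algorithm, not the X-network-based algorithm of the report; names and data layout are chosen for illustration.

```python
def stable_marriage(men_prefs, women_prefs):
    """Classical Gale-Shapley proposal algorithm. men_prefs and
    women_prefs map each person to a preference-ordered list of the
    other side. Returns a dict mapping each man to his partner."""
    # rank[w][m] = position of m in w's list (lower = preferred)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)              # men with no current partner
    next_choice = {m: 0 for m in men_prefs}
    fiance = {}                         # woman -> current partner
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1             # m will never propose to w again
        if w not in fiance:
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])      # w trades up; old partner freed
            fiance[w] = m
        else:
            free.append(m)              # w rejects m
    return {m: w for w, m in fiance.items()}
```

The resulting matching is stable: no man and woman both prefer each other to their assigned partners.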

Report Number: CS-TR-89-1276
Institution: Stanford University, Department of Computer Science
Title: On the network complexity of selection
Author: Plaxton, C. Greg
Date: August 1989
Abstract: The selection problem is to determine the kth largest out of a given set of n keys, and its sequential complexity is well known to be linear. Thus, given a p processor parallel machine, it is natural to ask whether or not an O(n/p) selection algorithm can be devised for that machine. For the EREW PRAM, Vishkin has exhibited a straightforward selection algorithm that achieves optimal speedup for n = Omega(p log p log log p) [18]. For the network model, the sorting result of Leighton [12] and the token distribution result of Peleg and Upfal [13] together imply that Vishkin's algorithm can be adapted to run in the same asymptotic time bound on a certain class of bounded degree expander networks. On the other hand, none of the network families currently of practical interest have sufficient expansion to permit an efficient implementation of Vishkin's algorithm. The main result of this paper is an Omega((n/p) log log p + log p) lower bound for selection on any network that satisfies a particular low expansion property. The class of networks satisfying this property includes all of the common network families such as the tree, multi-dimensional mesh, hypercube, butterfly and shuffle exchange. When n/p is sufficiently large (for example, greater than log^2 p on the butterfly, hypercube and shuffle exchange), this result is matched by the upper bound presented in [14].
CS-TR-89-1276

Report Number: CS-TR-89-1278
Institution: Stanford University, Department of Computer Science
Title: The complexity of circuit value and network stability
Author: Mayr, Ernst W.
Author: Subramanian, Ashok
Date: August 1989
Abstract: We develop a method for non-trivially restricting fanout in a circuit. We study the complexity of the Circuit Value problem and a new problem, Network Stability, when fanout is limited. This leads to new classes of problems within P. We conjecture that the new classes are different from P and incomparable to NC. One of these classes, CC, contains several natural complete problems, including Circuit Value for comparator circuits, Lex-first Maximal Matching, and problems related to Stable Marriage and Stable Roommates. When fanout is appropriately limited, we get positive results: a parallel algorithm for Circuit Value that runs in time about the square root of the number of gates, a linear-time sequential algorithm for Network Stability, and logspace reductions between Circuit Value and Network Stability.
CS-TR-89-1278
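
To make the CC-complete Circuit Value problem for comparator circuits concrete: a comparator gate reads two wires and rewrites them with (min, max), which on boolean wires is (AND, OR). A minimal evaluator (an illustrative sketch, not from the report):

```python
def comparator_circuit_value(wires, gates):
    """Evaluate a comparator circuit. wires is the list of input
    values; each gate (i, j) replaces the values on wires i and j
    with (min, max) -- on boolean wires, (AND, OR). Returns the
    final wire values."""
    vals = list(wires)
    for i, j in gates:
        vals[i], vals[j] = min(vals[i], vals[j]), max(vals[i], vals[j])
    return vals
```

Note that each gate's outputs feed exactly the two wires it sits on, which is the restricted-fanout structure that puts this problem in CC rather than (apparently) making it P-complete.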

Report Number: CS-TR-89-1280
Institution: Stanford University, Department of Computer Science
Title: Sticky bits and universality of consensus
Author: Plotkin, Serge A.
Date: August 1989
Abstract: In this paper we consider implementation of atomic wait-free objects in the context of a shared-memory multiprocessor. We introduce a new primitive object, the "Sticky-Bit", and show its universality by proving that any safe implementation of a sequential object can be transformed into a wait-free atomic one using only Sticky Bits and safe registers. The Sticky Bit may be viewed as a memory-oriented version of consensus. In particular, the results of this paper imply "universality of consensus" in the sense that given an algorithm to achieve n-processor consensus, we can transform any safe implementation of a sequential object into a wait-free atomic one using a polynomial number of additional safe bits. The presented results also imply that the Read-Modify-Write (RMW) hierarchy "collapses". More precisely, we show that although an object that supports a 1-bit atomic wait-free RMW is strictly more powerful than a safe register, and an object that supports 3-valued atomic wait-free RMW is strictly more powerful than 1-bit RMW, the 3-valued RMW is universal in the sense that any RMW can be atomically implemented from a 3-valued atomic RMW in a wait-free fashion.
CS-TR-89-1280
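
The sticky-bit semantics can be sketched in a few lines: the first successful write "sticks", and later writes of a different value fail, so competing writers all learn the same winning value. This sequential sketch only illustrates the interface; the wait-free atomic implementation questions are the paper's actual subject.

```python
class StickyBit:
    """Sequential sketch of a sticky bit: a write-once cell that
    acts as a memory-oriented consensus object for its writers."""
    _UNSET = object()   # sentinel for the initial, unwritten state

    def __init__(self):
        self._value = StickyBit._UNSET

    def write(self, v):
        """Try to set the bit to v; return True iff the bit now holds v."""
        if self._value is StickyBit._UNSET:
            self._value = v     # first write wins and sticks
        return self._value == v

    def read(self):
        return None if self._value is StickyBit._UNSET else self._value
```

Every process that calls write and then read observes the same stuck value, which is exactly the agreement property consensus requires.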

Report Number: CS-TR-89-1281
Institution: Stanford University, Department of Computer Science
Title: Load balancing on the hypercube and shuffle-exchange
Author: Plaxton, C. Greg
Date: August 1989
Abstract: Maintaining a balanced load is of fundamental importance on any parallel computer, since a strongly imbalanced load often leads to low processor utilization. This paper considers two load balancing operations: Balance and MultiBalance. The Balance operation corresponds to the token distribution problem considered by Peleg and Upfal [9] for certain expander networks. The MultiBalance operation balances several populations of distinct token types simultaneously. Efficient implementations of these operations will be given for the hypercube and shuffle-exchange, along with tight or near-tight lower bounds.
CS-TR-89-1281
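
A simple way to picture the Balance operation on a hypercube is dimension exchange: in round k, every node averages its load with its neighbor across dimension k, and after log p rounds every node holds the global average. This sketch uses fractional averaging for clarity; the actual token distribution problem moves whole tokens, which is where the harder bounds arise.

```python
def dimension_exchange_balance(load):
    """Balance loads on a p-node hypercube (p a power of two) by
    averaging across one dimension per round. Returns the final
    per-node loads, all equal to the global average."""
    p = len(load)
    assert p & (p - 1) == 0, "p must be a power of two"
    load = [float(x) for x in load]
    d = 1
    while d < p:
        new = load[:]
        for i in range(p):
            new[i] = (load[i] + load[i ^ d]) / 2   # average with neighbor
        load = new
        d <<= 1
    return load
```
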

Report Number: CS-TR-89-1286
Institution: Stanford University, Department of Computer Science
Title: Fast sparse matrix factorization on modern workstations
Author: Rothberg, Edward
Author: Gupta, Anoop
Date: October 1989
Abstract: The performance of workstation-class machines has experienced a dramatic increase in the recent past. Relatively inexpensive machines which offer 14 MIPS and 2 MFLOPS performance are now available, and machines with even higher performance are not far off. One important characteristic of these machines is that they rely on a small amount of high-speed cache memory for their high performance. In this paper, we consider the problem of Cholesky factorization of a large sparse positive definite system of equations on a high performance workstation. We find that the major factor limiting performance is the cost of moving data between memory and the processor. We use two techniques to address this limitation; we decrease the number of memory references and we improve cache behavior to decrease the cost of each reference. When run on benchmarks from the Harwell-Boeing Sparse Matrix Collection, the resulting factorization code is almost three times as fast as SPARSPAK on a DECstation 3100. We believe that the issues brought up in this paper will play an important role in the effective use of high performance workstations on large numerical problems.
CS-TR-89-1286
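
The cache-behavior technique the abstract mentions is commonly realized as blocking (tiling): restructuring loops so the computation works on small tiles that stay resident in cache. The structure is shown below on a dense matrix multiply for simplicity; the report applies such restructuring to sparse Cholesky factorization, and the function and parameter names here are illustrative.

```python
def blocked_matmul(A, B, n, bs=32):
    """Cache-blocked (tiled) n-by-n matrix multiply on flat,
    row-major lists. Each bs-by-bs tile of A, B, and C is reused
    many times while it is hot in cache, cutting memory traffic
    relative to the naive triple loop."""
    C = [0.0] * (n * n)
    for ii in range(0, n, bs):            # tile the i dimension
        for kk in range(0, n, bs):        # tile the k dimension
            for jj in range(0, n, bs):    # tile the j dimension
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i * n + k]  # register-held operand
                        for j in range(jj, min(jj + bs, n)):
                            C[i * n + j] += a * B[k * n + j]
    return C
```

In Python the cache effect is masked by interpreter overhead; the point of the sketch is the loop structure, which is the same one used in compiled factorization kernels.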

Report Number: CS-TR-89-1290
Institution: Stanford University, Department of Computer Science
Title: Reading list for the Qualifying Examination in Artificial Intelligence
Author: Myers, Karen
Author: Subramanian, Devika
Author: Zabih, Ramin
Date: November 1989
Abstract: This report contains the reading list for the Qualifying Examination in Artificial Intelligence. Areas covered include search, representation, reasoning, planning and problem solving, learning, expert systems, vision, robotics, natural language, perspectives and AI programming. An extensive bibliography is also provided.
CS-TR-89-1290

Report Number: CS-TR-89-1296
Institution: Stanford University, Department of Computer Science
Title: Completing the temporal picture
Author: Manna, Zohar
Author: Pnueli, Amir
Date: December 1989
Abstract: The paper presents a relatively complete proof system for proving the validity of temporal properties of reactive programs. The presented proof system improves on previous temporal systems, such as [MP83a] and [MP83b], in that it reduces the validity of program properties to pure assertional reasoning, not involving additional temporal reasoning. The proof system is based on the classification of temporal properties according to the Borel hierarchy, providing an appropriate proof rule for each of the main classes, such as safety, response, and progress properties.
CS-TR-89-1296

Report Number: CS-TR-89-1288
Institution: Stanford University, Department of Computer Science
Title: Programming and proving with function and control abstractions
Author: Talcott, Carolyn
Date: October 1989
Abstract: Rum is an intensional semantic theory of function and control abstractions as computation primitives. It is a mathematical foundation for understanding and improving current practice in symbolic (Lisp-style) computation. The theory provides, in a single context, a variety of semantics ranging from structures and rules for carrying out computations to an interpretation as functions on the computation domain. Properties of powerful programming tools such as functions as values, streams, aspects of object oriented programming, escape mechanisms, and coroutines can be represented naturally. In addition a wide variety of operations on programs can be treated including program transformations which introduce function and control abstractions, compiling morphisms that transform control abstractions into function abstractions, and operations that transform intensional properties of programs into extensional properties. The theory goes beyond a theory of functions computed by programs, providing tools for treating both intensional and extensional properties of programs. This provides operations on programs with meanings to transform as well as meanings to preserve. Applications of this theory include expressing and proving properties of particular programs and of classes of programs and studying mathematical properties of computation mechanisms. Additional applications are the design and implementation of interactive computation systems and the mechanization of reasoning about computation. These notes are based on lectures given at the Western Institute of Computer Science summer program, 31 July - 1 August 1986. Here we focus on programming and proving with function and control abstractions and present a variety of example programs, properties, and techniques for proving these properties.
CS-TR-89-1288